At the Association for Computational Linguistics' Conference on Empirical Methods in Natural Language Processing (EMNLP), researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) presented a new method for training neural networks to provide not only predictions and classifications but also a coherent rationale for each decision. Neural networks are so named because they loosely approximate the structure of the brain: they consist of a large number of nodes that, like neurons, can each perform only simple computations, but that are densely connected into complex networks. In "deep learning," training data is fed to a network's input nodes, which process it and pass the results on to the next layer of nodes; this continues layer by layer as data flows through the network. To make a neural net's decision-making process interpretable, the CSAIL researchers divided the net into two modules with distinct roles. The first module extracts specific segments of text from the training data, and those segments are scored according to their length and their coherence. The second module performs the prediction and classification task.
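The two-module split described above can be sketched in a few lines of code. This is a minimal toy illustration, not the researchers' actual model: the names `generator` and `encoder`, the dot-product scoring, and the averaging classifier are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(embeddings, threshold=0.0):
    """First module: score each token and return a binary mask marking
    which tokens are kept as the rationale (hypothetical scoring:
    a dot product with a random vector standing in for learned weights)."""
    w = rng.normal(size=embeddings.shape[1])
    scores = embeddings @ w
    return (scores > threshold).astype(float)  # 1.0 = token selected

def encoder(embeddings, mask):
    """Second module: predict from the selected tokens only.
    Here a toy linear scorer over the average of the masked tokens."""
    v = rng.normal(size=embeddings.shape[1])
    pooled = (embeddings * mask[:, None]).sum(axis=0) / max(mask.sum(), 1.0)
    return float(pooled @ v)  # sign of the score gives the class

tokens = rng.normal(size=(6, 4))   # 6 tokens, 4-dimensional embeddings
mask = generator(tokens)           # which tokens form the rationale
prediction = encoder(tokens, mask) # prediction based only on the rationale
```

The key design point is that the encoder sees only what the generator selected, so a correct prediction is evidence that the selected segment really does carry the decision-relevant information.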
As such, the data set provides an accurate test of the CSAIL researchers' system. If the first module has extracted a set of phrases, and the second module has correctly associated those phrases with their ratings, then the system has identified the same basis for judgment that a human annotator did. In unpublished work, the researchers are applying this technology to pathology reports on breast biopsies, where the system has learned to extract text explaining the bases for pathologists' diagnoses. They are even using it to analyze patients' mammograms, where the first module extracts regions of images instead of segments of text. A model that can make predictions and also tell you why it made them is an important direction for the field to head in.
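The length-and-coherence scoring of extracted segments mentioned earlier can be illustrated as a simple penalty term: a short, contiguous selection costs less than a long or scattered one. The function below is a hedged sketch; the coefficient values and the function name are illustrative choices, not taken from the paper.

```python
def rationale_penalty(mask, lam_len=0.1, lam_coh=0.2):
    """Penalty that favors short, coherent rationales:
    lam_len * (number of selected tokens)    -> rewards brevity
    lam_coh * (number of on/off transitions) -> rewards contiguity
    `mask` is a list of 0/1 flags, one per token."""
    length = sum(mask)
    transitions = sum(abs(a - b) for a, b in zip(mask, mask[1:]))
    return lam_len * length + lam_coh * transitions

# A contiguous selection is cheaper than a scattered one of equal length:
contiguous = [0, 1, 1, 1, 0, 0]
scattered = [1, 0, 1, 0, 1, 0]
```

Minimizing such a penalty alongside the prediction loss pushes the first module toward selecting compact, readable phrases rather than isolated words sprinkled across the text.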