Monday, November 21, 2016

Artificial-intelligence system surfs web to improve its performance

Artificial intelligence has always been a topic of debate when it comes to how it performs and how it can perform better. One of the new ideas being explored is just how vast the internet is and how helpful it can be. The only problem is that the information available is in plain text, so extracting it for analytical tasks that can help humans see correlations between different things is difficult and time consuming. We all know that when it comes to computer science and machine learning, computation time is very important, and each computation must be efficient in its accuracy and speed. In general, information extraction has always been a challenge in computer science and artificial intelligence, simply because few methods have been created that make it work much better. Only recently have students at the Massachusetts Institute of Technology, along with researchers, taken this corner of machine learning to the next level. We have to realize that most machine-learning systems work by combining training examples provided to them by human annotators. They just look for patterns that match keywords already in memory, meaning they are truly limited in the information they can access because of their initial training. The new idea extends machine learning by allowing the system to assign what the researchers call a "confidence score" to each classification it makes. If this score is too low, the system automatically goes searching the web for texts that might contain the information it is trying to extract.

After this initial process, the system tries to extract all relevant data from the new texts it found through its search query and merges that information with what it extracted in its first attempt. The thing that amazes me is that all of this computing is itself the result of machine learning: the system learns how to search, evaluate the accuracy of its search, judge whether the new information is relevant to the task at hand, and then fuse the two sources together to complete its extraction. The researchers tested this system on data-heavy topics like mass school shootings and food contamination. The system was tasked with extracting things like the names of shooters and schools, and in the case of food contamination, the food type and the type of contamination. It was trained to respond to particular keywords and search terms. In each trial the system was able to pull in about 10 new documents from the web that related to the 300 documents it was initially trained on. After seeing that the system actually works, the researchers compared its performance to that of conventional machine-learning techniques. On each test, the new system built by the MIT students and researchers was able to outperform its older predecessors by almost 10 percent.
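The confidence-score loop described above can be sketched in a few lines. This is a toy illustration only: the `extract`, `web_search`, and threshold logic below are hypothetical stand-ins, not the MIT system's actual components.

```python
# Toy sketch of confidence-driven extraction with a web-search fallback.
# All functions here are illustrative stand-ins, not the real system.

def extract(document, keyword):
    """Return (value, confidence) for a keyword found in a document."""
    for sentence in document.split("."):
        if keyword in sentence:
            # Crude confidence proxy: short sentences mentioning the
            # keyword are treated as more reliable extractions.
            return sentence.strip(), 1.0 / max(len(sentence.split()), 1)
    return None, 0.0

def web_search(query, corpus):
    """Stand-in for a real web search: return corpus docs matching the query."""
    return [doc for doc in corpus if query in doc]

def extract_with_fallback(document, keyword, corpus, threshold=0.1):
    value, confidence = extract(document, keyword)
    if confidence >= threshold:
        return value
    # Confidence too low: pull in new documents, retry, keep the best.
    best_value, best_conf = value, confidence
    for doc in web_search(keyword, corpus):
        v, c = extract(doc, keyword)
        if c > best_conf:
            best_value, best_conf = v, c
    return best_value

corpus = ["The shooter was John Doe.", "Weather report for Boston."]
article = ("A long article with many many many words mentioning "
           "shooter somewhere in a very long sentence indeed.")
result = extract_with_fallback(article, "shooter", corpus)
```

The key design point is that the fallback search is triggered by the system's own uncertainty, not by a human.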
I find this amazing, and it can only get better from here. The researchers are even thinking about feeding millions and millions of articles to this system so that it can perform on an even wider scope. Machine learning just blows my mind every time I hear about it. We are on the brink of allowing machines to do things that we can't do, and faster. The best thing is that we can train them on exactly what we want them to do, so that they perform only the task they are asked for. I can only imagine that this will get even better in the near future, and hopefully it will be for the better of science.

Reference Links:
http://image.slidesharecdn.com/machinelearningfordummies-140401055817-phpapp01/95/machine-learning-for-dummies-4-638.jpg?cb=1396332069
http://news.mit.edu/2016/artificial-intelligence-system-surfs-web-improve-performance-1110
http://news.mit.edu/sites/mit.edu.newsoffice/files/styles/news_article_image_top_slideshow/public/images/2016/MIT-Webaid-Learning_0.jpg?itok=Sli1Qec0

Wednesday, November 16, 2016

Enabling Wireless Virtual Reality Through Programmed Phased Arrays

Virtual reality is the much-discussed technology that renders graphics at a resolution high enough to mimic a real-world environment. The only problem is that VR is limited to some extent because it requires a direct wired connection to hardware powerful enough to process the high-resolution graphics the user is watching. Believe it or not, the reality is that wires suck: you can trip on them, and we are too far into the digital age to be tethered by something that should have been solved a while ago. Well, to no surprise of mine, MIT researchers have developed a way so that wires will hopefully no longer be required. They're calling this functional prototype "MoVR," which will basically allow any person to use any VR headset over a wireless connection; the name even sounds like "Move" and "VR," implying that you can wander around with this innovative technology. Through several tests administered by researchers at MIT's computer science department, they have shown that this technology can enable communication at multiple gigabits per second (several billion bits per second). These tests used high-frequency radio signals called "millimeter waves," which many experts predict will one day power very fast and powerful smartphones running on 5G.

One headset by the name of the HTC Vive was able to utilize this technology, though the researchers have stated that "MoVR" should work with any headset. Just imagine being able to play multiplayer video games with friends all wirelessly, or use any of the many other applications compatible with a VR device. Now you may ask how this tech actually works. The antenna is directional and is what is called a "phased array": an array of many small antennas, each fed a phase-shifted version of the signal, so that the combined beam can be steered toward the desired direction. Below is an image of the "MoVR" antenna, which is about the size of a credit card and can fit almost anywhere.
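The phase shifts that steer such a beam follow from simple geometry: each element must be delayed so its wave arrives in step along the chosen direction. A minimal sketch, using illustrative values (a 60 GHz millimeter-wave carrier and half-wavelength element spacing), not MoVR's actual design parameters:

```python
import math

# Beam steering in a uniform linear phased array: element i gets phase
# i * k * d * sin(theta), where k is the wavenumber, d the element
# spacing, and theta the steering angle. Values are illustrative only.

C = 3e8  # speed of light, m/s

def steering_phases(n_elements, spacing_m, freq_hz, angle_deg):
    """Phase shift (radians) for each element to steer toward angle_deg."""
    wavelength = C / freq_hz
    k = 2 * math.pi / wavelength  # wavenumber
    delta = k * spacing_m * math.sin(math.radians(angle_deg))
    return [i * delta for i in range(n_elements)]

# 4 elements at half-wavelength spacing for a 60 GHz millimeter wave
# (wavelength 5 mm, so spacing 2.5 mm), steered 30 degrees off axis.
phases = steering_phases(4, 0.0025, 60e9, 30)
```

Changing only these phases, in software, re-aims the beam; that is what lets MoVR-style systems redirect the signal when the user moves.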
There are still several complications with this newly built tech. For one, it is very hard to keep a strong signal: the user has to be directly in line with the antenna, and even the slightest blockage, like putting your hand in front of the antenna, can cause signal loss. MoVR has the ability to redirect each signal toward the user and correct it, but this solution is not fully refined and needs more work. So while this is hopefully close to being something that can be bought, the technology is a bit ahead of itself. But the idea is awesome, and it requires extensive programming by computer scientists, using special algorithms so that the beam angle can be corrected automatically. It is a mixture of programming and hardware design that has made this innovation a feasible reality. I am a fan of VR gaming and hope to be able to buy such a piece of tech soon if it becomes available. It is amazing to see where tech is headed and where it is going to be in the near future.


Reference Links:
https://en.wikipedia.org/wiki/Phased_array
http://news.mit.edu/2016/enabling-wireless-virtual-reality-1114
https://en.wikipedia.org/wiki/Virtual_reality
https://i.ytimg.com/vi/6GA5oIy9ONo/maxresdefault.jpg

Wednesday, November 9, 2016

Faster programs, easier programming: New system lets nonexperts optimize programs that run on multiprocessor chips.

It is no surprise that MIT keeps coming up with new and innovative methods to create faster, easier, and more efficient machines and programs. As one of the best engineering schools, with leading engineers, they have a mass amount of technology at their disposal to work magic. This article describes a new system created to let a nonexpert modify and optimize programs that run on multiprocessor chips. Dynamic programming is a technique that gives us exact solutions to problems in several fields, like economics and genomic analysis (the analysis of a genome), among many others. The problem is that adapting this same idea to computers with multiple cores requires a level of programming expertise that practitioners in those fields just don't have. That is why researchers from MIT and Stony Brook University came up with a system that lets a user tell their program exactly what to do in very simple terms. The system then reads these "terms" and creates a version of the program that will actually run on multicore chips. It also guarantees the same results you would get from a single-core processor; the only difference is that it is much faster, because multiple processors share the work instead of one.

Now when you think about this idea, it can be applied to almost anything. The application I will focus on is "rapid search." This method surprisingly utilizes something that we recently learned in our CS150 class: recursion. The program, named "Bellmania," divides a matrix into smaller pieces, performs some operation on one piece, and then outsources the rest to other subroutines that handle the remaining pieces the same way. This process is the heart of recursion: an operation is performed, outsourced, performed again, and so it repeats recursively. The amazing thing about this new way of programming is that Bellmania can perform this task in about 15 minutes, whereas a programmer would take hours and is more error prone because the code is hand written, while Bellmania guarantees accuracy. This is literally amazing: what is usually hand-optimized code, error prone because of its dense complexity, can be produced in a matter of minutes with no errors whatsoever. This could make work in many fields far easier; instead of having someone optimize the code by hand, you can just use Bellmania and be carefree about possible problems. This applies to biology, scientific computation, network traffic, and even cybersecurity.
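The recursive dynamic-programming style that Bellmania targets can be illustrated with an ordinary hand-written example. This is a generic memoized recursion (edit distance between two strings), chosen for familiarity; it is not code generated by Bellmania.

```python
from functools import lru_cache

# Classic dynamic programming via recursion with memoization: each
# subproblem (i, j) is solved once, cached, and reused, so the overall
# cost is proportional to the number of distinct subproblems.

def edit_distance(a, b):
    @lru_cache(maxsize=None)
    def solve(i, j):
        if i == len(a):          # a exhausted: insert the rest of b
            return len(b) - j
        if j == len(b):          # b exhausted: delete the rest of a
            return len(a) - i
        if a[i] == b[j]:         # characters match: no edit needed
            return solve(i + 1, j + 1)
        return 1 + min(solve(i + 1, j),      # delete from a
                       solve(i, j + 1),      # insert into a
                       solve(i + 1, j + 1))  # substitute
    return solve(0, 0)
```

Parallelizing a recursion like this by hand, so that independent subproblems run on different cores, is exactly the error-prone work the article says Bellmania automates.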

Hopefully this newly created idea can become a public product that can be sold as a program, because that would allow regular users to optimize their code for freelance work and much else. It has the potential to be revolutionary in the world of computer science.

Reference Links:
http://news.mit.edu/2016/faster-programs-easier-programming-multiprocessor-chips-1107
http://news.mit.edu/sites/mit.edu.newsoffice/files/styles/news_article_image_top_slideshow/public/images/2016/MIT-Laymans-Parellel_0.jpg?itok=qsLqjTEi
http://m.eet.com/media/1196773/f4.8.jpg

Monday, October 31, 2016

Making computers explain themselves

During the Association for Computational Linguistics' Conference on Empirical Methods in Natural Language Processing, researchers working in MIT's Computer Science and Artificial Intelligence Laboratory, also known as CSAIL, presented a new technique that trains neural networks to provide not only specific predictions and classifications but also a coherent, adequate rationale for why they made each decision. Neural networks are so named because they loosely approximate the structure of the brain. In basic form they are composed of a large number of nodes that act somewhat like neurons: each is capable of only simple computations, but they are connected to each other in complex, dense networks. In the process called "deep learning," training data is fed to a network's input nodes, which transform it and pass it on to other nodes, layer after layer, for as long as data keeps flowing through the network. To make a neural net's decision-making process interpretable, the CSAIL researchers divided the net into two separate modules with two different jobs. The first module extracts specific segments of text from the training data, and the segments are scored according to their length and their coherence. The second module performs the prediction and classification task.
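The two-module split can be sketched with a toy pipeline. The hand-written word lists and scoring rules below are hypothetical stand-ins for the learned networks; the point is only the division of labor: one module picks the rationale, the other classifies using nothing but that rationale.

```python
# Toy sketch of rationale extraction + classification in two modules.
# The word lists and rules are made-up stand-ins for trained networks.

POSITIVE = {"great", "excellent", "smooth"}
NEGATIVE = {"bitter", "flat", "harsh"}

def extract_rationale(review, max_words=3):
    """Module 1: keep only short, opinion-bearing segments of the text."""
    words = [w.strip(".,") for w in review.lower().split()]
    return [w for w in words if w in POSITIVE | NEGATIVE][:max_words]

def classify(rationale):
    """Module 2: predict a label using ONLY the extracted rationale."""
    score = sum(1 if w in POSITIVE else -1 for w in rationale)
    return "positive" if score >= 0 else "negative"

review = "The finish was smooth and excellent, though slightly bitter."
rationale = extract_rationale(review)
label = classify(rationale)
```

Because the classifier never sees anything outside the rationale, the extracted segments genuinely explain the prediction, which is the property the CSAIL design is after.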

As such, the data set gives an accurate test of the CSAIL researchers' program and system. If the first module has successfully extracted a certain amount of phrases, and the second module has connected them with their specific and correct ratings, then the system has presented the same basis for judgment that a human annotator did. In some unpublished work, this new technology is being applied to pathology reports on breast biopsies, where the system learned to extract text explaining the bases for a pathologist's diagnoses. They are going as far as using it to analyze mammograms of patients, where the first module extracts certain parts of images instead of segments of text. We can see that having a model that can make predictions and tell you why it is making those decisions is an important direction we need to head in.
Reference links:
http://news.mit.edu/2016/making-computers-explain-themselves-machine-learning-1028
http://cs231n.github.io/assets/nn1/neural_net2.jpeg
https://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.graphic1.jpg

Monday, October 24, 2016

Automated screening for childhood communication disorders

On September 22, 2016, researchers at the Computer Science and Artificial Intelligence Laboratory at MIT and Massachusetts General Hospital's Institute of Health Professions created a computer system that could help screen young children for speech and language disorders and potentially provide information for an actual diagnosis. The system analyzes audio recordings of children's performances on a standardized storytelling test: the kids are presented with a series of images and an accompanying narrative, and are then asked to retell the story in their own words.

The benefit of this is that it could probably be done on a tablet or iPhone using very simple tools. That would mean the tests could be made widely available at low cost, which would be a great addition to society. The researchers tested the system's performance using a standard measure called area under the curve, which, as the name suggests, is computed by integrating under a curve. This measure describes the trade-off between correctly identifying members of the population who have a particular disorder and limiting false positives. In the medical literature, a diagnostic test with an area under the curve of 0.7 is generally considered accurate enough to be useful. Across three clinical tests, the researchers' system ranged from 0.74 to 0.86, which is very good and speaks highly of the system they created. To build the new system, Jen Gong, a graduate student in electrical engineering and computer science, and Guttag used machine learning, in which a computer searches a large set of training data for patterns that correspond to particular classifications; in this case, they were looking for patterns corresponding to speech and language disorders. This will in turn help clinicians make more precise diagnoses, because they have a system of reassurance added on top of their expertise in their respective fields. The researchers also point out that, unlike speech impediments resulting from anatomical causes such as a cleft palate, speech disorders and language disorders both have neurological bases.
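The area-under-the-curve measure has a handy interpretation: it equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A small illustration with made-up scores (these are not the study's data):

```python
# AUC as a rank statistic: the chance that a randomly chosen child with
# the disorder gets a higher risk score than one without. The scores
# below are invented for illustration; they are not the study's data.

def auc(pos_scores, neg_scores):
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

with_disorder = [0.9, 0.8, 0.6]     # hypothetical system scores
without_disorder = [0.7, 0.4, 0.2]  # hypothetical system scores
system_auc = auc(with_disorder, without_disorder)
```

Here 8 of the 9 positive/negative pairs are ranked correctly, giving an AUC of about 0.89, in the same range the article reports for the real system.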
Overall this is an amazing creation that could possibly revolutionize the world of clinical medicine. We can further improve the accuracy of a doctor's diagnosis with the help of technology. Although this creation is not in its final stages yet, the prime test has been passed: it works. Now comes the next stage, where they have to finalize and release the product.

Reference links:
http://news.mit.edu/2016/automated-screening-childhood-communication-disorders-0922
http://tryengineering.org/sites/default/files/styles/medium/public/majors/169938739-technician-checks-the-voltage.jpg?itok=VASIttqH
http://image.slidesharecdn.com/pptforspeechandlanguage-140821022007-phpapp01/95/speech-and-language-disorders-2-638.jpg?cb=1408587690
http://news.mit.edu/sites/mit.edu.newsoffice/files/styles/news_article_image_top_slideshow/public/images/2016/MIT-Speech-Impairments_0.jpg?itok=ET3cTzEm


Friday, October 21, 2016

Computer graphics (computer science)

Computer graphics is a sub-field of computer science that studies methods for digitally synthesizing and manipulating visual content. Some people think graphics only covers three-dimensional visuals, but computer graphics also encompasses two-dimensional graphics and image processing. It also focuses more on the processing side of graphics than on the aesthetic aspect most of us know. Related studies include applied mathematics, computational geometry, computational topology, computer vision, image processing, information visualization, and scientific visualization. Applications of computer graphics include digital art, special effects, video games, and visual effects.

Geometry is a necessary sub-area of computer graphics. Because most figures only appear at their exterior, boundary representations are commonly used, like polygonal meshes and subdivision surfaces. Even fluids and surface texture must be taken into consideration when representing objects. The animation part focuses on how objects move or deform over time.
Finally, rendering is the most important part, because that is where simulation takes place; it includes both light transport and non-photorealistic rendering. Light transport describes how illumination in a scene gets from one place to another, scattering describes how light interacts with a surface at a given point, and shading describes how material properties vary across each type of surface. Overall, computer graphics has been revolutionary in the gaming industry, which is worth billions of dollars. It has brought newer processing and advanced graphics that would not have been possible in the past.
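The shading idea can be shown with the simplest possible model. This is a minimal sketch of diffuse (Lambertian) shading, one standard textbook model, not a full renderer: brightness falls off with the angle between the surface normal and the light direction.

```python
import math

# Diffuse (Lambertian) shading: intensity = albedo * max(n . l, 0),
# where n is the unit surface normal and l the unit light direction.
# The max(..., 0) clamp keeps surfaces facing away from the light dark.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return [a / length for a in v]

def lambert(normal, light_dir, albedo=1.0):
    """Diffuse intensity at a surface point for one light."""
    n = normalize(normal)
    l = normalize(light_dir)
    return albedo * max(dot(n, l), 0.0)
```

A point lit head-on gets full intensity, a grazing light gives a dimmer value, and a back-facing surface gets zero, which is exactly the "how material responds to light at a point" idea described above.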


Reference links:
https://en.wikipedia.org/wiki/Computer_graphics_(computer_science)
https://upload.wikimedia.org/wikipedia/commons/8/8e/Blender_2.45_screenshot.jpg
http://www.nyit.edu/files/degrees/CAS_Degree_ComputerGraphicsBFA_HeroSmall.jpg
http://saksagan.ceng.metu.edu.tr/courses/ceng477//images/face.png

Friday, October 14, 2016

Algorithms

An algorithm is a self-contained, step-by-step set of operations to be executed. Algorithms can perform mathematical calculations, data processing, and/or automated reasoning tasks. An example is Euclid's algorithm, which was created to determine the greatest common divisor of two integers. An example of this algorithm is below.
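Euclid's algorithm is short enough to write out in full: repeatedly replace the pair of numbers with the smaller number and the remainder of dividing the larger by the smaller, until the remainder is zero.

```python
# Euclid's algorithm for the greatest common divisor: the gcd of a and b
# is unchanged when (a, b) is replaced by (b, a mod b), and when b hits
# zero, a IS the gcd.

def gcd(a, b):
    while b != 0:
        a, b = b, a % b
    return a
```

For example, gcd(1071, 462) runs through the pairs (1071, 462), (462, 147), (147, 21), (21, 0) and answers 21.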
No human is capable of carrying out all of these computations by hand at scale; algorithms take care of this and compute within milliseconds. The concept of an algorithm can also be used to define the notion of decidability. This notion is essential in explaining how formal systems are built up from a very small set of axioms, which are basically statements taken to be true, along with their rules of inference. In pure logic, the time an algorithm takes to complete cannot be measured, because it is not tied to any customary physical dimension. Algorithms are essential to the way computers process data. Many computer programs contain algorithms that can calculate an employee's paycheck or perform a simple task like printing a student's report card. Algorithms can be expressed in many kinds of notation, including natural languages, pseudocode, flowcharts, DRAKON charts, programming languages, and control tables.
We can see that algorithms are the roots of computing. They are used for everything, even in new technologies and in other fields of science like biology. They make life easier for most of us, but we don't really notice the work happening behind the screen. With algorithms and programming skills, anything is possible in our world; there are literally no limits. Below is another example animation of an algorithm that sorts data.
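The sorting animation linked in the references shows quicksort; a minimal (non-in-place) sketch of that algorithm looks like this:

```python
# Quicksort: pick a pivot, partition the items into those smaller than,
# equal to, and larger than it, then recursively sort the two sides.
# This simple version builds new lists; real implementations usually
# partition in place for efficiency.

def quicksort(items):
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)
```

Each recursive call works on a strictly smaller list, which is why the animation shows the array being split into ever-finer sorted pieces.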

Reference links:
https://upload.wikimedia.org/wikipedia/commons/6/6a/Sorting_quicksort_anim.gif
https://en.wikipedia.org/wiki/Algorithm
https://upload.wikimedia.org/wikipedia/commons/thumb/d/db/Euclid_flowchart.svg/330px-Euclid_flowchart.svg.png
https://upload.wikimedia.org/wikipedia/commons/4/44/Euclid%27s_algorithm_structured_blocks_1.png
https://en.wikipedia.org/wiki/Axiom