Monday, November 21, 2016

Artificial-intelligence system surfs web to improve its performance

Artificial intelligence has always sparked debate about how well it performs and how it could perform better. One of the newer ideas takes advantage of just how vast the internet is and how helpful it can be. The only problem is that the information available is mostly plain text, so extracting it for analysis that helps humans see correlations between different things is difficult and time consuming. We all know that in computer science and machine learning, computation time matters, and each computation must be efficient in both accuracy and speed. Information extraction has long been a challenge in computer science and artificial intelligence simply because no methods existed to make it work much better. Only recently have students and researchers at the Massachusetts Institute of Technology brought the concept of machine learning to the next level. We have to realize that most machine learning systems work by generalizing from training examples provided by human annotators. They look for patterns that match keywords already in memory, meaning they are truly limited to the information covered by their initial training. The new idea extends this capability by having the machine learning system assign what they call a "confidence score" to each classification the system makes. If this score is too low, the system automatically searches the web for texts that might contain the information it is trying to extract.

After this initial process, the system tries to extract all relevant data from the new texts its search returned and combines it with the information extracted in the first attempt. The cool thing that amazes me is that all of this computation is the result of machine learning: the system learns how to search, judge the accuracy of its search, decide whether the information is relevant to the task at hand, and then fuse the two sources to complete its extraction. The researchers tested the system on data-heavy topics like mass school shootings and food contamination. It was tasked with extracting things like the names of shooters and schools, and in the case of food contamination, the food type and type of contamination. The system was trained to respond to keywords and search terms. In each trial the system pulled in about 10 new documents from the web to supplement the 300 documents it was initially trained on. After confirming that the system actually works, the researchers compared its performance to that of conventional machine learning techniques. On each test, the new system built by the MIT students and researchers outperformed its predecessors by almost 10 percent.
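To get a feel for the confidence-score idea, here is a toy Python sketch. The `extract` and `search_web` functions are made-up stand-ins, not the MIT system's actual components (which use trained neural extractors), but the control flow mirrors the description above: extract, check confidence, and search the web only when confidence is too low.

```python
# Toy sketch of confidence-gated extraction: if the extractor is not
# confident, pull in more documents and merge the results.
# `extract` and `search_web` are hypothetical stand-ins.

CONFIDENCE_THRESHOLD = 0.8

def extract(document):
    """Pretend extractor: returns (value, confidence) for one field."""
    # A real system would run a trained model here.
    words = document.split()
    value = words[0] if words else None
    confidence = min(1.0, len(words) / 10)  # longer docs -> more confident
    return value, confidence

def search_web(query):
    """Pretend search engine: returns extra documents for the query."""
    return ["ExtraEvidence found in a longer supporting article online"]

def extract_with_fallback(document, query):
    value, conf = extract(document)
    if conf >= CONFIDENCE_THRESHOLD:
        return value
    # Low confidence: search for more text and re-extract.
    candidates = [(value, conf)]
    for doc in search_web(query):
        candidates.append(extract(doc))
    # Keep the answer the system is most confident about.
    return max(candidates, key=lambda vc: vc[1])[0]

print(extract_with_fallback("ShortDoc", "school shooting shooter name"))
```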
I find this amazing, and it can only get better from here. The researchers are even thinking about adding millions and millions of articles to the system so that it can perform on an even wider scope. Machine learning blows my mind every time I hear about it. We are on the brink of letting machines do things that we can't do, and faster. The best thing is that we can train them on exactly what we want so that they do only the task we ask for. I can only imagine this getting even better in the near future, and hopefully it will be for the better of science.

Reference Links:
http://image.slidesharecdn.com/machinelearningfordummies-140401055817-phpapp01/95/machine-learning-for-dummies-4-638.jpg?cb=1396332069
http://news.mit.edu/2016/artificial-intelligence-system-surfs-web-improve-performance-1110
http://news.mit.edu/sites/mit.edu.newsoffice/files/styles/news_article_image_top_slideshow/public/images/2016/MIT-Webaid-Learning_0.jpg?itok=Sli1Qec0

Wednesday, November 16, 2016

Enabling Wireless Virtual Reality Through Programmed Phased Arrays

Virtual reality is the technology that renders graphics at a resolution high enough to mimic a real-world environment. The only problem is that VR is somewhat limited because it requires a direct wired connection to hardware powerful enough to process the high-resolution graphics the user is watching. Believe it or not, wires suck: you can trip on them, and we are too far into the digital age to still require something that wireless technology has made avoidable. To no surprise of mine, MIT researchers have developed a way to hopefully make wires no longer required. They are calling their functional prototype "MoVR," which will basically allow any person to use any VR headset over a wireless connection; the name even sounds like "Move" and "VR," implying that you can wander around with this innovative technology. Through several tests, researchers at MIT's Computer Science and Artificial Intelligence Laboratory found that this technology can enable communication at several gigabits per second, which comes out to billions of bits per second. These tests used high-frequency radio signals called "millimeter waves," which many experts expect to one day power very fast and powerful smartphones running on 5G.

One headset, the HTC Vive, was used to try out this technology, though the researchers have stated that "MoVR" can be used with any headset. Just imagine playing multiplayer video games with friends completely wirelessly, or using any other application compatible with a VR device. Now you may ask how this tech actually works. The antenna is directional and is what they call a "phased array": an arrangement of many small antennas, each fed with signals whose relative phases steer the beam toward the desired direction and away from undesired ones. Below is an image of the "MoVR" antenna, which is about the size of a credit card and can fit almost anywhere.
There are several complications with this newly built tech. It is very hard to keep a strong signal, since the user has to be directly in line with the antenna; even the slightest blockage, like putting your hand in front of the antenna, can cause signal loss. MoVR has the ability to redirect and correct each signal toward the user, but this solution is not fully refined and needs more work. So while this is hopefully close to being something that can be bought, the technology is a bit ahead of itself. But the idea is awesome, and it requires extensive programming with special algorithms so that the angle can be corrected automatically. It is a mixture of programming and hardware design that has allowed this innovation to become a feasible reality. I am a fan of VR gaming and hope to be able to buy such a piece of tech as soon as it is available. It is amazing to see where tech is headed and where it is going to be in the near future.
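To see how a phased array steers a signal, here is a small Python sketch. The element count, spacing, and 60 GHz frequency are illustrative assumptions, not MoVR's real parameters; the formula is the standard phase offset for a uniform linear array.

```python
import math

# Sketch of how a phased array steers a beam: each antenna element is
# driven with a phase offset so the emitted waves add up constructively
# in the chosen direction. Numbers here are illustrative, not MoVR's.

def element_phases(n_elements, spacing_m, wavelength_m, steer_deg):
    """Phase offset (radians) for each element to steer the beam."""
    delta = (2 * math.pi * spacing_m
             * math.sin(math.radians(steer_deg)) / wavelength_m)
    return [i * delta for i in range(n_elements)]

# Millimeter waves: 60 GHz -> wavelength of about 5 mm.
wavelength = 3e8 / 60e9          # ~0.005 m
spacing = wavelength / 2         # half-wavelength element spacing

# Steering straight ahead (0 degrees) needs no phase offsets at all.
assert all(p == 0 for p in element_phases(8, spacing, wavelength, 0.0))

phases = element_phases(8, spacing, wavelength, steer_deg=30.0)
print([round(p, 3) for p in phases])
```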


Reference Links:
https://en.wikipedia.org/wiki/Phased_array
http://news.mit.edu/2016/enabling-wireless-virtual-reality-1114
https://en.wikipedia.org/wiki/Virtual_reality
https://i.ytimg.com/vi/6GA5oIy9ONo/maxresdefault.jpg

Wednesday, November 9, 2016

Faster programs, easier programming: New system lets nonexperts optimize programs that run on multiprocessor chips.

It is no surprise that MIT keeps coming up with new and innovative methods for creating faster, easier-running, and more efficient machines and programs. As one of the best schools for engineering, with leading engineers and a mass of technology at their disposal, they can work magic. This article is about a new system that gives a nonexpert the ability to modify and optimize programs that run on multiprocessor chips. Dynamic programming gives us accurate solutions to problems in several fields, such as economics and genomic analysis (the analysis of a genome), among many others. The problem is that adapting this same idea to computers with multiple cores requires a level of programming expertise that people in those occupations just don't have. That is why researchers from MIT and Stony Brook University came up with a system that lets a user tell their program what to do in very simple terms. The system then reads these "terms" and creates a version of the program that actually runs on multicore chips. It also guarantees the same results you would get from a single-core processor; the only difference is that it is much faster, because in this instance you have multiple processors instead of one.

Now when you think about this idea, it can be applied almost anywhere. The one I will focus on is "rapid search." This method surprisingly utilizes something we recently learned in our CS150 class: recursion. The program, named "Bellmania," divides a specific matrix into smaller pieces, performs some operation on a piece, and then outsources the rest to other subroutines that complete their own tasks. This process is the heart of recursion: an operation is performed, the rest is outsourced, another operation is performed, and so it repeats recursively. The amazing thing about this new way of programming is that Bellmania can perform in about 15 minutes a task that would take a programmer hours, and hand-coded work is more error prone, whereas Bellmania guarantees accuracy. It allows code that is usually hand-optimized, and error prone because of its dense complexity, to be produced in a matter of minutes with no errors whatsoever. Instead of having someone hand-optimize the code, you can just use Bellmania and be carefree about possible problems, which could make work in many fields dramatically better. This applies to biology, scientific computation, network traffic analysis, and even cybersecurity.
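Bellmania's actual transformations are much more involved, but the underlying pattern, dynamic programming by recursion with reuse of already-computed answers, can be shown in a few lines of Python. The coin-change problem below is my own stand-in example, not anything from the MIT paper:

```python
from functools import lru_cache

# Classic dynamic programming by recursion: each call breaks its input
# into smaller subproblems, and lru_cache reuses answers that were
# already computed instead of recomputing them.

@lru_cache(maxsize=None)
def min_coins(amount, coins=(1, 5, 10, 25)):
    """Fewest coins summing to `amount`."""
    if amount == 0:
        return 0
    best = float("inf")
    for c in coins:
        if c <= amount:
            best = min(best, 1 + min_coins(amount - c, coins))
    return best

print(min_coins(63))  # 25 + 25 + 10 + 1 + 1 + 1 -> 6 coins
```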

Hopefully this newly created system can become something publicly available as a product, because it would let regular users optimize their code for freelance work and in many other ways. It has the potential to be revolutionary in the world of computer science.

Reference Links:
http://news.mit.edu/2016/faster-programs-easier-programming-multiprocessor-chips-1107
http://news.mit.edu/sites/mit.edu.newsoffice/files/styles/news_article_image_top_slideshow/public/images/2016/MIT-Laymans-Parellel_0.jpg?itok=qsLqjTEi
http://m.eet.com/media/1196773/f4.8.jpg

Monday, October 31, 2016

Making computers explain themselves

During the Association for Computational Linguistics' Conference on Empirical Methods in Natural Language Processing, researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) presented a new method that trains neural networks to provide not only specific predictions and classifications but also a coherent, adequate rationale for why they made each decision. Neural networks are named this because they loosely approximate the structure of the brain. In basic form they are composed of a large number of nodes that act somewhat like neurons: each is capable of only simple computations, but they are connected to each other in complex, dense networks. In the process called "deep learning," training data is fed to a network's input nodes, which modify it and pass it on to other nodes, and this continues as long as data moves through the network. To enable interpretation of a neural net's decision-making process, the CSAIL researchers divided the net into two separate modules with two different jobs. The first module extracts specific segments of text from the training data, and the segments are scored according to their length and coherence. The second module performs the prediction and classification task.

As such, the data set provides an accurate test of the CSAIL researchers' system. If the first module has extracted a certain set of phrases, and the second module has connected them with the correct ratings, then the system has settled on the same basis for judgment that a human annotator did. In some unpublished work, the technology is being applied to pathology reports on breast biopsies, where the system learned to extract text explaining the bases for the pathologists' diagnoses. They are even using it to analyze mammograms, where the first module extracts sections of images instead of segments of text. We can see that a model that can make predictions and also tell you why it made them is an important direction for the field to head in.
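Here is a toy Python version of the two-module idea. The real CSAIL system trains both modules as neural networks; my hand-written word lists and scoring are just stand-ins to show the division of labor: a generator picks the rationale, and an encoder predicts from only that rationale.

```python
# Toy version of the two-module design: a "generator" picks short text
# segments as the rationale, and an "encoder" makes the prediction from
# only those segments. Both are hand-written stand-ins here.

POSITIVE = {"great", "excellent", "smooth"}
NEGATIVE = {"bad", "harsh", "bitter"}

def generator(review):
    """Module 1: keep only short, opinion-bearing segments."""
    words = review.lower().replace(",", " ").split()
    return [w for w in words if w in POSITIVE or w in NEGATIVE]

def encoder(rationale):
    """Module 2: classify using nothing but the extracted rationale."""
    score = sum(1 if w in POSITIVE else -1 for w in rationale)
    return "positive" if score >= 0 else "negative"

review = "A great, smooth finish but a slightly bitter aftertaste"
rationale = generator(review)
print(rationale, "->", encoder(rationale))
```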
Reference links:
http://news.mit.edu/2016/making-computers-explain-themselves-machine-learning-1028
http://cs231n.github.io/assets/nn1/neural_net2.jpeg
https://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/cs11/report.graphic1.jpg

Monday, October 24, 2016

Automated screening for childhood communication disorders

On September 22, 2016, researchers at MIT's Computer Science and Artificial Intelligence Laboratory and the Massachusetts General Hospital Institute of Health Professions announced a computer system that could help screen young children for speech and language disorders and potentially provide information toward an actual diagnosis. The system analyzes audio recordings of children's performances on a standardized storytelling test: the kids are shown a series of images accompanied by a narrative and are then asked to retell the story in their own words.

The benefit is that this could probably be done on a tablet or iPhone using very simple tools, which would make testing widely available at low cost, a great addition to society. The researchers measured the system's performance using a standard metric called the area under the curve, which describes the trade-off between correctly identifying members of the population who have a particular disorder and limiting false positives. In the medical literature, a diagnostic test with an area under the curve of 0.7 is generally considered accurate enough to be of good use. Across three clinical tests, the researchers' system ranged from 0.74 to 0.86, which is very good and speaks highly of the system that was created. To build the new system, Guttag and Jen Gong, a graduate student in electrical engineering and computer science, used machine learning: a computer searches a large set of training data for patterns that correspond to specific classifications, in this case patterns corresponding to speech and language disorders. This will in turn help clinicians make more precise diagnoses, because they have a layer of reassurance added on top of their expertise in their respective field. The researchers also draw a distinction: speech impediments can result from anatomical causes, such as a cleft palate, whereas speech disorders and language disorders both have neurological bases.
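The area under the curve they report is the area under the ROC curve, which equals the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A tiny Python implementation (the scores and labels below are made up for illustration):

```python
# Compute AUC by the rank-comparison definition: the fraction of
# (positive, negative) pairs where the positive case scores higher,
# counting ties as half.

def auc(labels, scores):
    pairs = list(zip(scores, labels))
    pos = [s for s, l in pairs if l == 1]
    neg = [s for s, l in pairs if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(auc(labels, scores))  # 8 of 9 pairs ordered correctly, about 0.89
```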
Overall this is an amazing creation and could possibly revolutionize the world of clinical medicine. We can further improve the accuracy of a doctor's diagnosis with the help of technology. Although this creation is not in its final stages yet, the prime test has been passed: it works. Now comes the next stage, where they have to finalize and release the product.

Reference links:
http://news.mit.edu/2016/automated-screening-childhood-communication-disorders-0922
http://tryengineering.org/sites/default/files/styles/medium/public/majors/169938739-technician-checks-the-voltage.jpg?itok=VASIttqH
http://image.slidesharecdn.com/pptforspeechandlanguage-140821022007-phpapp01/95/speech-and-language-disorders-2-638.jpg?cb=1408587690
http://news.mit.edu/sites/mit.edu.newsoffice/files/styles/news_article_image_top_slideshow/public/images/2016/MIT-Speech-Impairments_0.jpg?itok=ET3cTzEm


Friday, October 21, 2016

Computer graphics (computer science)

Computer graphics is a sub-field of computer science that studies methods for digitally synthesizing and manipulating visual content. Some people think graphics studies only three-dimensional visuals, but computer graphics also encompasses two-dimensional graphics and image processing. It also focuses more on the processing side of graphics than on the aesthetic aspect of most graphics that we know of. Fields connected to computer graphics include applied mathematics, computational geometry, computational topology, computer vision, image processing, information visualization, and scientific visualization. Applications of computer graphics include digital art, special effects, video games, and visual effects.

Computer graphics also requires an understanding of geometry. Because most figures are seen only from the outside, boundary representations such as polygonal meshes and subdivision surfaces are commonly used. Even fluids and surface texture have to be taken into consideration when representing objects. The animation part of the field focuses on how objects move or deform over time.
Finally, rendering is where simulation takes place; it includes light transport and non-photorealistic rendering. Light transport describes how illumination gets from one place in a scene to another, scattering is how light interacts with a surface at a given point, and shading is how material properties vary across each different type of surface. Overall, computer graphics has been revolutionary in the gaming industry, which is worth billions of dollars. It has brought newer processing and advanced graphics that would not have been possible in the past.
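As a small concrete example of shading, here is Lambertian (diffuse) reflection in Python, where brightness at a surface point depends on the angle between the surface normal and the direction to the light. It is just one simple shading model among the many that real renderers combine with light transport:

```python
import math

# Lambertian shading: intensity = albedo * max(0, n . l), where n is
# the unit surface normal and l is the unit direction to the light.

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, to_light, albedo=1.0):
    n = normalize(normal)
    l = normalize(to_light)
    cos_theta = sum(a * b for a, b in zip(n, l))
    return albedo * max(0.0, cos_theta)  # surfaces facing away get 0

# Light directly overhead, at a 45-degree angle, and behind the surface:
print(lambert((0, 1, 0), (0, 1, 0)))   # 1.0 (fully lit)
print(lambert((0, 1, 0), (1, 1, 0)))   # ~0.707
print(lambert((0, 1, 0), (0, -1, 0)))  # 0.0 (facing away)
```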


Reference links:
https://en.wikipedia.org/wiki/Computer_graphics_(computer_science)
https://upload.wikimedia.org/wikipedia/commons/8/8e/Blender_2.45_screenshot.jpg\
http://www.nyit.edu/files/degrees/CAS_Degree_ComputerGraphicsBFA_HeroSmall.jpg
http://saksagan.ceng.metu.edu.tr/courses/ceng477//images/face.png

Friday, October 14, 2016

Algorithms

An algorithm is a self-contained, step-by-step set of operations to be performed. Algorithms can carry out mathematical calculations, data processing, and/or automated reasoning tasks. An example is Euclid's algorithm, which was created to determine the greatest common divisor of two integers. An example of this algorithm is below.
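Euclid's algorithm can be written in just a few lines of Python, following the classic remainder version: repeatedly replace the pair (a, b) with (b, a mod b) until the remainder is zero.

```python
# Euclid's algorithm for the greatest common divisor (GCD).

def gcd(a, b):
    while b != 0:
        a, b = b, a % b  # the GCD of (a, b) equals the GCD of (b, a mod b)
    return a

print(gcd(48, 18))    # 6
print(gcd(270, 192))  # 6
```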
No human could carry out these computations by hand for large inputs; an algorithm takes care of it and computes the answer within milliseconds. The concept of an algorithm is also used to define the notion of decidability. This notion is essential in explaining how formal systems are built up from a very small set of axioms, which are statements taken to be true, along with their rules. In logic, the time an algorithm requires to complete cannot be measured, because it is not related to any customary physical dimension. Algorithms are essential to the way computers process data. Many computer programs contain algorithms that can calculate an employee's paycheck or handle a simple task like printing a student's report card. Algorithms can be expressed in many kinds of notation, including natural language, pseudocode, flowcharts, DRAKON charts, programming languages, and control tables.
We can see that algorithms are the roots of computing. They are used for everything, even in new technologies and in other fields of science like biology. They help make life easier for most of us, but we don't really notice the work happening behind the screen. With algorithms and programming skills, almost anything is possible in our world. Below is another example animation, of an algorithm that sorts data.

Reference links:
https://upload.wikimedia.org/wikipedia/commons/6/6a/Sorting_quicksort_anim.gif
https://en.wikipedia.org/wiki/Algorithm
https://upload.wikimedia.org/wikipedia/commons/thumb/d/db/Euclid_flowchart.svg/330px-Euclid_flowchart.svg.png
https://upload.wikimedia.org/wikipedia/commons/4/44/Euclid%27s_algorithm_structured_blocks_1.png
https://en.wikipedia.org/wiki/Axiom

Tuesday, October 4, 2016

Bioinformatics: Computer Science

Bioinformatics is an interdisciplinary field that develops methods and software tools for analyzing and understanding biological data. The field combines many sciences, including computer science, statistics, mathematics, and engineering, in order to interpret that data. The term is both an umbrella for the larger body of biological studies that use computer programming as part of their methodology and a reference to the specific kinds of analysis used repeatedly in the field of genomics.

A common use of bioinformatics is the identification of candidate genes and nucleotide variants. This is used to better understand the basis of disease, unique adaptations, and desirable properties in agricultural species. More generally, bioinformatics tries to understand the organizational principles within nucleic acid and protein sequences. Computers became essential in molecular biology when protein sequences finally became available, after Frederick Sanger determined the sequence of insulin in the early 1950s.
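As a tiny example of the kind of sequence analysis this field automates, here is a Python function computing the GC content of a DNA sequence, a basic statistic used when characterizing genomes (the sequence below is made up for illustration):

```python
# GC content: the fraction of bases in a DNA sequence that are
# guanine (G) or cytosine (C).

def gc_content(sequence):
    seq = sequence.upper()
    gc = sum(1 for base in seq if base in "GC")
    return gc / len(seq)

print(gc_content("ATGCGCGTATTA"))  # 5 of 12 bases are G or C
```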



Computing has come a long way in the field of biology. Computational biology now helps map and analyze DNA and protein sequences, build models of vital organs, and even create and validate pharmaceutical drugs. Even though such advancements have been made, the future still holds many bright ideas and creations to come.

Bioinformatics is such a revolutionary field that, because of the hard work of many professionals, diseases like cancer now have a higher chance of being understood and managed. Thanks to computer science, we now have fields like bioinformatics that can ultimately change the way we study and cure diseases.


References:
http://graduatedegrees.online.njit.edu/mscs-resources/mscs-infographics/bioinformatics-how-computer-science-is-changing-biology/
https://en.wikipedia.org/wiki/Bioinformatics
http://www.novozymes.com/en/-/media/Novozymes/en/about-us/our-business/industrial-biotechnology/basic-technologies/PublishingImages/Bioinformatics.png?la=en&hash=275354E1041BA57F24B7ABC0828D6B1E2A19597F
https://www.stcorp.nl/media/pages/57/bioinformatics.jpg
https://www.acsu.buffalo.edu/~yijunsun/lab/images/publicationKeywords.png


Thursday, September 29, 2016

Data Structures

In the world of computer science, a data structure is a way of organizing data in a computer so that it can be used efficiently. Data structures can implement one or more particular abstract data types (ADTs), which specify the operations that can be performed on a particular structure and the computational complexity of those operations. By comparison, a data structure is a concrete implementation of the specification an ADT provides.

Different kinds of data structures are suited to different kinds of applications; some are highly specialized for specific tasks. Data structures provide a means to manage large amounts of data efficiently, for uses such as databases and internet indexing services, which are basically methods of indexing website and webpage content. Efficient data structures are critical to designing efficient algorithms. Data structures can also be used to organize the storage and retrieval of information kept in main memory and secondary memory.


There are several types of data structures, generally built on top of simpler, more primitive data types.
  1. An array is a number of elements in a specific order, usually all of the same type.
  2. A linked list is a linear collection of data elements of any type, where each element points to the next.
  3. A record is a value that contains other values; it is an aggregate data structure.
  4. A union is a data structure that specifies which of a number of permitted primitive types may be stored in it.
  5. A class is a data structure that contains data fields as well as methods that operate on the contents of those fields. It is usually used in object-oriented programming, where records without methods are called plain old data structures to distinguish them from classes.

Most programming languages feature some sort of library that allows data structures to be reused by different programs. Examples are the C++ Standard Template Library, the Java Collections Framework, and Microsoft's .NET Framework.
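To make one of the structures above concrete, here is a minimal singly linked list in Python (my own toy sketch, not taken from any of the libraries mentioned): each node holds a value and a reference to the next node.

```python
# A minimal singly linked list: a chain of nodes, each pointing to the
# next, with O(1) insertion at the front.

class Node:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

class LinkedList:
    def __init__(self):
        self.head = None

    def prepend(self, value):
        self.head = Node(value, self.head)  # constant-time insert

    def to_list(self):
        out, node = [], self.head
        while node is not None:
            out.append(node.value)
            node = node.next
        return out

lst = LinkedList()
for v in (3, 2, 1):
    lst.prepend(v)
print(lst.to_list())  # [1, 2, 3]
```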


References:
https://en.wikipedia.org/wiki/Web_indexing
https://en.wikipedia.org/wiki/Data_structure
http://venus.ifca.unican.es/Rintro/_images/dataStructuresNew.png
https://www.cs.rochester.edu/u/brown/172/pics/data_structures_01.jpg


Friday, September 23, 2016

The Turing Machine

The Turing machine utilizes a set of rules to manipulate symbols on a strip of tape; to be more precise, it is, at its core, a mathematical model of computation. It is also called an abstract machine, which is a theoretical model of a computer. It was created by Alan Turing, the father of computer science. The machine is simple, but it can simulate the logic of any computer algorithm.

The Turing machine is a general model of the modern-day CPU (central processing unit), which controls all data manipulation done by a computer. It is capable of enumerating any arbitrary subset of the valid strings of an alphabet; these strings form a recursively enumerable set. A Turing machine that can simulate any other Turing machine is known as a universal Turing machine, or UTM. Through these abstract properties, many insights can be gained into computer science and complexity theory.

The Turing machine consists of four parts. The first is a tape divided into cells, one next to the other. The second is a head that can read and write symbols on the tape and move along the tape left and right one cell at a time. The third is a state register, which stores the state of the Turing machine. The fourth is a finite table of instructions that, given the current state and the symbol being read, tells the machine what to do next.
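To make the four parts concrete, here is a tiny Turing machine simulator I sketched in Python. The example machine and its instruction table are my own invention: it flips every bit on the tape and then halts when it reads a blank.

```python
# A tiny Turing machine simulator with the four classic parts: a tape,
# a head, a state register, and a finite table of instructions.

def run(tape, table, state="flip", blank="_"):
    tape = list(tape)
    head = 0
    while state != "halt":
        symbol = tape[head] if 0 <= head < len(tape) else blank
        write, move, state = table[(state, symbol)]
        if head == len(tape):
            tape.append(blank)        # grow the tape to the right
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).strip(blank)

# Instruction table: (state, read symbol) -> (write, move, next state)
flip_bits = {
    ("flip", "0"): ("1", "R", "flip"),
    ("flip", "1"): ("0", "R", "flip"),
    ("flip", "_"): ("_", "R", "halt"),  # blank means end of input
}

print(run("10110", flip_bits))  # 01001
```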

Alan Turing created the equivalent of the modern-day computer with brilliant accuracy. Many machines have tried to copy the Turing machine; most fall within its capabilities, but they use steps and hardware first described by Turing himself, which makes him the first to do it. Turing founded computer science and thereby started one of the most important industries that exist today.
References:
https://en.wikipedia.org/wiki/Turing_machine
https://en.wikipedia.org/wiki/Abstract_machine
http://www.aturingmachine.com/turingFull560.jpg
http://oldblog.computationalcomplexity.org/media/turing-machine.jpg

Friday, September 16, 2016

Robotic Software

Robots have been around for many years and continue to amaze us with the capabilities and operations they can perform. We never really realize how much work it takes to make a stationary object begin to move and perform tasks.

This is where robot software comes in. It is probably one of the most important pieces of creating a fully functional robot. The software uses a set of coded commands or instructions that tells a robot what to do and when to do it. Due to the unlimited possibilities of coding, there are software systems and frameworks that make programming robots even easier. Some robot software even aims at creating devices that are intelligent. Some of the tasks these devices can perform are feedback loops, control, pathfinding, data filtering, and locating.

Software used to control a robot consists of a set of instructions and data objects, known as the program flow. For example, "Go to Jig1" is an instruction for the robot to move to the positional data named Jig1.
Below is an example set of instructions, written in plain English, describing what each step means.
Example code:
    Move to P1 (a general safe position)
    Move to P2 (an approach to P3)
    Move to P3 (a position to pick the object)
    Close gripper
    Move to P4 (an approach to P5)
    Move to P5 (a position to place the object)
    Open gripper
    Move to P1 and finish
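Here is the same pick-and-place routine sketched as executable Python. The Robot class and its methods are hypothetical, invented just for illustration; a real controller would expose a vendor-specific API.

```python
# Hypothetical robot controller: each method just records the action,
# so we can see the full program flow of the pick-and-place routine.

class Robot:
    def __init__(self):
        self.log = []

    def move_to(self, position):
        self.log.append(f"move {position}")

    def close_gripper(self):
        self.log.append("close gripper")

    def open_gripper(self):
        self.log.append("open gripper")

def pick_and_place(robot):
    robot.move_to("P1")   # general safe position
    robot.move_to("P2")   # approach the pick point
    robot.move_to("P3")   # pick position
    robot.close_gripper()
    robot.move_to("P4")   # approach the place point
    robot.move_to("P5")   # place position
    robot.open_gripper()
    robot.move_to("P1")   # return to safe position and finish

bot = Robot()
pick_and_place(bot)
print(len(bot.log), "steps executed")  # 8 steps executed
```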
Most of these instructions have a start and a finish. The instructions coded by the programmer have to be very specific, because a robot is just like a computer: it does not understand implicit commands. Implicit commands can lead to errors, and in the case of factory robots, which are large and move fast, errors can even cause severe injuries. So all strings of code are pre-checked before they are finalized and programmed into the robot.
While this is just a small piece of information on the topic, it gives you a good idea of how robots work and the process it takes to program them correctly. This occupation remains one of the hardest to work in, because most workers have to be well versed in electrical engineering, mechanical engineering, and software engineering to work in the robotics industry.
References:
https://en.wikipedia.org/wiki/Robot_software
http://images.mentalfloss.com/sites/default/files/styles/article_640x430/public/113252131-565x376_6.jpg
http://www.ciros-engineering.com/fileadmin/Templates/CIROS/IMG/articles/Roboterprogrammiersprache_RAPID.jpg

Friday, September 9, 2016

Java vs Javascript: Similarities and Differences

What is the exact difference between Java and JavaScript? Well, they share some similarities, but it depends on what you are using them for and what your preference is when it comes to programming.

First, Java is an object-oriented programming (OOP) language created by James Gosling at Sun Microsystems. JavaScript is a scripting language created by Netscape and was originally called LiveScript. The two languages are distant cousins, and both support object-oriented programming. JavaScript is just less complex and has simpler commands; it is overall easier for the average programmer to understand.

So what does OOP mean? The idea is that a program is the sum of parts that make up the whole. A car is a good example: you have the engine and many other parts, and each part plays a vital role in creating the whole car. You need the engine, the seats, the doors, and so on; without every part, the car will not work. Programs written in Java and JavaScript are like this: they have several parts that together make up the whole program. The important point is that each part belongs to its own class, meaning you can't just mix things arbitrarily. Just as you can't put the steering wheel on the engine of a car, a certain class of parts goes together to make the whole product. So when using these languages, you combine several classes to create the whole program or operation you are trying to accomplish.

Now for how they differ. The main difference is that Java can stand on its own: it can create standalone applications, is very powerful for them, and runs on mobile devices without any need for HTML. JavaScript, by contrast, has traditionally been placed inside an HTML document and run by a web browser. So it really comes down to opinion and preference: Java has slightly harder operations, while JavaScript needs extra steps to do some things Java can do on its own.
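The standalone side is easy to see in a minimal sketch: the Java program below compiles and runs entirely on its own, no web page required, whereas the equivalent JavaScript would sit inside a script tag in an HTML document. The names here are just illustrative:

```java
// A standalone Java application: no HTML page needed, just compile and run.
public class Standalone {
    static String greet(String name) {
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) {
        System.out.println(greet("world")); // prints "Hello, world!"
    }
}
```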

JavaScript's main benefit is that it can be read by the average human, and it is generally more forgiving than Java, which requires every action to be spelled out correctly down to the smallest detail. JavaScript is geared toward making web pages, while Java is used mostly where heavier lifting is required. In conclusion, both languages are great for making a computer do some very cool things: both can build awesome web pages and let the user interact with them. But Java and JavaScript were by no means created equal. Like all programming languages, each was made with certain strengths the other can't match, so it all comes down to preference and which language suits you best.
References:
http://www.htmlgoodies.com/beyond/javascript/article.php/3470971

Friday, September 2, 2016

The Development of Apple's iOS

iOS, originally called iPhone OS, is the operating system on Apple's handheld devices such as the iPhone and iPad. The first version of the operating system was unveiled on January 9, 2007.

iOS was programmed using C, C++, Objective-C, and Swift. C is a general-purpose programming language originally created to implement the Unix system, and the foundation of iOS is itself Unix-like. A Unix-like system behaves like Unix but does not stay within the confines of the official Unix standard or meet the specific certification requirements needed to be classified as a true Unix system.

C++ is also a general-purpose language, designed with a bias toward systems programming and embedded devices, both used to control machines. C++ is known for its performance, efficiency, and flexibility, which helps explain why iOS uses these two languages: they keep it lean, one reason iOS stays smooth and barely lags, as opposed to the Android operating system, which can suffer from lag from time to time. Android utilizes Java, which can be a bit more complicated when it comes to operating systems.


Since iOS uses the specific languages listed earlier, only certain developers can create apps and submit them to the App Store; developers are required to use these languages because iOS is only compatible with them. The operating system also requires apps under development to be compatible with the hardware of Apple devices so that performance is not compromised. Some third-party attempts have been made to utilize Java, but due to Apple's strict restrictions, those apps have not successfully made it to the App Store. Overall, iOS is a simple-looking operating system that is complex on the inside; we do not usually realize how many intricate operations take place just so we can use our Apple devices.

References: 
https://en.wikipedia.org/wiki/IOS
https://en.wikipedia.org/wiki/C%2B%2B
https://en.wikipedia.org/wiki/Unix-like

Thursday, September 1, 2016

Programming In the Gaming Industry


Video game programming, a subset of game development, is the development of gaming software. This profession requires substantial skill and knowledge in simulation, computer graphics, artificial intelligence, physics, audio programming, and input handling.

When creating games, programmers typically draft a prototype and test several versions before agreeing on an idea and beginning to program the game in earnest. The tools required for programming games are where computer science plays a major role in the creation of every game.

First, like any software, game programs usually start as source code. The source code is translated into the actual program, called the executable, and this translation is accomplished by a compiler. While source code can be written in any text editor, many game programmers use an integrated development environment (IDE for short), and the IDE chosen depends on the target platform.
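The source-to-executable pipeline works the same way in any compiled language. Here is a hedged sketch in Java (the file name and message are illustrative; game studios would more often use C++, but the steps are analogous):

```java
// Hello.java — plain-text source code, editable in any text editor or IDE.
// The compiler (javac) translates it into an executable form (Hello.class
// bytecode), which is then run:
//   javac Hello.java   -> source code becomes the executable form
//   java Hello         -> the program actually runs
public class Hello {
    static String greeting() {
        return "Hello, game world!";
    }

    public static void main(String[] args) {
        System.out.println(greeting()); // prints "Hello, game world!"
    }
}
```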

In order for this process to go smoothly, most companies spend thousands of dollars on powerful hardware, even utilizing multiple development systems and multiple monitors to ensure the programmer is able to accomplish several tasks with comfort.


The languages used to program a game, once the layout has been finalized, are chosen based on language familiarity, target platforms, speed requirements, and the language of the game engines, libraries, or APIs being utilized.

Lastly, APIs and libraries are extremely useful when programming a game. In today's world there are APIs and libraries that handle graphics rendering, sound processing, input, and sometimes even artificial-intelligence tasks such as pathfinding. The libraries used depend on the target platform and what the developers need from the game, so a library available for a PlayStation console may not be available for Microsoft Windows, and vice versa.
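To give a taste of what a pathfinding routine inside such a library might look like, here is a small breadth-first-search sketch on a grid. This is purely illustrative: a real game library would typically use A* and heavily optimized data structures, and the grid format and method names here are assumptions, not any particular engine's API.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Queue;

public class GridPath {
    // Breadth-first search: returns the minimum number of steps from
    // (sr,sc) to (tr,tc) on a grid where 0 = open and 1 = wall,
    // or -1 if the target is unreachable.
    static int shortestPath(int[][] grid, int sr, int sc, int tr, int tc) {
        int rows = grid.length, cols = grid[0].length;
        int[][] dist = new int[rows][cols];
        for (int[] row : dist) Arrays.fill(row, -1); // -1 marks "not visited"
        Queue<int[]> queue = new ArrayDeque<>();
        dist[sr][sc] = 0;
        queue.add(new int[]{sr, sc});
        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!queue.isEmpty()) {
            int[] cur = queue.poll();
            if (cur[0] == tr && cur[1] == tc) return dist[tr][tc];
            for (int[] m : moves) {
                int nr = cur[0] + m[0], nc = cur[1] + m[1];
                if (nr >= 0 && nr < rows && nc >= 0 && nc < cols
                        && grid[nr][nc] == 0 && dist[nr][nc] == -1) {
                    dist[nr][nc] = dist[cur[0]][cur[1]] + 1;
                    queue.add(new int[]{nr, nc});
                }
            }
        }
        return -1; // target unreachable
    }

    public static void main(String[] args) {
        int[][] grid = {
            {0, 0, 0},
            {1, 1, 0},
            {0, 0, 0},
        };
        // Around the wall: (0,0) -> (0,1) -> (0,2) -> (1,2) -> (2,2) = 4 steps
        System.out.println(shortestPath(grid, 0, 0, 2, 2)); // prints "4"
    }
}
```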

So overall, programming in the gaming industry can be fun, but it is also very hard and requires time and dedication. Most games take up to three years to fully develop and release, so these individuals are highly intelligent and super patient. Without programming we would not have games, so we can thank computer science for creating such a vast number of ways for people to entertain themselves and others.





References:
https://en.wikipedia.org/wiki/Game_programming
http://yetanothergameprogrammingblog.blogspot.com