Visual recognition may not be very exciting to the average person, but the technology behind it, known as deep learning, is revolutionizing the world, and its possible applications could be far-reaching. If you look at how Amazon can accurately suggest products you may be shopping for, how self-driving cars know the difference between a garbage can and a squirrel, and how Facebook and iPhones can accurately recognize your face, you can get an idea of how this technology is already being used.
Imagine you're checking out in a supermarket, but rather than scanning a bar code the register uses different data — words and images — to determine everything from the price and size of the item to the nutritional content and more, even if the machine has never seen the item before. That's deep learning technology.
When a human brain learns new information it forms connections between neurons called synapses, and a single neuron can be connected to thousands of others. Neurons pass electrical signals across trillions of these synaptic connections, which together form a connectome, essentially a map of the brain's neural wiring. Deep learning is, in effect, a machine's approximation of a connectome, with information constantly being added.
Deep learning technology learns by rearranging connections after each experience, just as a human brain does. Computers are already very good at detecting patterns, but deep learning makes it possible for machines to learn from data beyond their pre-programmed code. Consider a computer chess game: you can program a machine with hand-written algorithms, and that would be enough to beat most humans, though without those algorithms it would be almost impossible to account for every possibility during a game. Now suppose you have an algorithm that can not only sample millions of chess games at once, but can begin making its own algorithms going forward based on what the machine has "learned" on its own.
Since deep learning isn't bound by traditional written code, there is no rigid set of parameters or limitations. In other words, deep learning allows for solving problems normally outside the capacity of the human thought process. Layers of data are the building blocks of deep learning. Each layer receives an input, transforms it and passes the values as output to the next layer. Layers filter for common patterns that distinguish between features, such as the difference between a nose and a mouth, making it easy to find and categorize them. As more information is added, the complexity grows, revealing details and abstract relationships within the data and allowing the machine to learn how thousands of features interact with each other on a non-linear level.
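For readers who want to see the idea concretely, the layer-by-layer flow described above can be sketched in a few lines of code. This is a toy illustration only: the weight and bias numbers below are made-up values chosen for the example, whereas a real deep learning system would learn them automatically from millions of training examples.

```python
def layer(inputs, weights, bias):
    """One layer: a weighted sum of its inputs, then a simple nonlinearity
    (ReLU), which lets positive signals through and blocks negative ones."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return max(0.0, total)

def tiny_network(x):
    # Layer 1: two units each look at the raw input values.
    h1 = layer(x, [0.5, -0.2], 0.1)
    h2 = layer(x, [0.3, 0.8], -0.1)
    # Layer 2: one unit combines the previous layer's outputs,
    # passing its result onward, just as the article describes.
    return layer([h1, h2], [1.0, 1.0], 0.0)

print(tiny_network([1.0, 2.0]))  # prints 2.0 with these toy weights
```

Each call to `layer` receives an input, transforms it and hands the result to the next layer; stacking many such layers, with weights tuned by training rather than by hand, is what gives deep learning its power.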
Using visual recognition as an example application, each of these data layers deals with different aspects of an image. One layer may process the raw image, including colors; another combines more abstract features like edges, shadows and shapes; and another puts the information together, drawing on billions of photo reference points to produce data that can distinguish two very similar images from one another. This is how Google, for example, can tell the difference between similar-looking mushrooms, a crow and a raven, a crocodile and an alligator, or a Cardigan Corgi and a Pembroke Corgi across millions of videos and images.
Deep learning technology is already used in a myriad of industries, but nowhere has it had a larger impact than in healthcare. In the past, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound and X-ray imaging have relied on highly paid experts such as pathologists, radiologists and physicians to interpret medical images. These professionals made inherently subjective judgments in tasks such as image perception, detection, localization and characterization. Deep learning technology has taken away most of that subjectivity (without getting rid of those highly paid positions, yet).
This technology has been able to diagnose chest X-ray images for the presence or absence of tuberculosis with an impressive 96% accuracy. Deep learning researchers have also detected the spread of cancer into lymph node tissue from images with better accuracy than industry experts. Additionally, deep learning can be used to detect diabetes from images of retinas.
As for societal benefits, deep learning technology has also enabled discovery from huge data sets, the repurposing of known drugs for new diseases and even deeper analysis of satellite images, just to name a few, but it's not all good news.
Deep learning machines are, in effect, speaking a language that even their coders can't understand. Because there is no traditional programming involved, coders are essentially training the computer to learn on its own. Once the AI is off and running, it's not really clear to anyone how the machine is actually completing its tasks. It's like a computer version of Frankenstein's monster, subject to very little control once it's brought to life; just ask the companies developing the technology.
Our technological landscape continues to change, and deep learning is at the front lines. For all the excitement and possibilities deep learning may afford, we still must nurture its development carefully.
Thomas Russell is a high school information technology teacher and retired Army Signal Corps soldier. He is the founder of SEMtech (Student Engagement and Mentoring in Technology) and an Advisory Board Member of Educating Children of Color. His hobbies include writing, photography and hiking. Contact Thomas via Russell’s Room on Facebook, or email at email@example.com, and his photography at thomasholtrussell.zenfolio.com.