Image Classification and the Problem of Overfeeding
Algorithms and artificial intelligence are constantly being developed to simplify the process of reading and interpreting data. The sheer volume of data produced today demands complex artificial intelligence systems known as neural networks.
What Are Neural Networks?
Recommendation systems make active use of neural networks and their ability to learn new things over time. Neural networks were designed to replicate natural cognitive abilities, loosely modeling the way interconnected neurons in the brain process information.
Neural networks are made up of several layers that work together to assess and classify data. The layers pass information to one another – each applying its own set of learned weights – in an effort to produce an output; the output is the final layer's result and the neural network's definitive answer to the data it was asked to assess. The layers are capable of remembering data, and they strive to build patterns and correlations from the data they are fed.
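The layer-by-layer flow can be sketched in a few lines. The sizes, weights, and two-layer shape below are invented purely for illustration; a real network learns its weights from data rather than drawing them at random:

```python
import numpy as np

def relu(x):
    # hidden-layer activation: pass positives through, zero out negatives
    return np.maximum(0.0, x)

def softmax(z):
    # final layer: turn raw scores into probabilities that sum to 1
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(x, layers):
    # each layer transforms the previous layer's output;
    # the last activation is the network's "definitive answer"
    for W, b in layers[:-1]:
        x = relu(W @ x + b)
    W, b = layers[-1]
    return softmax(W @ x + b)

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 4)), np.zeros(8)),   # hidden layer
          (rng.standard_normal((3, 8)), np.zeros(3))]   # output layer
probs = forward(rng.standard_normal(4), layers)
```

The output is one probability per hypothetical class; the largest of them is the network's classification of the input.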
Recommendation systems benefit from these networks because they can analyze complex data patterns and provide useful recommendations that are likely to convert into a return on investment.
Neural Networks and Image Classification
In recent years, neural networks have been created to process data in innovative and complex ways. Image classification calls upon a neural network to spot specific attributes in an image. The network is fed millions of images in order to build a solid foundation of attributes and classifications. As training progresses, the early layers master simple, specific features, while the deeper layers develop a sophisticated understanding of high-level features.
Simplified: a basic first layer would notice rough or smooth edges, the intermediate layers may detect shapes or larger components, and the final layer would tie the attributes together into a logical conclusion. While this process works in theory, the results can vary, and even the most complex algorithms can struggle to interpret data properly. In the end, overfeeding becomes an issue as the algorithm tries to tie together every element it is asked to detect and process.
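The "rough or smooth edges" stage is concrete enough to demonstrate. Early layers of convolutional networks compute filter responses, and a hand-written edge filter behaves the same way; the tiny image and kernel below are invented for illustration:

```python
import numpy as np

def conv2d(img, kernel):
    # slide the kernel over the image, recording its response at each spot
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# toy image: dark left half, bright right half -> one vertical edge
img = np.zeros((8, 8))
img[:, 4:] = 1.0

# Sobel-style vertical-edge kernel
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])

response = conv2d(img, kernel)
# the response is strongest in the columns straddling the edge
```

A learned first-layer filter in a real network plays the same role, except the kernel values are discovered during training instead of written by hand.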
Google’s Take on Image Classification
Google conducted a series of experiments, published under the name "Inceptionism," that highlighted the problems with data overfeeding. In short, inceptionism describes what happens when an image classification system is fed an image and, instead of simply labeling it, imagines something new from the data it was asked to process.
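Google's actual technique runs gradient ascent on the input image, backpropagating through a deep trained network to amplify whatever the network detects. As a hedged toy sketch of the same idea – using a single linear "detector" unit with random weights, so the gradient is analytic, rather than a real network – each step moves the input toward whatever the unit responds to:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy "detector": one linear unit with fixed, unit-norm weights
w = rng.standard_normal(16)
w /= np.linalg.norm(w)

x = 0.1 * rng.standard_normal(16)   # the starting "image"
score_before = float(w @ x)

# gradient ascent on the INPUT: d(w.x)/dx = w, so each step rewrites the
# image to contain more of the pattern the detector expects to find
step = 0.1
for _ in range(50):
    x += step * w

score_after = float(w @ x)   # the detector now responds far more strongly
```

In the real system the "detector" is a deep layer of a trained network, which is why the amplified patterns come out as dogs, eyes, and other learned features rather than a simple weight vector.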
The same problem occurs with recommendation systems: when the system becomes too familiar with its data, it reads too much into that data and produces unrealistic recommendations.
The Dog Knight
Google’s animal detection algorithm was asked to analyze a picture of a knight. The neural network specialized in detecting animals and had very little experience identifying pictures outside of that context. When it processed the picture of the knight, it saw colors and patterns that it recognized from the millions of animals it had previously analyzed. As the layers communicated, they visualized strange pictures of dogs’ heads, noses, and eyes, and created other odd patterns in the cloudy background. The neural network worked in a general sense, but overfeeding caused it to overcomplicate and misinterpret the image.
Abstract Cloud Visualizations
For the next set, an abstract image of clouds was fed into the system. The results were similar to the previous knight image. Instead of classifying the image as a set of clouds, the system overcomplicated the process and rendered various animals like the “admiral dog,” “pig-snail,” “camel-bird,” and “dog fish.”
“The results are intriguing – even a relatively simple neural network can be used to over-interpret an image, just like as children we enjoyed watching clouds and interpreting the random shapes. This network was trained mostly on images of animals, so naturally it tends to interpret shapes as animals. But because the data is stored at such a high abstraction, the results are an interesting remix of these learned features,” wrote Google on their official research blog.
The Imagined Arm
In this example, the neural network associated dumbbells with an arm lifting them. It had never seen a set of dumbbells without an arm attached, and so the classification layers constructed an entire arm to hold the dumbbells, treating the arm as a necessary component even though none existed in the original image.
The Self-Imagined Banana
The complexity of neural networks can even create images out of static noise. Google fed its image classification system a picture made of nothing but random pixels. The neural network’s output was a vague picture of a banana. Why did this happen? As we continue to learn more about these complicated systems, we are also learning new ways to trick them: carefully chosen noise can contain just enough of the right features to push the system to identify an image in a specific way.
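The banana-from-noise result can be mimicked with the same gradient-ascent idea: start from pure noise and push it toward one class of a classifier. The three-class linear model and all of its weights below are random stand-ins for illustration, not Google's network:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((3, 32))   # toy classifier: 3 classes, 32 "pixels"
x = rng.standard_normal(32)        # pure random noise
target = 0                         # pretend class 0 is "banana"

# the gradient of the target score W[target] @ x with respect to x is just
# W[target], so repeatedly adding it sculpts the noise into an input this
# model confidently labels as the target class
for _ in range(200):
    x += 0.05 * W[target]

pred = int(np.argmax(W @ x))       # the classifier now reports the target
```

The input still looks like noise to a human, yet the model is sure of its answer – the essence of tricking a network into "finding" features.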
The Problem With Overfeeding
Neural networks possess enormous potential, but they will continue to struggle unless designers can find a way to address the problem of data overfeeding. The layers in a neural network must process the data and reach logical conclusions based on data patterns and learned attributes. However, a paradox presents itself: as layers become more sophisticated and capable of conceptualizing detailed features, they also fall victim to overthinking those features, as the images above illustrate.
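This paradox is closely related to what the machine-learning literature calls overfitting: a model with enough capacity to capture very detailed features can match its training data perfectly while reading too much into it. A minimal curve-fitting sketch, with synthetic data and polynomial models standing in for networks of different sophistication:

```python
import numpy as np

rng = np.random.default_rng(3)
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.2, 10)  # noisy samples
x_test = np.linspace(0.05, 0.95, 10)
y_test = np.sin(2 * np.pi * x_test)                               # clean truth

def errors(degree):
    # fit a polynomial of the given degree, return (train MSE, test MSE)
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = float(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test_mse = float(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    return train_mse, test_mse

simple_train, simple_test = errors(3)    # modest model
complex_train, complex_test = errors(9)  # enough capacity to memorize the noise
# the degree-9 fit nails the training points while memorizing their noise
```

The more sophisticated model drives its training error to essentially zero, yet its predictions away from the training points suffer – the same overthinking the dog-knight and cloud images display.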
Neural networks are powerful tools that will greatly aid recommendation systems by letting them build recommendations on a foundation of learned patterns. If the problem of overfeeding can be solved, recommendation systems will become increasingly accurate in their outputs, and that will greatly improve the user experience.