Artificial intelligence, machines and software with the ability to think for themselves, can be used for a variety of applications ranging from military technology to everyday services like automated telephone systems. However, none of the systems that currently exist exhibit learning abilities that match human intelligence. Recently, scientists have wondered whether an artificial agent could be given a bit of human-like intelligence by modeling its algorithms on aspects of the primate nervous system.
Using a bio-inspired system architecture, scientists have created a single algorithm that can develop problem-solving skills when presented with challenges that stump some humans. They then put it to work learning a set of classic video games.
The scientists developed a novel agent, which they called the Deep Q-network, that combines reinforcement learning with what's termed a "deep convolutional network," a layered system of artificial neural networks. Deep-Q is able to understand spatial relationships between different objects in an image, such as their distance from one another, in such a sophisticated way that it can actually re-envision the scene from a different viewpoint. This type of system was inspired by early work on the visual cortex.
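To give a sense of what a layered convolutional network looks like in practice, here is a minimal sketch in Python using PyTorch. The layer sizes, the four-frame 84×84 input, and the idea of outputting one value per possible game action are illustrative assumptions for this sketch, not details reported in the article.

```python
import torch
import torch.nn as nn

class ConvQNetwork(nn.Module):
    """A small convolutional network that maps a stack of game frames
    to one estimated value per possible action (illustrative sizes)."""

    def __init__(self, n_actions: int):
        super().__init__()
        self.features = nn.Sequential(
            # Each layer learns filters over the previous layer's output,
            # building up from simple edges to larger spatial patterns.
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
            nn.Linear(512, n_actions),  # one value per available action
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: a batch of 4 stacked 84x84 grayscale game frames
        return self.head(self.features(frames))
```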
The scientists considered tasks in which Deep-Q interacts with its environment through a sequence of observations, actions, and rewards, with the ultimate goal of acting in a way that maximizes its reward. Reinforcement learning sounds like a simple approach to developing artificial intelligence; after all, we have all seen that small children learn from their mistakes. Yet when it comes to designing artificial intelligence, it is much trickier to ensure that all the components necessary for this type of learning are actually included. As a result, artificial reinforcement learning systems are usually quite unstable.
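In code, that observation-action-reward loop is compact. The sketch below assumes a hypothetical `env` object with `reset` and `step` methods and a hypothetical `agent`; neither interface comes from the article, and both stand in for whatever the real system used.

```python
def run_episode(env, agent):
    """One episode of the interaction loop: observe, act, receive a
    reward, and let the agent adjust to earn more reward next time."""
    observation = env.reset()
    total_reward = 0.0
    done = False
    while not done:
        action = agent.choose_action(observation)          # act on what it sees
        next_observation, reward, done = env.step(action)  # environment responds
        agent.learn(observation, action, reward, next_observation, done)
        observation = next_observation
        total_reward += reward
    return total_reward
```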
In creating Deep-Q, the scientists addressed these previous instability issues. One important mechanism they specifically added was "experience replay." This element allows the system to store visual information about experiences and transitions, much like our own memory does. For example, if a small child leaves home to go to a playground, he will still remember what home looks like while he's at the playground. And if he trips over a tree root while running, he will remember that bad outcome and try to avoid tree roots in the future.
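A minimal version of an experience replay buffer takes only a few lines of Python. This sketch stores past transitions and hands back random samples of them for learning; the capacity and sampling details here are assumptions, not figures from the article.

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores past transitions so the agent can learn from random
    replays of old experience, not just the most recent moment."""

    def __init__(self, capacity: int = 100_000):
        self.memory = deque(maxlen=capacity)  # oldest memories fall out

    def store(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size: int):
        # Random sampling breaks up the correlations between consecutive
        # moments of play, which helps keep the learning stable.
        return random.sample(self.memory, batch_size)

    def __len__(self):
        return len(self.memory)
```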
Using these abilities, Deep-Q performs reinforcement learning, using rewards to continuously refine the visual relationships between objects and actions within the convolutional network. Over time, it identifies visual aspects of the environment that promote good outcomes.
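That reward-driven refinement boils down to a simple training target: the value the network assigns to an action should match the reward the action actually produced, plus the best value the network expects to reach afterward. The sketch below shows one such update step, reusing the kind of network and buffer sketched above; the real system also relied on further stabilizing tricks that this stripped-down version omits, so treat it as an illustration of the core idea only.

```python
import torch
import torch.nn.functional as F

GAMMA = 0.99  # how much future reward counts relative to immediate reward

def training_step(network, optimizer, batch):
    """One Q-learning update on a sampled batch of replayed transitions."""
    states, actions, rewards, next_states, dones = batch  # dones: 1.0 if episode ended

    # Value the network currently assigns to the actions actually taken.
    predicted = network(states).gather(1, actions.unsqueeze(1)).squeeze(1)

    # Target: the observed reward plus the best value the network
    # expects from the next state (zero if the episode ended there).
    with torch.no_grad():
        best_next = network(next_states).max(dim=1).values
        target = rewards + GAMMA * best_next * (1 - dones)

    loss = F.mse_loss(predicted, target)  # push predictions toward targets
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```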
This bio-inspired approach is based on evidence that rewards experienced during perceptual learning may influence how images, sequences of events, and their resulting outcomes are processed within the primate visual cortex. Additionally, evidence suggests that in the mammalian brain, the hippocampus may support a physical realization of the processes involved in the "experience replay" algorithm.