Google's DeepMind division has devised a new way to test an artificial intelligence's (A.I.'s) cognitive abilities, based on classical laboratory experiments from psychology that have long been used on humans. The tests are 3D game-world recreations of those same experiments, remade so that a virtual participant can take part. Since an A.I. doesn't navigate the world the way people do, the company has engineered its own simulated psychology laboratory, called Psychlab. Where humans would interact with the experiments via mouse input, the A.I. inhabits a virtual body and uses its directional gaze to select items on a screen in a virtual setting. As in real-world psychology labs, the A.I.'s responses and interactions are gauged across several tests. Those include continuous recognition of a growing list of items, cued recall, change detection, visual acuity, contrast sensitivity, Glass pattern detection, visual search, random dot motion discrimination, and the ability to track multiple objects over time.
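The gaze-based selection described above can be illustrated with a minimal sketch. This is not Psychlab's actual API; it is a hypothetical, simplified model in which an agent rotates its gaze toward a target each step and "selects" the target once its gaze is close enough, with all function names and parameters invented for illustration.

```python
def gaze_trial(target_angle, gaze_angle=0.0, turn_speed=5.0,
               threshold=2.0, max_steps=100):
    """Hypothetical sketch of a gaze-selection trial: rotate the agent's
    gaze toward a target angle (in degrees) until it is within a small
    threshold, which counts as selecting the on-screen item."""
    for step in range(1, max_steps + 1):
        error = target_angle - gaze_angle
        if abs(error) <= threshold:
            return step  # target selected on this step
        # Turn toward the target, capped at the per-step turn speed.
        gaze_angle += max(-turn_speed, min(turn_speed, error))
    return None  # target was never selected within the step budget

# A target 30 degrees to the right takes several turn steps to reach.
steps_needed = gaze_trial(30.0)
```

Scoring how many steps the agent needs to fixate a target is one simple way such a lab could quantify response speed across trials.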
DeepMind says that although testing of human cognition is still imperfect, and concepts such as consciousness and sentience remain poorly understood, these tests will help gauge the abilities of a given A.I. They will also allow researchers and developers working with A.I. to train agents more effectively for specific tasks. Through the research, the company says it also contributes to the study of reinforcement learning, a field of machine learning associated with behavioral psychology. Primarily, the study concerns how deep learning agents - A.I. - should respond to stimuli in their virtual or real environment to maximize a pre-programmed or machine-learned notion of a reward. DeepMind's deep reinforcement learning agent, UNREAL, was found to learn more quickly about large target stimuli than about smaller ones. That shows how tools like Psychlab could have far-reaching uses in the creation of A.I. across a wide variety of applications. Specifically, the insight into how an A.I. thinks, generated by the tests, can be used to bolster performance or alter code to suit the specific circumstances under which the machine learning will be deployed - giving developers and researchers a way to understand whether their agents will interact well with humans, as well as how complex their thought processes are.
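The reward-maximization idea described above can be sketched with a toy example. This is not DeepMind's UNREAL agent; it is a minimal, self-contained illustration of the reinforcement learning principle, using an epsilon-greedy agent on a two-action bandit task with invented reward values, where the agent gradually shifts toward the action that pays more.

```python
import random

def train_bandit(rewards, episodes=500, epsilon=0.1, alpha=0.1, seed=0):
    """Toy reinforcement learning loop: the agent repeatedly picks an
    action (its response to a stimulus), observes a reward, and nudges
    its value estimate for that action toward the observed reward."""
    rng = random.Random(seed)
    q = [0.0] * len(rewards)  # estimated value of each action
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.randrange(len(rewards))            # explore
        else:
            a = max(range(len(rewards)), key=q.__getitem__)  # exploit
        q[a] += alpha * (rewards[a] - q[a])  # move estimate toward reward
    return q

# The second (higher-paying) action ends up with the higher estimate.
q = train_bandit([0.2, 1.0])
best = q.index(max(q))
```

The loop captures the core of the description in the text: behavior is shaped purely by which responses yield more reward, with no other supervision.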
Best of all, following its own study with Psychlab, the search giant's subsidiary has decided to make its tools open source so that any A.I. researcher can test their machine learning agents. Whether this will have a big impact on the various implementations of A.I. in the real world remains to be seen. However, it's hard to imagine the experiments ultimately not taking the industry at least a few steps forward.