MIT, Microsoft create prototype that taps humans to pinpoint AI vulnerabilities
Researchers at the Massachusetts Institute of Technology (MIT) and Microsoft have developed a model that identifies flaws in artificial intelligence (AI) systems, according to a report in MIT News.
The most immediate applications of the model could be in driverless vehicles and autonomous robots, the report said.
Driverless cars are put through extensive simulations to prepare them for the road. Even so, the AI system can make a wrong assessment in the real world, simply because its sensors cannot differentiate between two distinct scenarios. The new model adds a human element to these training schedules: a human analyses the system's behaviour and pinpoints any problems.
Simulation data and human feedback are then fed into a machine-learning process.
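As a rough illustration of that step, the sketch below shows how simulation rollouts and human corrections might be paired into a single labelled dataset. It is a minimal reading of the report, not the researchers' code; every name in it (Step, merge_feedback) and the toy data are hypothetical.

```python
# Minimal sketch, not the researchers' code: pair each simulation step
# with a human judgement so both sources feed one learning process.
# All names (Step, merge_feedback) and the toy data are hypothetical.
from dataclasses import dataclass

@dataclass
class Step:
    state: tuple          # observed state features
    action: str           # action the AI policy took in that state
    human_flagged: bool   # True if a human marked the action as an error

def merge_feedback(sim_steps, human_flags):
    """Combine (state, action) pairs from simulation with a parallel
    list of human error judgements into one labelled dataset."""
    return [Step(state, action, flagged)
            for (state, action), flagged in zip(sim_steps, human_flags)]

# Two toy steps; the human flags the second action as a mistake.
dataset = merge_feedback(
    [((0.2, 0.9), "continue"), ((0.8, 0.1), "continue")],
    [False, True],
)
print(dataset)
```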
The study said that the researchers first validated their prototype in video games, with a human constantly correcting the mistakes made by the AI character. The next step would be to incorporate the model into existing training schedules for autonomous cars.
“Many times, when these systems are deployed, their trained simulations don’t match the real-world setting and they could make mistakes, such as getting into accidents. The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors,” Ramya Ramakrishnan, author of the study, told MIT News.
The researchers said that although traditional training methods do incorporate human feedback, that feedback is used only to update the system's actions, not to identify its blind spots.
The paper said that the model's first step is simulation training, which produces a policy mapping each situation to an action. The system is then deployed in the real world, where humans provide error signals in situations where its actions are unacceptable.
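Read at a high level, those error signals amount to labels for a supervised learner. The sketch below illustrates that idea only; the study's actual method is more involved, and the choice of scikit-learn's logistic regression, the toy feature vectors and the 0.5 threshold are all assumptions of ours.

```python
# Illustrative sketch only, not the study's implementation: treat human
# error signals as labels and learn to predict "blind spot" states.
# scikit-learn, the toy features and the 0.5 threshold are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

# State features the trained policy encountered in the real world...
states = np.array([
    [0.2, 0.9],
    [0.8, 0.1],
    [0.7, 0.2],
    [0.1, 0.8],
])
# ...and the human's error signals: 1 = action unacceptable, 0 = fine.
error_signals = np.array([0, 1, 1, 0])

# Fit a classifier mapping state features to blind-spot probability.
clf = LogisticRegression().fit(states, error_signals)

# At run time, the system could defer to a human whenever the
# predicted blind-spot probability is high.
risk = clf.predict_proba([[0.75, 0.15]])[0, 1]
print(f"blind-spot probability: {risk:.2f}")
if risk > 0.5:
    print("defer to human")
```

The key difference from traditional feedback, as the researchers note above, is that the human signal here trains a separate error predictor rather than directly updating the policy's actions.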