A team of researchers has developed a simulator capable of creating highly realistic environments for training self-driving vehicles. Scientists at the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) have released the VISTA 2.0 engine as open source so other researchers can teach their self-driving cars how to operate in real-world scenarios, independent of real-world conditions.

Data set limitations
VISTA 2.0, the simulation engine developed by researchers at CSAIL, isn't the first hyper-realistic driving simulation trainer for AI. “Today, only companies have software that simulates the type and capabilities of a VISTA 2.0-like environment, and that software is proprietary,” said Daniela Rus, MIT professor and director of CSAIL.
Rus added that with the release of VISTA 2.0, other researchers will finally have access to powerful new tools for researching and developing autonomous vehicles. But unlike similar models, VISTA 2.0 has a unique advantage: it's built from real-world data while still being photorealistic.
The team of scientists built on the foundations of their previous engine, VISTA, using the data available to them to render realistic simulations. This lets them enjoy the benefits of real data points while also creating photorealistic simulations for more complex training.
It also helps the AV's AI train in various complex situations, such as overtaking, following, negotiating, and multi-agent scenarios, all in real time in a realistic environment. The hard work shows immediate results: AVs trained with VISTA 2.0 proved more robust than those trained on previous models using only real-world data.
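The core idea described above, replaying real-world log data and perturbing it to synthesize new scenarios for training, can be sketched in a few lines. The sketch below is purely illustrative: the class and function names are hypothetical and are not VISTA 2.0's actual API; it only demonstrates the general pattern of deriving varied training scenarios from a single logged trajectory.

```python
import random

# Hypothetical sketch of a data-driven simulator: it replays a logged
# trajectory (real data points) and perturbs it to generate new training
# scenarios. Names are illustrative, NOT VISTA 2.0's actual API.

SCENARIOS = ["overtaking", "following", "negotiating", "multi_agent"]

class ReplaySimulator:
    """Replays real-world log data, perturbing it into novel rollouts."""

    def __init__(self, logged_positions, seed=0):
        self.log = logged_positions      # real data points (e.g. lane positions)
        self.rng = random.Random(seed)   # seeded for reproducible rollouts

    def rollout(self, scenario, lateral_offset=0.5):
        """Return a (scenario, trajectory) pair with random lateral noise."""
        perturbed = [p + self.rng.uniform(-lateral_offset, lateral_offset)
                     for p in self.log]
        return scenario, perturbed

def collect_training_runs(sim, episodes=4):
    """Cycle through scenario types, gathering one rollout per episode."""
    return [sim.rollout(SCENARIOS[i % len(SCENARIOS)])
            for i in range(episodes)]

sim = ReplaySimulator(logged_positions=[0.0, 1.0, 2.0, 3.0])
runs = collect_training_runs(sim)
print(len(runs), runs[0][0])  # number of rollouts and the first scenario name
```

Each rollout reuses the same real log, so the trainer gets the grounding of real data plus the variety of synthesized scenarios, which is the advantage the CSAIL team attributes to their engine.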