Helm.ai launches VidGen-1 generative video model for autonomous vehicles, robots

Helm.ai is developing generative AI for training autonomous vehicles.

VidGen-1 applies generative AI to produce video sequences for training autonomous systems. Source: Helm.ai

Training machine learning models for self-driving vehicles and mobile robots is often labor-intensive because humans must annotate a vast number of images and supervise and validate the resulting behaviors. Helm.ai said its approach to artificial intelligence is different. The Redwood City, Calif.-based company last month launched VidGen-1, a generative AI model that it said produces realistic video sequences of driving scenes.

“Combining our Deep Teaching technology, which we’ve been developing for years, with additional in-house innovation on generative DNN [deep neural network] architectures results in a highly effective and scalable method for producing realistic AI-generated videos,” stated Vladislav Voroninski, co-founder and CEO of Helm.ai.

“Generative AI helps with scalability and tasks for which there isn’t one objective answer,” he told The Robot Report. “It’s non-deterministic, looking at a distribution of possibilities, which is important for resolving corner cases where a conventional supervised-learning approach wouldn’t work. The ability to annotate data doesn’t come into play with VidGen-1.”
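
As a toy illustration of the point about distributions and corner cases (not Helm.ai’s implementation), sampling from a learned distribution over scenarios means that low-probability events still appear in generated data at roughly their true rate, whereas a deterministic predictor would only ever emit the most likely outcome. The scenario names and probabilities below are invented for the sketch:

```python
import numpy as np

# Hypothetical scenario distribution: a deterministic predictor would
# always emit the most likely outcome, while a generative sampler draws
# from the full distribution, so rare corner cases are still produced.
rng = np.random.default_rng(42)
scenarios = np.array(["nominal", "cut-in", "jaywalker"])
probs = [0.90, 0.08, 0.02]

samples = rng.choice(scenarios, size=10_000, p=probs)
counts = {s: int((samples == s).sum()) for s in scenarios}
print(counts)  # the rare "jaywalker" case appears roughly 200 times
```

Training on such samples exposes a model to rare events in proportion to how often they actually occur, without anyone having to hand-label them.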


Helm.ai bets on unsupervised learning

Founded in 2016, Helm.ai is developing AI for advanced driver-assist systems (ADAS), Level 4 autonomous vehicles, and autonomous mobile robots (AMRs). The company previously announced GenSim-1 for AI-generated and labeled images of vehicles, pedestrians, and road environments for both predictive tasks and simulation.

“We bet on unsupervised learning with the world’s first foundation model for segmentation,” Voroninski said. “We’re now building a model for high-end assistive driving, and that framework should work regardless of whether the product requires Level 2 or Level 4 autonomy. It’s the same workflow.”

Helm.ai said VidGen-1 allows it to cost-effectively train its model on thousands of hours of driving footage. This in turn allows simulations to mimic human driving behaviors across scenarios, geographies, weather conditions, and complex traffic dynamics, it said.

“It’s a more efficient way of training large-scale models,” said Voroninski. “VidGen-1 is able to produce highly realistic video without spending an exorbitant amount of money on compute.”

How can generative AI models be rated? “There are fidelity metrics that can tell how well a model approximates a target distribution,” Voroninski replied. “We have a large collection of videos and data from the real world and have a model producing data from the same distribution for validation.”
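
One common family of fidelity metrics compares the statistics of features extracted from real and generated samples. The Fréchet distance between Gaussians fitted to the two feature sets, which underlies scores such as FID and FVD, is a minimal sketch of the idea; random vectors stand in for real video features here, and the article does not say which metric Helm.ai uses:

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    """Fréchet distance between Gaussians fitted to two feature sets.

    Lower values mean the generated distribution better matches the
    real one; identical distributions score near zero.
    """
    mu_r, mu_g = feats_real.mean(0), feats_gen.mean(0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # discard tiny imaginary numerical noise
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(1000, 8))
close = rng.normal(0.0, 1.0, size=(1000, 8))  # same distribution as real
far = rng.normal(3.0, 1.0, size=(1000, 8))    # shifted distribution
print(frechet_distance(real, close) < frechet_distance(real, far))  # True
```

In practice the feature vectors come from a pretrained network applied to real and generated videos, so the metric measures distributional match rather than pixel-level agreement.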

He compared VidGen-1 to large language models (LLMs).

“Predicting the next frame in a video is similar to predicting the next word in a sentence but much more high-dimensional,” added Voroninski. “Generating realistic video sequences of a driving scene represents the most advanced form of prediction for autonomous driving, as it entails accurately modeling the appearance of the real world and includes both intent prediction and path planning as implicit sub-tasks at the highest level of the stack. This capability is crucial for autonomous driving because, fundamentally, driving is about predicting what will happen next.”
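
The dimensionality gap Voroninski describes can be made concrete with a back-of-the-envelope sketch. The numbers below are illustrative, not Helm.ai’s actual model dimensions, and the toy predictor only shows the autoregressive rollout pattern, generating a video one frame at a time:

```python
import numpy as np

# Illustrative output sizes (not Helm.ai's actual model dimensions):
vocab_size = 50_000                      # next-word step: logits over a vocabulary
height, width, channels = 384, 640, 3    # next-frame step: every pixel of a frame
frame_dim = height * width * channels
print(frame_dim, frame_dim / vocab_size)  # 737280, ~14.7x more outputs per step

# Toy autoregressive rollout: the "world" is a dot drifting right one pixel
# per frame, so the ideal next-frame predictor is a one-pixel shift. Rolling
# the predictor forward generates a video, frame by frame, from a seed frame.
def predict_next(frame):
    return np.roll(frame, shift=1, axis=1)

frame = np.zeros((4, 8))
frame[2, 1] = 1.0
video = [frame]
for _ in range(3):
    video.append(predict_next(video[-1]))
assert video[3][2, 4] == 1.0  # after 3 steps the dot is at column 4
```

A real video model replaces the hand-written shift with a learned network, but the loop is the same: each generated frame is fed back in to predict the one after it, which is why accurate next-frame prediction implicitly requires modeling intent and motion.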

VidGen-1 could apply to other domains

“Tesla may be doing a lot internally on the AI side, but many other automotive OEMs are just ramping up,” said Voroninski. “Our customers for VidGen-1 are these OEMs, and this technology could help them be more competitive in the software they develop to sell in consumer cars, trucks, and other autonomous vehicles.”

Helm.ai said its generative AI techniques offer high accuracy and scalability with a low computational profile. Because VidGen-1 supports rapid generation of simulation assets with realistic behaviors, the company asserted, it can help close the simulation-to-reality, or “sim2real,” gap.

Voroninski added that Helm.ai’s model can apply to lower levels of the technology stack, not just to generating video for simulation. It could be used in AMRs, autonomous mining vehicles, and drones, he said.

“Generative AI and generative simulation will be a huge market,” said Voroninski. “Helm.ai is well-positioned to help automakers reduce development time and cost while meeting production requirements.”
