Helm.ai introduces VidGen-2, generative AI for enhanced autonomous driving video



VidGen-2 can generate multi-camera views and video up to 30 fps for training self-driving cars. Source: Helm.ai

Generative artificial intelligence could soon help self-driving cars with perception. Helm.ai today launched VidGen-2, its next-generation generative AI model for producing realistic driving video sequences.

VidGen-2 offers double the resolution of its predecessor, VidGen-1, improved realism at 30 frames per second, and multi-camera support with twice the resolution per camera. It provides automakers with a scalable and cost-effective solution for autonomous driving development and validation, claimed Helm.ai.

The Redwood City, Calif.-based company released VidGen-1 in July. It said at the time that its software could help developers of advanced driver-assist systems (ADAS), autonomous vehicles, and autonomous mobile robots (AMRs).

VidGen-2 increases video resolution

Trained on thousands of hours of diverse driving footage using NVIDIA H100 Tensor Core GPUs, VidGen-2 uses Helm.ai’s generative deep neural network (DNN) architectures and “Deep Teaching,” an unsupervised training method. It generates highly realistic video sequences at 696 x 696 resolution, double that of VidGen-1, with frame rates ranging from 5 to 30 fps.

The model also enhances video quality at 30 fps, delivering smoother and more detailed simulations. VidGen-2 can generate videos without any input prompt, or conditioned on a single image or an input video.
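Helm.ai has not published a programming interface for VidGen-2, but the three prompting modes described above map naturally onto a conditional video-generation request: unconditional generation, generation seeded by a single image, or continuation of an input clip. The Python sketch below is purely illustrative; the `GenerationRequest` structure, its field names, and the helper function are assumptions for explanation, not a documented Helm.ai API.

```python
from dataclasses import dataclass
from typing import Optional, Sequence

import numpy as np


@dataclass
class GenerationRequest:
    """Hypothetical request mirroring the prompting modes described for VidGen-2:
    unconditional, seeded by a single image, or continuing an input video."""
    resolution: tuple = (696, 696)            # per-frame resolution reported for VidGen-2
    fps: int = 30                             # the article reports a range of 5 to 30 fps
    num_frames: int = 150                     # e.g. 5 seconds of video at 30 fps
    seed_image: Optional[np.ndarray] = None   # H x W x 3 frame to condition on
    seed_video: Optional[Sequence] = None     # list of frames to continue


def build_request(mode: str) -> GenerationRequest:
    """Assemble a request for one of the three prompting modes."""
    if mode == "unconditional":
        return GenerationRequest()
    if mode == "image":
        # Condition on a single (here, blank placeholder) camera frame.
        return GenerationRequest(seed_image=np.zeros((696, 696, 3), dtype=np.uint8))
    if mode == "video":
        # Continue a short input clip; 30 placeholder frames stand in for real footage.
        clip = [np.zeros((696, 696, 3), dtype=np.uint8) for _ in range(30)]
        return GenerationRequest(seed_video=clip)
    raise ValueError(f"unknown prompting mode: {mode!r}")
```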

VidGen-2 also supports multi-camera views, generating footage from three cameras at 640 x 384 resolution each. The model ensures self-consistency across all camera perspectives, providing accurate simulation for various sensor configurations, said the company.
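As a rough illustration of what such a three-camera setup entails, the configuration sketch below lists three synchronized views at 640 x 384 each, generated jointly rather than independently. The field names, camera placements, and structure are hypothetical and chosen for illustration only; they do not reflect any published Helm.ai format.

```python
# Hypothetical description of a three-view rig like the one reported for
# VidGen-2: three streams at 640 x 384 each, generated so that geometry,
# agents, and lighting stay consistent across perspectives.
multi_camera_config = {
    "cameras": [
        {"name": "front_left",   "resolution": (640, 384), "yaw_deg": -45.0},
        {"name": "front_center", "resolution": (640, 384), "yaw_deg":   0.0},
        {"name": "front_right",  "resolution": (640, 384), "yaw_deg":  45.0},
    ],
    # All views are generated jointly rather than independently, which is what
    # the "self-consistency across camera perspectives" claim refers to.
    "jointly_consistent": True,
    "fps": 30,
}
```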


New models lead to better AI driving, says Helm.ai

VidGen-2 generates driving scene videos across multiple geographies, camera types, and vehicle perspectives, according to Helm.ai. The model not only produces highly realistic appearances and temporally consistent object motion, but it also learns and reproduces human-like driving behaviors, simulating the motions of the ego-vehicle and surrounding agents in accordance with traffic rules.

It creates a wide range of scenarios, including highway and urban driving, multiple vehicle types, pedestrians, cyclists, intersections, turns, weather conditions, and lighting variations. In multi-camera mode, the scenes are generated consistently across all perspectives.

“VidGen-2 gives automakers a significant scalability advantage over traditional non-AI simulators by enabling rapid asset generation and imbuing agents in simulations with sophisticated, real-life behaviors,” said Helm.ai. The company claimed that in addition to reducing development time and cost, its model closes the “sim-to-real” gap, offering a realistic and efficient way to broaden the scope of simulation-based training and validation.

“The latest enhancements in VidGen-2 are designed to meet the complex needs of automakers developing autonomous driving technologies,” stated Vladislav Voroninski, co-founder and CEO of Helm.ai.

“These advancements enable us to generate highly realistic driving scenarios while ensuring compatibility with a wide variety of automotive sensor stacks,” he added. “The improvements made in VidGen-2 will also support advancements in our other foundation models, accelerating future developments across autonomous driving and robotics automation.”

Founded in 2016, Helm.ai said it “reimagines AI software development to make scalable autonomous driving a reality.” Its offerings include deep neural networks for highway and urban driving, end-to-end autonomous systems, and development and validation tools powered by Deep Teaching and generative AI.

The company collaborates with global automakers on production-bound projects.

https://youtu.be/5xR7PekAQ7M
