
Stability AI steps into a new gen AI dimension with Stable Video 4D

Stability AI is expanding its growing roster of generative AI models, quite literally adding a new dimension with the debut of Stable Video 4D. While there is a growing set of gen AI tools for video generation, including OpenAI’s Sora, Runway, Haiper and Luma AI among others, Stable Video 4D is something a bit different.

Stable Video 4D builds on the foundation of Stability AI’s existing Stable Video Diffusion model, which converts images into videos. The new model takes this concept further by accepting video input and generating multiple novel-view videos from eight different perspectives. “We see Stable Video 4D being used in movie production, gaming, AR/VR, and other use cases where there is a need to view dynamically moving 3D objects from arbitrary camera angles,” Varun Jampani, team lead for 3D Research at Stability AI, told VentureBeat.

This isn’t Stability AI’s first foray beyond the flat world of 2D space. In March, the company announced Stable Video 3D, enabling users to generate short 3D videos from an image or text prompt. Stable Video 4D goes a significant step further. While 3D, that is, three dimensions, is commonly understood as imagery with depth, 4D is perhaps not as universally understood. Jampani explained that the four dimensions are width (x), height (y), depth (z) and time (t). That means Stable Video 4D can view a moving 3D object from various camera angles as well as at different timestamps. “The key aspects that enabled Stable Video 4D are that we combined the strengths of our previously-released Stable Video Diffusion and Stable Video 3D models, and fine-tuned it with a carefully curated dynamic 3D object dataset,” Jampani explained.

Full report: Stability AI unveils Stable Video 4D, a model based on the existing Stable Video Diffusion model that takes video input and generates videos from eight perspectives.

Tagged: AI AI Innovation