Meta’s Make-A-Video AI creates videos from text

Meta has unveiled “Make-A-Video,” a new AI system that turns text into high-quality video clips. Make-A-Video builds on Meta AI’s recent generative technology research and could open new doors for artists and creators. The system learns what the world looks like from paired text-image data and how the world moves from video footage with no associated text.
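Meta has not released code or an API for Make-A-Video, but the core idea of pairing text conditioning with separate spatial (per-frame) and temporal (cross-frame) layers can be illustrated with a toy sketch. The model below is purely hypothetical and uses made-up module names; it only shows how a pooled text embedding might be broadcast across frames and tied together with temporal attention, not Meta’s actual architecture.

```python
# Illustrative sketch only: a toy text-conditioned video generator that combines
# per-frame spatial layers with cross-frame temporal attention.
# Not Meta's Make-A-Video implementation; all names here are hypothetical.
import torch
import torch.nn as nn

class ToyTextToVideo(nn.Module):
    def __init__(self, text_dim=64, channels=16, frames=8, size=32):
        super().__init__()
        self.frames, self.size, self.channels = frames, size, channels
        # Map a pooled text embedding to an initial latent shared by all frames.
        self.text_to_latent = nn.Linear(text_dim, channels * size * size)
        # Spatial layer: applied to each frame independently.
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Temporal layer: attention across the frame axis keeps frames coherent.
        self.temporal = nn.MultiheadAttention(embed_dim=channels, num_heads=4, batch_first=True)
        self.to_rgb = nn.Conv2d(channels, 3, kernel_size=1)

    def forward(self, text_embedding):
        b = text_embedding.shape[0]
        # Broadcast the text latent to every frame: (b, frames, c, h, w).
        x = self.text_to_latent(text_embedding).view(b, 1, self.channels, self.size, self.size)
        x = x.expand(b, self.frames, -1, -1, -1).contiguous()
        # Spatial pass, frame by frame.
        x = self.spatial(x.view(b * self.frames, self.channels, self.size, self.size))
        x = x.view(b, self.frames, self.channels, self.size, self.size)
        # Temporal pass: each pixel location's frame sequence becomes a token sequence.
        tokens = x.permute(0, 3, 4, 1, 2).reshape(b * self.size * self.size, self.frames, self.channels)
        tokens, _ = self.temporal(tokens, tokens, tokens)
        x = tokens.reshape(b, self.size, self.size, self.frames, self.channels).permute(0, 3, 4, 1, 2)
        # Decode each frame to RGB: output shape (b, frames, 3, h, w).
        video = self.to_rgb(x.reshape(b * self.frames, self.channels, self.size, self.size))
        return video.view(b, self.frames, 3, self.size, self.size)

# Usage: one fake pooled text embedding -> an 8-frame 32x32 clip tensor.
model = ToyTextToVideo()
fake_text = torch.randn(1, 64)
print(model(fake_text).shape)  # torch.Size([1, 8, 3, 32, 32])
```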

As part of its commitment to open science, Meta says it will share a research paper and a demo experience. Separately, Meta recently rolled out new Reels tools and an “Add Yours” sticker to Instagram and Facebook.

Make-A-Video: An AI system that generates videos from text

The development of tools that enable quick and simple content creation is one way that generative AI research is advancing creative expression. Make-A-Video can bring imagination to life, producing one-of-a-kind videos with vivid colours, characters, and settings from just a few words or lines of text. The system can also take existing videos and generate new variations of them.

Make-A-Scene

Make-A-Video follows the release of Make-A-Scene earlier this year, a multimodal generative AI method that gives users more control over the content they generate. With Make-A-Scene, Meta showed how anyone can turn words, lines of text, and freeform sketches or doodles into photorealistic graphics and artwork fit for a children’s book.

Regarding the new AI system, Meta said:

We want to be thoughtful about how we build new generative AI systems like this. Make-A-Video uses publicly available datasets, which adds an extra level of transparency to the research. We are openly sharing this generative AI research and results with the community for their feedback, and will continue to use our responsible AI framework to refine and evolve our approach to this emerging technology.
