
Movies Of The Future Are Here Now With Meta Movie Gen

October 20, 2024 · 4 Mins Read


Don’t look now, but Meta, née Facebook, has pioneered yet another leg of the journey toward immersive generated video: the ability to create realistic footage out of very little at all.

This month, insiders are unveiling news on the research that Meta teams have done into video generation, personalized video, generated audio, and precise video editing.

As Meta leaders point out in the relevant news release, this is the “third wave” of innovation on this track, with the ‘Make-A-Scene’ models and the Llama Image foundation models paving the way as the first and second waves, respectively.

“Similar to previous generations, we anticipate these models enabling various new products that could accelerate creativity,” spokespersons write.

They also mention that the technology should be assistive, rather than autonomous:

“While there are many exciting use cases for these foundation models, it’s important to note that generative AI isn’t a replacement for the work of artists and animators,” the authors write. “We’re sharing this research because we believe in the power of this technology to help people express themselves in new ways and to provide opportunities to people who might not otherwise have them. Our hope is that perhaps one day in the future, everyone will have the opportunity to bring their artistic visions to life and create high-definition videos and audio using Movie Gen.”

Meta Movie Gen: Made for Deepfakes?

The makers of this new technology explain its personalized use this way:

“We take as input a person’s image and combine it with a text prompt to generate a video that contains the reference person and rich visual details informed by the text prompt. Our model achieves state-of-the-art results when it comes to creating personalized videos that preserve human identity and motion.”

You can imagine how things like pose libraries and speech audio generators could be used to create a digital ‘you’ that walks and talks just like you do. The obvious question is how to ensure that this gets used for something productive, and not for ushering in the complete chaos of deepfakery, where you can’t trust your eyes when watching streaming media.

To the extent that these systems can eliminate the uncanny valley, we’re facing big changes in the way we live our lives. The Meta people have this to say:

“With creativity and self-expression taking charge, the possibilities are infinite.”

That’s true – but it may end up being somewhat of a double-edged sword.

Generating with the Stable Diffusion Model

Looking at this news, I was inspired to write more about how MIT CSAIL Director Daniela Rus describes AI creating novel images, and by extension, video and other media.

What these programs essentially do, she indicated, is introduce noise into a picture until the result is fully abstracted. Then the same program reverses that process to create a new picture, minus the noise. That means the new picture is constructed according to the prompts, with attributes similar to the original pictures or training data, but with its own novel design.
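To make that two-phase idea concrete, here is a minimal toy sketch of the mechanics in NumPy. This is not Meta’s model or any real diffusion network: in an actual system a trained neural network would *predict* the noise to subtract at each reverse step, whereas this toy simply records the noise it added so the reversal can be replayed exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
BETA = 0.05  # fraction of noise blended in at each step

def forward(image, steps):
    """Forward process: repeatedly blend Gaussian noise into the image
    until it is almost fully abstracted into static. Returns the noisy
    result plus the noise used at each step."""
    x = image.copy()
    noises = []
    for _ in range(steps):
        eps = rng.normal(size=x.shape)
        noises.append(eps)
        x = np.sqrt(1 - BETA) * x + np.sqrt(BETA) * eps
    return x, noises

def reverse(x, noises):
    """Reverse process: undo each blending step in reverse order.
    A real diffusion model predicts each eps from the noisy input;
    here we replay the recorded noise to show the mechanics."""
    for eps in reversed(noises):
        x = (x - np.sqrt(BETA) * eps) / np.sqrt(1 - BETA)
    return x

image = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # toy 4x4 "picture"
noisy, noises = forward(image, steps=50)
restored = reverse(noisy, noises)
print(np.allclose(restored, image))  # each step is exactly invertible
```

The creative leap in a real generator comes from swapping the recorded noise for a network’s *guesses*, steered by a text prompt: the denoised result then resembles the training data in its attributes without copying any one picture.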

It’s this ability to create ‘doubles that are not twins’ that lies at the heart of this technology, and that’s going to be extremely confusing for people who have spent centuries looking into mirrors. If the ‘you’ in the mirror moves when you don’t, you immediately get the eerie vertigo of unreality setting in. Is that what this is going to be like?

No Public Release

With all of that in mind, the company is not simply releasing Meta Movie Gen for use by the average person. That could be extremely problematic, given how many people have their own malicious or downright strange goals for this technology.

In general, we’re going to need to be pretty circumspect about how this is used – deliberate about how it gets put out there, and thoughtful about its impact on our societies. The new research shows that we’re closer to these capabilities than many people would’ve thought even just a couple of years ago. We have to figure out how to use the power of personalized video – and the capability of deepfakes – for good!


