# Genmo Mochi 1 Preview
Note: The thumbnail was generated using the Mochi-1-Preview model with the help of ComfyUI. This article was created with the help of Llama 3.1:8B running on Ollama.
Don't have time to read? Watch the video we created:
Hey there, fellow tech enthusiasts! I'm Mikhail, and welcome to my article on the latest developments in AI-generated video. Today, we're going to dive into the world of Mochi 1, an open-source Text2Video model that's taking the industry by storm.
## What You Need to Know About Mochi 1
Mochi 1 is a Text2Video model released under the Apache 2.0 license. This means anyone can use the output and derivatives of this AI without paying any monthly subscription or royalty fees. Just download and use it! However, there's a catch (there always is). To run this model, you need at least four NVIDIA H100 GPUs, which cost around $25,000 each.
## ComfyUI Mochi-1 Module
But here's the thing: someone has taken it upon themselves to create a quantized version of the model and get it running inside ComfyUI on an ordinary RTX 4090. This means you can now generate decent-quality videos in about half an hour with 50 steps, or very good quality with 200 steps in about 2 hours.
You can find more information about this amazing achievement in the [ComfyUI Mochi-1 module](https://github.com/kijai/ComfyUI-MochiWrapper) repository. This is a huge step forward for the industry, and we're excited to see where it takes us.
## The Quality: A Game-Changer?
But what about quality? We took exactly the same prompt from the long-promised SORA AI, and here's the result. As you can see, Mochi 1 is capable of producing high-quality videos that come surprisingly close to real footage.
## What's Next?
Looking further ahead, Genmo promises to release the full version of Mochi 1 by the end of this year. It will include a 720p HD version with enhanced fidelity and even smoother motion. They're also working on image-to-video capabilities, which is going to be huge for the industry.
## The Future: Integration and Innovation
We're already seeing more companies integrate Mochi 1 into their products. Heck, we're planning to add it to our new project, KraftStream. This is just the beginning of a new era in AI-generated video, and we can't wait to see what's next.
## Conclusion: The Future Is Now!
In conclusion, Mochi 1 is an open-source Text2Video model that's changing the game for the industry. With its high-quality output and royalty-free license, it's a must-try for anyone looking to get into AI-generated video.
So, what are you waiting for? Try it out yourself and see the magic of Mochi 1 in action. And if you're interested in learning more about this technology, be sure to check out our blog at [blog.wiregate.io](https://blog.wiregate.io).
Don't forget to like and subscribe for more updates on the latest developments in AI-generated videos!
## Prompt
```
What you are looking at right now is not what you think!
This moment never happened in real life.
This scene is completely artificial.
Want to learn more?
Stay with me and I'll show you "how deep the rabbit hole goes."
Hi there! My name is Niki.
Today is October 30, 2024, and you are watching Breaking Brains.
About a week ago, a startup company called Genmo released an open-source Text2Video model called mochi-1-preview.
And as you can see, the first-impression results are not good… for 3D animators, movie actors, and movie editors.
I would strongly recommend they start looking for a new job, as this little AI rocks!
Let's dig deeper to understand why this model is so important and what makes it so special.
Most probably you've already seen some AI-generated videos and are now thinking: "What's the fuss about? Nothing new!"
First and foremost Mochi 1 is released under Apache 2 license.
This means that one can use the output and derivatives of this little AI without paying any monthly subscription or royalty fees.
Just download and use it!
Hi Stability AI!
However, there is a catch.
There is always a catch.
To run this model you need at least four NVIDIA H100 GPUs, at about 25 grand each.
Genmo's Mochi 1 model lets you generate videos up to 5 seconds long at 480p resolution.
Thanks to this guy, you don't have to sell your kidney to try Mochi 1.
So what he did is literally brain-breaking.
On the same day that mochi-1-preview was released, he created a quantized version of the model and forced it to run inside of the ComfyUI on an ordinary RTX 4090.
Which means it now takes about half an hour to generate a decent-quality Alien-versus-Mario dream with only 50 steps.
Or very good quality with 200 steps in about 2 hours.
Anyway... Performance is great!
What about quality?
We took exactly the same prompt from the long-promised SORA AI and here is the result.
This meme will explain everything.
By the way, almost everything that you saw was also generated by this little AI.
What's next?
Looking further these guys are promising to release a full version of Mochi 1 by the end of this year.
It will include a 720p HD version with enhanced fidelity and even smoother motion.
By the way, they are working on image-to-video capabilities.
That's one small step for AI, one giant leap for mankind.
We now will see that more and more companies are integrating this little AI into their products.
Heck… We are already planning to add it to our new project called KraftStream.
You don't have to believe us.
Just try it yourself.
Like the content we create? Check out our blog.
Link in the description.
Don't forget to like and subscribe!
See you in the next one!
Given above script for the video write a LinkedIn article at least 500 words.
Include a lot of emoji's in the text to make it more rich.
You must write in Markdown syntax.
You must use `##` for marking the paragraphs.
You must use brackets and parenthesis for the links.
Author of the text is Mikhail Shevtsov.
Link to the blog: https://blog.wiregate.io
You must include following links in the article:
- https://github.com/kijai/ComfyUI-MochiWrapper - ComfyUI Mochi-1 module
```