Launch HN: Eggnog (YC W24) – AI videos with consistent characters

Hi HN, we’re Jitesh and Sam, and we’re building Eggnog: https://www.eggnog.ai, an AI video platform where your characters look the same across every scene. Eggnog lets you create characters, use them in scene generations, and share them with others for remixing. Here are some videos made with Eggnog:

Fanfiction: https://www.reddit.com/r/harrypotterfanfiction/comments/1bn6... (includes some sounds from outside Eggnog)

Comedy: https://x.com/jitsvm/status/1771609353725919316?s=20

Post-apocalyptic vibes: https://x.com/saucebook/status/1771212617601659279?s=20

We got into making funny AI videos over the last year, but were annoyed that the characters looked different in every scene. That inconsistency made it harder to make cool videos and harder for our friends to follow the plot.

Diffusion models, like the ones behind AI video, generate by starting from random noise and iteratively adding detail. So the little things that make a character recognizable will almost always come out differently across generations, no matter how many tricks you add to the prompt. For instance, if you want your wizard to have a red hat, it might be crooked in one generation and straight in the next.
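To see the problem concretely, here's a toy sketch (not Eggnog's code) using the open-source diffusers library with an assumed Stable Diffusion checkpoint: the same prompt, denoised from two different random seeds, renders the hat differently each time.

    import torch
    from diffusers import StableDiffusionPipeline

    # Toy illustration of seed-dependent detail drift; the checkpoint
    # name is an assumption, not the model Eggnog actually uses.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    prompt = "a wizard wearing a red hat"
    for seed in (0, 1):
        gen = torch.Generator(device="cuda").manual_seed(seed)
        image = pipe(prompt, generator=gen).images[0]
        image.save(f"wizard_seed{seed}.png")  # hat shape and tilt will differ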

Eggnog makes characters consistent using Low-Rank Adaptation (LoRA). LoRA takes in a set of images of a character in different scenarios and uses those images to teach the model that character as a concept. We do this by taking the single prompt a user writes for a character (e.g., an ancient Greek soldier with dark hair and a bushy beard) and turning it into a training set of images of that character in different poses, shot from different angles. Once the character is trained into the model, the user can invoke that character concept in a prompt and get consistent generations about 80% of the time.
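For the curious, here's a minimal sketch of what this training step can look like, assuming a Stable Diffusion base model via the open-source diffusers and peft libraries. The checkpoint name, LoRA rank, trigger token ("sks"), step count, and the sample_batch helper are all illustrative assumptions, not Eggnog's actual pipeline.

    import torch
    import torch.nn.functional as F
    from diffusers import DDPMScheduler, StableDiffusionPipeline
    from peft import LoraConfig, get_peft_model

    BASE = "runwayml/stable-diffusion-v1-5"  # assumed base model
    pipe = StableDiffusionPipeline.from_pretrained(BASE).to("cuda")
    noise_sched = DDPMScheduler.from_pretrained(BASE, subfolder="scheduler")

    # Freeze the base model and train only small low-rank adapters
    # on the UNet's attention projections.
    pipe.unet = get_peft_model(
        pipe.unet,
        LoraConfig(r=8, lora_alpha=8,
                   target_modules=["to_q", "to_k", "to_v", "to_out.0"]),
    )
    opt = torch.optim.AdamW(
        [p for p in pipe.unet.parameters() if p.requires_grad], lr=1e-4
    )

    # "sks" is a rare trigger token the adapter binds to the character.
    with torch.no_grad():  # the text encoder stays frozen
        ids = pipe.tokenizer(
            "a photo of sks, an ancient greek soldier",
            padding="max_length",
            max_length=pipe.tokenizer.model_max_length,
            return_tensors="pt",
        ).input_ids.to("cuda")
        text_emb = pipe.text_encoder(ids)[0]

    for step in range(400):  # a few hundred steps is typical per character
        imgs = sample_batch()  # hypothetical helper: [B, 3, 512, 512] in [-1, 1]
        with torch.no_grad():  # the VAE stays frozen too
            latents = pipe.vae.encode(imgs).latent_dist.sample()
            latents = latents * pipe.vae.config.scaling_factor
        noise = torch.randn_like(latents)
        t = torch.randint(0, noise_sched.config.num_train_timesteps,
                          (latents.shape[0],), device="cuda")
        noisy = noise_sched.add_noise(latents, noise, t)
        pred = pipe.unet(
            noisy, t,
            encoder_hidden_states=text_emb.repeat(len(imgs), 1, 1),
        ).sample
        loss = F.mse_loss(pred, noise)  # standard epsilon-prediction objective
        loss.backward()
        opt.step()
        opt.zero_grad()

After training, prompting with the trigger token (e.g., "sks riding a horse at sunset") pulls the learned character concept into the generation.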

There is still a lot of room to make Eggnog generations more consistent and controllable. Generations sometimes come out with the wrong gender, miss key details of the costume, or fail in a long tail of other ways. We also often struggle to control the character's exact body movement. We're planning to address these cases by further optimizing the prompt that invokes the character concept and by using new open-source video models that bake in 3D representations of humans.

The other fun thing about making characters with Eggnog is that you can share them. We already made one San Francisco “American Psycho” video that got over 100k views on Twitter (https://x.com/jitsvm/status/1766987382916894966?s=20). Then we expanded on the SF universe by making another video with the same character and a new friend for him (https://x.com/SamuelMGPlank/status/1767405784986718406?s=20). Eventually, you’ll be able to create and remix all the components of a good video—the characters, the costumes, the sets, and the sounds will all be part of a library of assets built up by the Eggnog community.

Eggnog is free to use, and you can try it out in the playground: https://www.eggnog.ai/playground. If you’re looking for some inspiration, you can try using the character “herm” waving a glowing wand or the character “lep” walking down a Dublin street. We’ll make money eventually by showing ads to viewers who come to Eggnog to watch AI videos.

We’re really excited to see all the fun videos and characters people are making with Eggnog, and we’re looking forward to hearing what you all think!

