Rich Klein
@RichKleinAI
Product manager, parent, children's book author, and web dev. ⭐ Runway GEN:48 Finalist, community mod. CPP: Runway, Luma, Pika, Higgsfield, HeyGen.
A short sci-fi scene with sound made for the @GoogleLabs daily theme "Warped" using Veo 3. JSON prompt with timeline sequence below. Thanks to @IamEmily2050, @techhalla, and @icreatelife for sharing their prompt templates.🙏 I'm learning so much this week.
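For anyone unfamiliar with the technique: a timeline-sequence JSON prompt breaks the clip into timed beats the video model renders in order. The actual prompt was shared in the thread; the sketch below is purely illustrative, and every field name and value here is an assumption, not the original prompt:

```json
{
  "style": "cinematic sci-fi, warped reality",
  "audio": "low ambient hum, distorted radio chatter",
  "timeline": [
    { "time": "0-3s", "action": "camera pushes through a warped corridor" },
    { "time": "3-6s", "action": "the walls ripple and bend inward" },
    { "time": "6-8s", "action": "a silhouetted figure steps through the distortion" }
  ]
}
```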
Creator Tip: If you’re juggling a few different generative AI subscription services with monthly credit expirations, try keeping a spreadsheet of the total monthly credits and renewal dates for each service, then align those dates with your project schedules. Each tool has its own…
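The same tracking idea works as a tiny script instead of a spreadsheet. A minimal sketch, assuming each service resets its credits on a fixed day of the month; the service names and credit amounts below are illustrative, not actual plan details:

```python
from calendar import monthrange
from dataclasses import dataclass
from datetime import date

@dataclass
class Subscription:
    service: str
    monthly_credits: int
    renewal_day: int  # day of the month the credits reset

    def days_until_renewal(self, today: date) -> int:
        """Days left before this service's monthly credits expire."""
        if today.day <= self.renewal_day:
            return self.renewal_day - today.day
        # Renewal day already passed this month, so it falls in the next month.
        days_in_month = monthrange(today.year, today.month)[1]
        return days_in_month - today.day + self.renewal_day

# Illustrative numbers only -- check your own plans for real values.
subs = [
    Subscription("Runway", 625, 15),
    Subscription("Luma", 1000, 1),
]
today = date(2025, 7, 10)
# List the soonest-expiring credits first, to match against project deadlines.
for s in sorted(subs, key=lambda s: s.days_until_renewal(today)):
    print(f"{s.service}: {s.monthly_credits} credits/mo, renews in {s.days_until_renewal(today)} days")
```

Sorting by days-until-renewal puts the use-it-or-lose-it credits at the top, which is the whole point of aligning them with project schedules.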
A summer rain storm just passed through! 🌧️ A great chance to test Midjourney's new loop option.
Very excited for this! Runway References changed the way we edit images, now Aleph will change the way we edit video! 🚀
Introducing Runway Aleph, a new way to edit, transform and generate video. Aleph is a state-of-the-art in-context video model, setting a new frontier for multi-task visual generation, with the ability to perform a wide range of edits on an input video such as adding, removing…
This looks both scary and insane! 🤯
Install the new Higgsfield Browser Extension. Hover any image on the web, hit Recreate. Same pose, lighting, and outfit, but YOU are the star. Check out the instructions: higgsfield.ai/steal-chrome-e…
Watch, if you need a good laugh! 🤣
⭐️New comedic short film release⭐️ “The Interdimensional Council of Luminaries - Episode 1: The Human Problem” 🔊On! Watch til the end 😂 Please let me know what you think. Would really appreciate the likes and shares as the algo has been demotivating lately :/
AI Mode in Google Search: A Game Changer! 🔥 I'll be honest, I was a bit skeptical about AI Mode at first. I used Perplexity and ChatGPT web, but always switched back to the comfort of the blazing fast Google Search. After using AI Mode for just a few days, I can't imagine…
Making games with generative AI tools is real! I've seen a lot of examples, but I never tried myself with my own characters. There's still coding involved, but considerably less. Most of the work is preparing the assets. Here's the workflow I used for this proof-of-concept web…
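Since most of the work is preparing the assets, here's a minimal sketch of one way that step could look: scanning a folder of generated character images and writing a manifest the web game's loader can fetch. The folder layout and the "<character>_<pose>.png" naming convention are assumptions for illustration, not the workflow from the post:

```python
import json
from pathlib import Path

def build_manifest(asset_dir: str) -> dict:
    """Group generated sprite files by character name.

    Assumes a flat folder of PNGs named "<character>_<pose>.png",
    e.g. "hero_idle.png" -- this naming convention is hypothetical.
    """
    manifest: dict[str, list[str]] = {}
    for img in sorted(Path(asset_dir).glob("*.png")):
        character = img.stem.split("_")[0]  # "hero_idle" -> "hero"
        manifest.setdefault(character, []).append(img.name)
    return manifest

# Write the manifest the game's asset loader would fetch, e.g.:
# Path("assets/manifest.json").write_text(
#     json.dumps(build_manifest("assets/characters"), indent=2))
```

Keeping the grouping logic in a build step like this means the game code itself stays simple: it just fetches one JSON file and loads whatever images it lists.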
It's getting so real! 👀
Introducing Higgsfield UGC Builder 🧩 Total scene control in a single interface. Generate a full cinematic video, no editing needed. Retweet & reply 'Build' to get a full guide in your DMs. You're the director now →
Good night everyone! Remember to always swing for the fences! 🔉
And it came with instructions: help.runwayml.com/hc/en-us/artic…
Introducing Act-Two, our next-generation motion capture model with major improvements in generation quality and support for head, face, body and hand tracking. Act-Two only requires a driving performance video and reference character. Available now to all our Enterprise…
Runway Act-Two is soooo much fun! 🎬 The ability to add expressive, lip-synced dialogue to generated and existing video clips is a big win for creators. 💪 I'm finally home and started testing some of my characters with a Shakespeare monologue. The expressiveness of Act-Two…
Used my lunch break to have some fun with the new @higgsfield_ai Soul ID custom character model. I gave it multiple images of a character as input, then prompted my new character model in Soul for different outfits and locations. I used the images as inputs to the Video…