TomLikesRobots🤖
@TomLikesRobots
VFX AI guy. Artist/Photographer and former Web Dev. Personal experiments with Generative AI.
So much potential with this combination. I really think it would be possible to make a cosy animation people would watch. Midjourney V7 + Runway Gen-4 + Suno AI
Wan2.2 is now natively supported in ComfyUI on Day 0! 🔹 A next-gen video model with an MoE (Mixture of Experts) architecture featuring dual noise experts, released under the Apache 2.0 license! - Cinematic-level Aesthetic Control - Large-scale Complex Motion - Precise Semantic Compliance 📚…
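Roughly, the "dual noise experts" design means one expert model denoises the noisy early timesteps and a second takes over for the cleaner late timesteps. Here's a minimal sketch of that timestep-based routing pattern; the class, layer sizes and boundary value are illustrative assumptions, not Wan2.2's actual implementation.

```python
import torch
import torch.nn as nn

class TwoExpertDenoiser(nn.Module):
    """Toy two-expert denoiser: one expert handles high-noise (early)
    timesteps, the other low-noise (late) timesteps. Names, sizes and the
    boundary are hypothetical, not Wan2.2's real configuration."""

    def __init__(self, channels: int = 16, boundary: float = 0.875):
        super().__init__()
        # Stand-in expert backbones; the real experts would be large DiT models.
        self.high_noise_expert = nn.Conv3d(channels, channels, 3, padding=1)
        self.low_noise_expert = nn.Conv3d(channels, channels, 3, padding=1)
        # Portion of the schedule (by normalised timestep) sent to the high-noise expert.
        self.boundary = boundary

    def forward(self, latents: torch.Tensor, t: float) -> torch.Tensor:
        # t is a normalised timestep in [0, 1]: 1 = pure noise, 0 = clean.
        expert = self.high_noise_expert if t >= self.boundary else self.low_noise_expert
        return expert(latents)

# Usage: each denoising step is routed to whichever expert owns that noise level.
model = TwoExpertDenoiser()
latents = torch.randn(1, 16, 8, 32, 32)  # (batch, channels, frames, height, width)
for t in (0.95, 0.5):
    latents = model(latents, t)
```

Only one expert runs per step, so sampling costs about the same as a single model of that size while the total parameter count is doubled.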
With no ML background and just LLM help, @EarthstormAI trained a colour pop model from scratch. The resulting model is 4,000 times (!) smaller than the Flux Kontext + LoRA setup needed to get similar results on such a task.
I'm seeing this video circulating as AI generated when it was posted on TikTok by the model herself last month. It is hard to tell. Was it posted in good faith, or is this a new type of false hype? How long before it's impossible to tell? Not long, I'd guess.
NOT 1 person will believe me. This video is 100% AI generated. You can now show your products with more interactive videos. AI is truly getting scary. RT + comment “AI” and I’ll show you how (must follow for DM)
Introducing Runway Aleph, a new way to edit, transform and generate video. Aleph is a state-of-the-art in-context video model, setting a new frontier for multi-task visual generation, with the ability to perform a wide range of edits on an input video such as adding, removing…
IT'S FINALLY HERE! 🔥 Magnific Precision 🔥 First we created the first Creative Upscaler. Now, we're setting the new world standard for non-creative upscales! Perfect for photographers and creatives: just more resolution/detail without unwanted changes! Info + tutorials 🧵👇
Beyond thrilled to officially roll out "Generate in Parts" on app.scenario.com - a new standard for Image-to-3D workflows! Say goodbye to monolithic AI meshes, and hello to clean, modular 3D assets you can animate, edit, or 3D print. Let's dive in with #PartCrafter👇!
Some news: We're building the next big thing — the first-ever AI-only social video app, built on a highly expressive human video model. Over the past few weeks, we’ve been testing it in private beta. Now, we’re opening early access: download the iOS app to join the waitlist, or…
Testing how @runwayml Act Two handles motion transfer on an input video. The Veo 3 prompt handles just the scene, blocking, and a few sounds, while Act Two lets me fine-tune the timing and delivery of the voice. The body movement is from Veo 3 and the face was from an iPhone video…
Storyboarding and filming tips. A massive post with 50+ tutorials! Made by me.
🔥 We have a killer. Well, YOU have a killer! Soon: Magnific Precision. Coming to Freepik & Magnific.
"[video game] as a community theater production" may be one of the most delightful Veo 3 Fast prompts Please enjoy, in order: GTA, Pokemon, Mario Kart, The Witcher 3, Stardew Valley, Tetris, Mortal Kombat, The Sims, & Death Stranding(!) Yes, the whole prompt was the one above.
【百の屑】 -AI Generated MV- I made an MV for a Suno song with Midjourney! (The Hyakki Yagyō "night parade of a hundred demons" aesthetic is a perfect match for Midjourney!) I'd be glad if you checked out the full version too! ▶️youtu.be/SprGbmvxqfk #SunoAI #midjourney #nijijourney #AI動画
VOID → text2img: Midjourney V7 → img2vid: @LumaLabsAI (Ray2) → Edit: @capcutapp → Audio: Luma Audio