DAViD
@DAViD_aka_DAViD
Top 3 favorite things... People, Creative Environments & Research
how many worlds exist within a video?
Another incredibly practical use case: wardrobe, makeup, hair, and styling modifications. Aleph can modify and transform existing parts of your video while keeping everything else consistent.
Btw we’re hiring FDCs, aka technical artists. Usually you’re a 1/1: highly creative, a tastemaker, product oriented, and obsessed with Runway.
.@runwayml CEO @c_valenzuelab on how AI is reshaping enterprise video. "We deploy creatives inside the companies. And so, you can think about it as kind of like what Palantir does with forward deployed engineers." "Most of what you can do with Runway is non-traditional.…
Aleph can handle complex motion and moving objects. The input video was in daylight, so I asked it to turn the lights off and set the juggling balls on fire.
Runway Aleph can do so many different video transformation tasks! These are only a few of them:
Introducing Runway Aleph, a new way to edit, transform and generate video. Aleph is a state-of-the-art in-context video model, setting a new frontier for multi-task visual generation, with the ability to perform a wide range of edits on an input video such as adding, removing…
Aleph does a pretty good job of modifying specific parts of your video while retaining camera, motion and identity. Aleph is godmode.
Very excited to announce Runway Aleph. It is not only a big step forward in control and quality, but also creates a new paradigm for models that can solve many video tasks at once. The future is generalizable. Rolling out gradually over the next few days.
Please enjoy today's challenge prompt: Transition More info in @runwayml's Discord, have fun creating ❤️ discord.gg/runwayml
Learn how to create expressive character performances with Act-Two, our most advanced motion capture yet. Now supporting hand and upper body tracking as well as an even wider variety of characters and styles.
Today's challenge prompt is New York, New York Have fun y'all 🧡 More info in Discord
Act-Two Walkthrough Today at 1PM ET! Join @IXITimmyIXI live on @runwayml’s Discord for a beginner-friendly overview of Act-Two! We’ll cover the basics, break down the official guide, and explore some standout creations from the community. Come hang out, learn the ropes, and get…
🤯
Act-Two allows you to create highly expressive scenes entirely driven by the nuanced performances of your actors. The timing, delivery, body language and subtle expressions are all faithfully transposed from your driving performances to your generated characters. Learn more…
Act-Two with costumes and props/accessories really takes things to the next level.