Dmytro Okhonko 🇺🇦
@DmytroOk
Sora @OpenAI, previously @Samaya_ai and @MetaAI
Working on Sora with such an incredible and talented team is truly the experience of a lifetime. I hope you love Sora as much as we loved creating it!
Fairseq now supports sequence-to-sequence learning for speech and audio recognition tasks, enabling faster exploration and prototyping of new research ideas while offering a clear path to production. bit.ly/2WfP85X
Proud to finally see an American video model on top! Sora was developed and trained in the United States. 🇺🇸🇺🇸🇺🇸🇺🇸🇺🇸
OpenAI's Sora is now the leader in the Artificial Analysis Video Generation Arena! After 3,710 appearances, or 'battles', in the arena over the past 2 days, Sora now has an Elo score of 1,151. This places it as the clear #1 in the Artificial Analysis Video Generation Arena…
Excited to open the floodgates! It's been very inspiring to see @rohanjamin lead the product team and bring a new product surface from 0 to 1.
sora.com signups are fully open
o1 feels truly magical. Give it a try with your favorite hard reasoning prompt
Today, I’m excited to share with you all the fruit of our effort at @OpenAI to create AI models capable of truly general reasoning: OpenAI's new o1 model series! (aka 🍓) Let me explain 🧵 1/
🎉🌐 Big news from @samaya_AI. We have two shiny new offices in #London & #MountainView 🏢, staffed with an incredible team of brilliant minds💡🚀. Check out our freshly launched website at samaya.ai 🌟
My friends and family in #Ukraine are NOT safe. Here's how you can help: - Call your governments and elected officials, demand imposing severe sanctions, demand sending more help to Ukraine. - Donate to Ukraine via official channels and organizations. Violence must stop.
I’m excited to present our paper CM3: A Causal Masked Multimodal Model of the Internet, where we train a model that can do zero-shot unconditional/conditional image generation (PixelCNN/DALL-E), image infilling/captioning, entity linking/disambiguation, and summarization, all with prompting!
CM3: A Causal Masked Multimodal Model of the Internet abs: arxiv.org/abs/2201.07520 sota in zero-shot summarization, entity linking, and entity disambiguation. generate images unconditionally, conditioned on text (like DALL-E) and do captioning all in a zero-shot setting
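The causal-mask objective in the title can be sketched roughly as follows: cut one contiguous span out of the sequence, leave a sentinel in its place, and move the span to the end so a strictly left-to-right model can still generate it. The sentinel token and function names below are illustrative, not the paper's exact tokenization.

```python
import random

def causally_mask(tokens, rng=None, mask_token="<mask:0>"):
    """CM3-style causal masking (sketch): remove a contiguous span,
    leave a sentinel where it was, and append sentinel + span at the
    end so a left-to-right LM can still be trained to produce it."""
    rng = rng or random.Random(0)
    i = rng.randrange(len(tokens))
    j = rng.randrange(i + 1, len(tokens) + 1)
    span = tokens[i:j]
    return tokens[:i] + [mask_token] + tokens[j:] + [mask_token] + span
```

At inference time the same trick supports infilling: prompt with the left context, the sentinel, and the right context, and let the model generate what follows the second sentinel.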
Presenting VideoCLIP at #EMNLP2021 on Nov. 8, virtual poster II 08:30-10:00 and 6D 11:45-12:00 PST, joint work with Gargi Ghosh, Po-Yao Huang, @diametralis, @ArmenAgha, Florian Metze, @LukeZettlemoyer, @cfeichtenhofer paper arxiv.org/abs/2109.14084 code github.com/pytorch/fairse…
Recent work from our team where we collect the largest English natural QA dataset, with around 130M QA pairs. HTML supervision is really useful! With awesome co-authors Patrick Huber (first author), Barlas Oguz, @diametralis, Wen-tau Yih, @sonalsgupta, Xilun Chen
CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training abs: arxiv.org/abs/2110.07731 Using the readily available schema.org annotations, we extract around 130 million multilingual question-answer pairs, including about 60 million English data points
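The extraction step can be sketched with the standard library's HTML parser. The itemprop names below follow schema.org's Question/Answer markup; this toy parser is only a sketch, and real web pages need far more robust handling than the paper's actual pipeline surely has.

```python
from html.parser import HTMLParser

class SchemaQAParser(HTMLParser):
    """Sketch of mining schema.org QA markup, in the spirit of CCQA.
    Captures text inside elements tagged itemprop="name" (question
    title) and itemprop="text" (answer body)."""
    def __init__(self):
        super().__init__()
        self.capture = None
        self.pairs = {"question": [], "answer": []}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if a.get("itemprop") == "name":
            self.capture = "question"
        elif a.get("itemprop") == "text":
            self.capture = "answer"

    def handle_data(self, data):
        if self.capture and data.strip():
            self.pairs[self.capture].append(data.strip())
            self.capture = None
```

Because the annotation is author-provided markup rather than crowd labels, the same crawl yields supervision at web scale for free.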
Since Transformer LMs were invented, we’ve wanted them to be able to read longer inputs during inference than they saw during training. Our Attention with Linear Biases enables this, in very few lines of code, without requiring extra params or runtime ofir.io/train_short_te… 🧵⬇
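The "very few lines of code" claim can be illustrated with a minimal NumPy sketch of the ALiBi bias: a per-head linear penalty on query-key distance, added to the attention logits before the softmax. Slopes follow the paper's geometric sequence; wiring this into an actual attention layer is omitted here.

```python
import numpy as np

def alibi_bias(num_heads: int, seq_len: int) -> np.ndarray:
    """Static ALiBi bias of shape (heads, queries, keys), to be added
    to attention logits before softmax. No learned parameters."""
    # Head-specific slopes: 2^(-8*1/n), 2^(-8*2/n), ..., 2^(-8).
    slopes = np.array([2.0 ** (-8.0 * (i + 1) / num_heads)
                       for i in range(num_heads)])
    # distance[i, j] = j - i is <= 0 for keys at or left of the query,
    # so the bias penalizes far-away keys linearly (0 at the query).
    # Positions with j > i get a positive value, but a causal mask
    # hides them anyway.
    positions = np.arange(seq_len)
    distance = positions[None, :] - positions[:, None]
    return slopes[:, None, None] * distance[None, :, :]
```

Because the bias only depends on relative distance, the same formula applies unchanged to sequences longer than any seen in training, which is what enables the train-short, test-long behavior.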
Excited to introduce DEMix layers, a module with domain "experts" that make a language model modular! You can mix, add, or remove experts, enabling rapid adaptation. 🧵👇 Paper: arxiv.org/abs/2108.05036 Work with @ml_perception, @universeinanegg, @nlpnoah, and @LukeZettlemoyer
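A rough sketch of the DEMix idea, assuming one feedforward "expert" per domain routed by a domain id. The shapes, initialization, and the `add_expert` helper are illustrative placeholders, not the paper's implementation.

```python
import numpy as np

class DemixFFN:
    """Toy DEMix-style layer: a bank of feedforward experts, one per
    domain; a domain id picks which expert processes the input."""
    def __init__(self, num_domains, d_model, d_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.standard_normal((num_domains, d_model, d_hidden)) * 0.02
        self.w2 = rng.standard_normal((num_domains, d_hidden, d_model)) * 0.02

    def forward(self, x, domain):
        h = np.maximum(x @ self.w1[domain], 0.0)  # ReLU expert MLP
        return h @ self.w2[domain]

    def add_expert(self, seed=1):
        """Adapt to a new domain by appending a fresh expert; existing
        experts are untouched, so old domains are unaffected."""
        rng = np.random.default_rng(seed)
        self.w1 = np.concatenate(
            [self.w1, rng.standard_normal((1,) + self.w1.shape[1:]) * 0.02])
        self.w2 = np.concatenate(
            [self.w2, rng.standard_normal((1,) + self.w2.shape[1:]) * 0.02])
```

Removing an expert is just slicing the weight bank, which is what makes the model modular: experts can be mixed, added, or dropped without retraining the rest.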
I'm excited to announce our new pre-training paper: HTLM: Hyper-Text Pre-Training and Prompting of Language Models (arxiv.org/abs/2107.06955) where we unlock new ways of priming and automatically generating prompts by pre-training on simplified HTML.
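As an illustration of HTML-based prompting, a zero-shot summarization prompt might template the document's `<title>` element for the model to infill. The mask-token spelling and helper name below are placeholders, not necessarily the paper's exact format.

```python
def summarization_prompt(article_html: str) -> str:
    """HTLM-style structured prompt (sketch): wrap the article body in
    simplified HTML and leave the <title> masked, so infilling the
    mask reads out as a summary of the body."""
    return ("<html><head><title><mask></title></head>"
            f"<body>{article_html}</body></html>")
```

The point is that document structure itself becomes the prompt: the model learned during pre-training that titles summarize bodies, so no task-specific instruction text is needed.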
Facebook AI Research's sequence modeling library @fairseq has made its Twitter debut. Please follow for the latest updates.
Art of the selfie perfected by our @MarsCuriosity rover w/ this new pic at a Martian dune: go.nasa.gov/1PnYoL1
Read a letter I penned to my 25 year-old self: virg.in/l25