Aritra Roy Gosthipaty
@ariG23498
MLE @ 🤗 Hugging Face | @GoogleDevExpert in ML | ex-@PyImageSearch & @weights_biases | ex-@huggingface fellow | ex-contractor MLE @ Keras |
Bookmarked! hf.co/papers/trending
BREAKING: we've partnered with @metaai and @paperswithcode to build a successor to Papers with Code (which was sunsetted yesterday). PWC, founded by @rbstojnic and @rosstaylor90, has been an invaluable resource for AI scientists and engineers over the years (and an inspiration…
I swear there was a time I could look at a paper, read the author list, and smile because I knew all of them from another paper, of course. It was a good get-together with the authors. Either I am not reading enough papers, or the research field has very high entropy.
After using @FFmpeg for a while now, I thought I'd write a quick blog post detailing everything you can do with it. Consider this a quick read that will try to convince you why you should learn how to use @FFmpeg as an ML Engineer and why it's the most important toolbox for…
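As a taste of the ML use case, here is a minimal sketch (file names and the 1-fps sampling rate are placeholder choices, not from the post) that assembles a standard FFmpeg frame-extraction command from Python; `-i` and `-vf fps=…` are stock FFmpeg flags:

```python
# Sketch: build an FFmpeg command that samples frames from a video
# into numbered JPEGs (a common step when preparing vision datasets).
# Paths below are placeholders.
import shlex

def frame_extract_cmd(video_path, out_pattern="frames/%05d.jpg", fps=1):
    """Return the FFmpeg argv for sampling `fps` frames per second."""
    return ["ffmpeg", "-i", video_path, "-vf", f"fps={fps}", out_pattern]

cmd = frame_extract_cmd("clip.mp4")
print(shlex.join(cmd))  # → ffmpeg -i clip.mp4 -vf fps=1 frames/%05d.jpg
```

Building the argv as a list (rather than one shell string) keeps it safe to pass straight to `subprocess.run(cmd)` without shell quoting issues.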
Fine-tune Gemma3n on videos with audio inside, on a Colab A100 🔥 Just dropped the notebook where you can learn how to fine-tune Gemma3n on images+audio+text at the same time!
Not every developer task requires a general-purpose LLM. We’re betting on specialized focal LLMs – smaller, faster, and focused. Join @jetbrains and @huggingface for a livestream on how focal models like Mellum will shape the industry. 📅 July 29, 6 pm CET 👉 Save your spot:…
The team behind the visual deserves a raise.
Our new state-of-the-art AI model Aeneas transforms how historians connect the past. 📜 Ancient inscriptions often lack context – it's like solving a puzzle with 90% of the pieces lost to time. Aeneas helps researchers interpret and situate inscriptions in their past context. 🧵
re-search is tiring and exhausting; it is not the shiny job that I used to think it was. i have newfound respect for all the researchers in my TL, thank you for doing what you do
VLMs + vLLM = 🔥 `vllm serve any-vlm-model --model_impl transformers` Read more: blog.vllm.ai/2025/04/11/tra…
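Once a model is served this way, vLLM exposes an OpenAI-compatible endpoint. A minimal sketch of the request side (the model name, image URL, and helper below are placeholders, not from the post):

```python
# Sketch: assemble an OpenAI-style /v1/chat/completions payload for a
# vision-language model served by vLLM. The model name and image URL
# are placeholders for illustration.
import json

def build_chat_payload(model, prompt, image_url=None):
    """Return an OpenAI-compatible chat payload with optional image input."""
    content = [{"type": "text", "text": prompt}]
    if image_url:
        content.append({"type": "image_url", "image_url": {"url": image_url}})
    return {"model": model, "messages": [{"role": "user", "content": content}]}

payload = build_chat_payload(
    "any-vlm-model",
    "Describe this image.",
    image_url="https://example.com/cat.png",
)
print(json.dumps(payload, indent=2))
```

The same payload can then be POSTed to the running server (by default at `http://localhost:8000/v1/chat/completions`) with any HTTP client or the `openai` SDK.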