Steven Liu
@stevhliu
docs @huggingface 🤗 | sucking at something is the first step towards being sorta good at something
Super excited to start my first day as a technical writer @huggingface! Feels like the first day of school all over again 🤗
my #1 tip for writing docs is to avoid "You can...". It's indirect and burdens the reader with deciding whether to do the thing or not: "You can increase inference speed with torch.compile." Highlight the action and benefit instead: "Use torch.compile to increase inference speed."
Thrilled to finally share what we've been working on for months at @huggingface 🤝@pollenrobotics Our first robot: Reachy Mini A dream come true: cute and low priced, hackable yet easy to use, powered by open-source and the infinite community. Tiny price, small size, huge…
every model release should include a blueprint like this! 🤩
Everything you need to know is in our engineering blueprint
Holy... `transformers` reached 1B downloads 😭 thanks everyone for making this possible, what an amazing community
HF team added an up-to-date semantic search tool on all of @huggingface products (hub, inference) & libraries (transformers, diffusers). Below, I'm asking about the new Xet storage technology that makes it faster to upload/download models from the hub. Go to your MCP settings to…
I have bittersweet news to share. 😢 We are closing down huggingchat for now. Huggingchat was launched in April 2023 when ChatGPT was just 5 months old and it was still quite hard to deploy a similar service. If you are looking for a cool alternative interface you should take…
BOOOM! transformers now has a baked-in HTTP server with an OpenAI-spec-compatible API. Launch it with `transformers serve` and connect your favorite apps. Here I'm running @jandotai with local transformers and hot-swappable models. There is preliminary tool call support as well!
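Since the server speaks the OpenAI chat completions spec, any OpenAI-style client can talk to it. A minimal sketch of building such a request — the host, port, and model id below are illustrative assumptions, not documented defaults:

```python
import json

# Assumed local address for a `transformers serve` instance (adjust to your setup).
BASE_URL = "http://localhost:8000/v1"

def chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-spec chat completions payload for the local server."""
    return {
        "model": model,  # hot-swappable: any model id the server can load
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

# POST this JSON to f"{BASE_URL}/chat/completions" with any HTTP client,
# or point an OpenAI SDK at BASE_URL and call it the usual way.
payload = chat_request("Qwen/Qwen2.5-0.5B-Instruct", "Hello!")
print(json.dumps(payload))
```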
Pour yourself some wine and watch me speak for the first time ever (if I flop, at least your wine won't) about how Transformers/Diffusers reduce memory usage for really big models!
We got a visitor to the @huggingface office today. @pollenrobotics & @LeRobotHF meetings!
Do you want to quickly measure your connection speed to the @huggingface infrastructure? Read on 🧵 We have been improving our infrastructure significantly in the past few months. We have started: · deploying smaller points of presence closer to all users across the world,…
For years, we've been saying that bigger isn't always better for AI and that smaller specialized models are usually faster, cheaper and more accurate for your specific constraints. So super happy to release the long-overdue capability of finding the best model based on size on…
I have bittersweet news to share. Yesterday we merged a PR deprecating TensorFlow and Flax support in transformers. Going forward, we're focusing all our efforts on PyTorch to remove a lot of the bloating in the transformers library. Expect a simpler toolkit, across the board.
Super proud to release our most fun AI experiment to date: AISheets 🗒️ Thousands of AI models meet spreadsheets. Build, analyze, and automate your data using open-source LLMs in one slick, fast and simple app. Surprisingly powerful! 🚀 Try it: hf.co/aisheets
The Transformers library is undergoing its largest pivot to date 🙌 It now cements its role as the central model definition, irrespective of the backend and runner. One ground truth to bring more reliability across the ecosystem. Why is this important?
This is big: the latest transformers release now automatically switches to optimized kernels when the hardware permits ❤️🔥 We integrate the `kernels` library for the most popular models (Llama-like), making use of the most popular community kernels available on the HF Hub.