Liling Tan
@alvations
Code, geek, game
We need more of these!! #llm #nlproc We've come a long way from hpcwire.com/2012/06/28/sup…
Veo 3 room explosions 🤯 Turns out this works for making themed rooms starring brands and characters! I spent several hours and tons of credits optimizing this prompt so you don't have to 👇
This is super cool. #llm #ethicalhacking huggingface.co/papers/2507.09…
Spoiler #nlproc . . . . . . . . . . . . . . . . . . "Size doesn't always matter: A 400M encoder beats a 1B decoder on classification tasks, while a 400M decoder beats a 1B encoder on generation tasks."
Some of the ModernBERT team is back with new encoder models: Ettin, ranging from tiny to large: 17M, 32M, 68M, 150M, 400M & 1B parameters. They also trained decoder models & checked if decoders could classify & if encoders could generate. Details in 🧵: