ggml
@ggml_org
AI inference at the edge
Joined April 2025
0 Following
2K Followers
ggml Retweeted
Vaibhav (VB) Srivastav @reach_vb · May 26
You really can just do things! Use *any* Hugging Face space as an MCP server along with your local models! 🔥 Here we use Qwen 3 30B A3B with @ggml_org llama.cpp and @huggingface tiny agents to create images via FLUX powered by ZeroGPU ⚡ It's quite a bit crazy to see local…
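A minimal sketch of just the local-model half of that setup, assuming llama.cpp's llama-server is running Qwen 3 30B A3B locally on port 8080 and exposing its OpenAI-compatible API; the tiny agents / MCP / FLUX wiring from the demo is not shown, and the GGUF file name in the comment is hypothetical:

    # Sketch: query a locally running llama-server (llama.cpp) through its
    # OpenAI-compatible API, assuming it was started with something like
    #   llama-server -m Qwen3-30B-A3B-Q4_K_M.gguf --port 8080   (hypothetical file name)
    from openai import OpenAI

    # llama-server does not require an API key by default; any placeholder works.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key")

    resp = client.chat.completions.create(
        model="local",  # the server serves whatever model it loaded; the name here is informational
        messages=[{"role": "user", "content": "Write a short prompt for a FLUX image of a mountain lake at dawn."}],
    )
    print(resp.choices[0].message.content)

An agent framework such as tiny agents would point at the same local endpoint and add the MCP servers (e.g. a Hugging Face Space) on top.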
ggml Retweeted
Xuan-Son Nguyen @ngxson · May 12
Real-time webcam demo with @huggingface SmolVLM and @ggml_org llama.cpp server. All running locally on a MacBook M3
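A minimal sketch of the request side of such a demo, assuming a recent llama-server build with multimodal support is serving SmolVLM locally on port 8080; the frame file name and prompt are illustrative, not taken from the actual demo (which captures frames in the browser):

    # Sketch: send one captured frame to a local llama-server running a
    # vision model (e.g. SmolVLM) via the OpenAI-compatible chat endpoint.
    import base64
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key")

    with open("frame.jpg", "rb") as f:  # e.g. one frame grabbed from the webcam
        b64 = base64.b64encode(f.read()).decode()

    resp = client.chat.completions.create(
        model="local",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what the camera sees in one sentence."},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    print(resp.choices[0].message.content)

Looping this over freshly captured frames gives the real-time behavior shown in the demo.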