Adi Simhi
@AdiSimhi
NLProc and machine learning. Ph.D. student @TechnionLive
LLMs often "hallucinate". But not all hallucinations are the same! This paper reveals two distinct types: (1) hallucinations due to a lack of knowledge and (2) hallucinations despite knowing. Check out our new preprint, "Distinguishing Ignorance from Error in LLM Hallucinations"
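(The preprint describes the actual methodology; as a loose illustration only, one way to tell the two types apart is to probe whether the model can produce the correct answer at all, and only call an error a "hallucination despite knowing" when it can. The `generate` callable and thresholds below are hypothetical placeholders, not the paper's setup.)

```python
# Toy sketch (not the paper's actual method): separate "doesn't know" from
# "knows but still errs" by probing whether the model can ever produce the
# gold answer. `generate` is a hypothetical placeholder for an LLM call.
from typing import Callable, List


def classify_hallucination(
    question: str,
    gold_answer: str,
    greedy_answer: str,
    generate: Callable[[str, float], str],  # (prompt, temperature) -> answer
    n_samples: int = 10,
) -> str:
    if gold_answer.lower() in greedy_answer.lower():
        return "correct"  # no hallucination to classify

    # If the model can produce the gold answer in any sampled generation,
    # treat the greedy error as "hallucination despite knowing"; otherwise
    # treat it as a knowledge gap.
    samples: List[str] = [generate(question, 0.7) for _ in range(n_samples)]
    knows = any(gold_answer.lower() in s.lower() for s in samples)
    return "hallucination despite knowing" if knows else "lack of knowledge"
```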

Tried steering with SAEs and found that not all features behave as expected? Check out our new preprint - "SAEs Are Good for Steering - If You Select the Right Features" 🧵
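(For readers unfamiliar with the setup: "steering with an SAE feature" typically means adding a chosen feature's decoder direction to the model's residual stream. The sketch below is a minimal, made-up illustration of that mechanic; the shapes, indices, and scale are placeholders, not the paper's configuration or feature-selection method.)

```python
import torch

# Toy illustration of SAE feature steering: add one feature's decoder
# direction to the residual stream. All shapes/values are placeholders.
d_model, n_features = 768, 16384
W_dec = torch.randn(n_features, d_model)      # SAE decoder weights (placeholder)
hidden = torch.randn(1, 12, d_model)          # activations: (batch, seq, d_model)

feature_idx = 123                             # a chosen feature to steer with
alpha = 4.0                                   # steering strength

# Broadcasting adds the (d_model,) direction to every token position.
steered = hidden + alpha * W_dec[feature_idx]
```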
Check out The Daily ML podcast to hear about our new paper! soundcloud.com/thedailyml/ep4…
Thanks to @LanceEliot for featuring our paper in @Forbes! Curious about the causes of LLM hallucinations? Check out our new preprint, "Distinguishing Ignorance from Error in LLM Hallucinations" arxiv.org/pdf/2410.22071 Link for article: forbes.com/sites/lanceeli…
Thanks, @omarsar0, for featuring our paper! Curious about the causes of LLM hallucinations? Check out our new preprint, "Distinguishing Ignorance from Error in LLM Hallucinations" arxiv.org/pdf/2410.22071
Lots of papers on LLM hallucinations recently.
Here are a few AI papers that caught my attention this week: (Bookmark to read later)
Geometry of Concepts in LLMs: Examines the geometric structure of concept representations in sparse autoencoders (SAEs) at three scales: 1)…