Max David Gupta
@MaxDavidGupta1
Computational-Cognitive-Scientist-In-Training · CS @ Princeton · Math @ Columbia
Jung: "Never do human beings speculate more, or have more opinions, than about things which they do not understand" This rings true for me today - I'm grateful to be a part of institutions that prefer the scientific method to wanton speculation
Love this take on RL in day-to-day life (mimesis is such a silent killer):
Becoming an RL diehard in the past year and thinking about RL for most of my waking hours inadvertently taught me an important lesson about how to live my own life. One of the big concepts in RL is that you always want to be “on-policy”: instead of mimicking other people’s…
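For anyone rusty on the jargon: "on-policy" means the agent learns only from actions its current policy actually takes, rather than from replayed trajectories of some other behavior. A minimal sketch of the distinction, assuming a toy 3-armed bandit with made-up reward means and vanilla REINFORCE:

```python
# On-policy learning in miniature: every update uses an action sampled
# from the CURRENT policy. Arm means, learning rate, and step count are
# all illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.9])   # hypothetical arm rewards
logits = np.zeros(3)                      # policy parameters
lr = 0.1

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for _ in range(2000):
    probs = softmax(logits)
    a = rng.choice(3, p=probs)            # act with the current policy (on-policy)
    r = rng.normal(true_means[a], 0.1)    # observe reward for your own action
    grad = -probs                         # REINFORCE: grad of log pi(a) w.r.t. logits
    grad[a] += 1.0
    logits += lr * r * grad               # update from your own experience

print("learned policy:", softmax(logits).round(3))  # most mass ends up on arm 2
```

The life-advice reading: the gradient only ever flows through choices the agent made itself.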
ICML is everyone's chance to revisit the days we peaked in HS multi-variable calc
I am starting to think sycophancy is going to be a bigger problem than pure hallucination as LLMs improve. Models that won't tell you directly when you are wrong (and instead argue that you're right) are ultimately more dangerous to decision-making than models that are sometimes wrong.
No, your brain does not perform better during or after LLM use. Check out our paper: "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task": brainonllm.com
🤖🧠Paper out in Nature Communications! 🧠🤖 Bayesian models can learn rapidly. Neural networks can handle messy, naturalistic data. How can we combine these strengths? Our answer: Use meta-learning to distill Bayesian priors into a neural network! nature.com/articles/s4146… 1/n
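The recipe as I understand it from the thread: sample tasks from the Bayesian prior, train a network across many such tasks, and the prior gets baked into the weights as an inductive bias. Below is my own toy reconstruction, not the paper's code; the Beta(2, 2) prior over a coin's bias, the tiny MLP, and every hyperparameter are illustrative assumptions:

```python
# Toy "prior distillation" via meta-learning: tasks are coins whose bias
# theta is drawn from an ASSUMED Beta(2, 2) prior; the net sees flip
# counts and learns to predict the next flip across many sampled tasks.
import numpy as np

rng = np.random.default_rng(0)
A, B = 2.0, 2.0          # assumed prior: coin bias theta ~ Beta(2, 2)
N_FLIPS = 5              # per-task dataset size (illustrative)

# tiny MLP: (num heads, num tails) -> P(next flip = heads)
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2))), h

for _ in range(20000):
    theta = rng.beta(A, B)                 # sample a task from the prior
    flips = rng.random(N_FLIPS) < theta    # sample that task's data
    k, n = flips[:-1].sum(), N_FLIPS - 1
    x = np.array([[k, n - k]], dtype=float)
    y = float(flips[-1])                   # target: the held-out flip
    p, h = forward(x)
    d = p - y                              # dLoss/dlogit for sigmoid + BCE
    dh = (d @ W2.T) * (1 - h**2)           # backprop through tanh
    W2 -= lr * h.T @ d; b2 -= lr * d.ravel()
    W1 -= lr * x.T @ dh; b1 -= lr * dh.ravel()

# the net's predictions should roughly track the Bayesian posterior
# predictive (k + A) / (n + A + B) -- the prior has been distilled in
for k in range(5):
    p, _ = forward(np.array([[k, 4 - k]], dtype=float))
    print(f"{k}/4 heads: net={p.item():.3f}  bayes={(k + A) / (4 + A + B):.3f}")
```

The point of the demo: nothing Bayesian happens at test time, yet the network's guesses match the posterior predictive, because the prior now lives in the weights.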
can ideas from hard negative mining in contrastive learning play into generating valid counterfactual reasoning paths? or am I way off base? curious to hear what people think
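To make the question concrete, hard negative mining in a contrastive loss means keeping only the negatives most similar to the anchor, so the model has to discriminate near-misses. A minimal PyTorch sketch of in-batch hard-negative InfoNCE; the mapping to counterfactuals (embed reasoning paths, treat near-miss invalid paths as the hard negatives) is pure speculation on my part, not an established method:

```python
# In-batch InfoNCE with top-k hard negative mining: of the B-1 non-matching
# items, only the k most similar to the anchor contribute to the loss.
import torch
import torch.nn.functional as F

def hard_negative_info_nce(anchors, positives, temperature=0.1, k=5):
    """anchors, positives: (B, D) paired embeddings; assumes k < B."""
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    sim = a @ p.T / temperature                    # (B, B) similarity matrix
    B = sim.size(0)
    pos = sim.diagonal().unsqueeze(1)              # matched-pair scores
    neg = sim.masked_fill(torch.eye(B, dtype=torch.bool), float("-inf"))
    hard_neg, _ = neg.topk(k, dim=1)               # keep only the k hardest negatives
    logits = torch.cat([pos, hard_neg], dim=1)     # positive sits at index 0
    return F.cross_entropy(logits, torch.zeros(B, dtype=torch.long))

loss = hard_negative_info_nce(torch.randn(32, 128), torch.randn(32, 128))
```

The hypothesis in the tweet, restated: if near-miss invalid reasoning paths are the analogue of hard negatives, the same top-k trick might sharpen a model's sense of which counterfactual paths are actually valid.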