Nathaniel Haines
@Nate__Haines
Paid to do p(a | b) = p(b | a) p(a) / p(b)
Data Science @ Ledger Investing
1/N Some New Year's reading to share! In this post, we dive into Cronbach's alpha, Fisher info, KL divergence, and Bayes factors as measures of item informativeness. We then use these metrics to reduce a large 100-item pool down to just 15 items while maximizing information 🤖
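A minimal sketch of the pruning idea, assuming a 2PL IRT setup with made-up item parameters: score each item by its expected Fisher information under a standard-normal prior on the latent trait, then keep the top 15. This is purely illustrative; the post's actual metrics (KL divergence, Bayes factors, etc.) and code go further than this.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2PL item parameters for a 100-item pool:
# a = discrimination, b = difficulty (made up for illustration).
n_items = 100
a = rng.lognormal(mean=0.0, sigma=0.3, size=n_items)
b = rng.normal(loc=0.0, scale=1.0, size=n_items)

# Grid over the latent trait theta, weighted by a standard-normal prior.
theta = np.linspace(-4, 4, 161)
prior = np.exp(-0.5 * theta**2)
prior /= prior.sum()

def item_information(a_j, b_j, theta):
    """Fisher information of a 2PL item at each theta:
    I_j(theta) = a_j^2 * P(theta) * (1 - P(theta))."""
    p = 1.0 / (1.0 + np.exp(-a_j * (theta - b_j)))
    return a_j**2 * p * (1.0 - p)

# Expected information per item, averaged over the prior on theta.
expected_info = np.array(
    [(item_information(a[j], b[j], theta) * prior).sum() for j in range(n_items)]
)

# Keep the 15 items that carry the most expected information.
keep = np.argsort(expected_info)[::-1][:15]
print("selected items:", sorted(keep.tolist()))
```

A greedy top-k rule like this ignores redundancy between similar items, so treat it as a skeleton of the approach rather than the method from the post.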
WOAH! This works super well. The reduced set of 30 items (from a full 224-item set) shows correlations of r >= .88 with the full set across all 11 factors in the model 🤓🤖. Some examples below. I am actually very surprised! Now, to make sure I didn't make any mistakes.. 🤔
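The shape of that sanity check is cheap to sketch: per factor, correlate scores computed from the reduced items against scores from the full item set. Everything below (fake Likert data, item-to-factor map, 30-item subset, simple mean scores) is made up for illustration and will not reproduce the r >= .88 result; the real check would use the model's factor scores.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fake Likert responses: 500 people x 224 items, plus a made-up map from
# item -> factor and a hypothetical 30-item reduced set.
n_people, n_items, n_factors = 500, 224, 11
responses = rng.integers(1, 6, size=(n_people, n_items))
item_factor = rng.integers(0, n_factors, size=n_items)
reduced_items = set(rng.choice(n_items, size=30, replace=False).tolist())

for f in range(n_factors):
    full_cols = np.where(item_factor == f)[0]
    kept_cols = [j for j in full_cols if j in reduced_items]
    if not kept_cols:
        continue  # with a random fake subset, a factor can lose all its items
    full_score = responses[:, full_cols].mean(axis=1)     # full-set score
    reduced_score = responses[:, kept_cols].mean(axis=1)  # reduced-set score
    r = np.corrcoef(full_score, reduced_score)[0, 1]
    print(f"factor {f}: reduced vs. full r = {r:.2f}")
```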
abandon tomato, embrace tomoto
"Abandon probabilistic models, work on the thing that's even harder to work with and scale than probabilistic models."
SIX YEARS after the initial blog post, this paper is finally published.. what a wild ride
1/7 In 2017, Hedge, Powell, and Sumner showed that robust cognitive tasks are unreliable, which calls into question the use of behavioral tasks for studying individual differences. In this blog post, I show that this conclusion is misguided (haines-lab.com/post/thinking-…)
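A toy version of the point, with made-up numbers and the classical disattenuation formula standing in for a full hierarchical model: test-retest correlations computed from noisy per-session point estimates are attenuated toward zero even when the latent correlation is high, so low "observed" reliability can reflect unmodeled trial-level noise rather than an unstable trait.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy setup: each subject has latent scores at test and retest with a true
# correlation of 0.8; each session's point estimate is the latent score plus
# the noise left after averaging over n_trials noisy trials.
n_subj, n_trials = 200, 50
true_r, latent_sd, trial_sd = 0.8, 1.0, 5.0

cov = latent_sd**2 * np.array([[1.0, true_r], [true_r, 1.0]])
latent = rng.multivariate_normal([0.0, 0.0], cov, size=n_subj)  # (n_subj, 2)

noise_sd = trial_sd / np.sqrt(n_trials)
observed = latent + rng.normal(0.0, noise_sd, size=latent.shape)

obs_r = np.corrcoef(observed[:, 0], observed[:, 1])[0, 1]

# Reliability of each session's point estimate, then Spearman's disattenuation.
reliability = latent_sd**2 / (latent_sd**2 + noise_sd**2)
disattenuated_r = obs_r / reliability

print(f"true latent correlation: {true_r:.2f}")
print(f"observed (two-stage) r:  {obs_r:.2f}")   # attenuated by trial noise
print(f"disattenuated estimate:  {disattenuated_r:.2f}")
```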
you all failed
time for a valentine's day pop-quiz! without cheating, what will the output of `results` look like?
not me continuing to ring the "hierarchical modeling is the best" bell, but now in actuarial science as opposed to psych/cog/neuro 🤓
Diving into the actuarial literature for my new job, I stumbled upon "credibility theory", which is essentially just reliability theory from psychometrics developed within a different context. I really love seeing these theoretical links across fields 🤓 en.wikipedia.org/wiki/Credibili…
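A tiny numerical illustration of the link (numbers are mine, purely for illustration): the Bühlmann credibility factor Z = n / (n + v/a), the reliability of an n-observation mean from classical test theory, and the shrinkage weight in a normal-normal hierarchical model are all the same quantity.

```python
import numpy as np

# Illustrative variance components (made up): within-risk "process" variance v,
# between-risk variance of hypothetical means a, and n observations per risk.
v, a, n = 4.0, 1.0, 10

# Buhlmann credibility factor: Z = n / (n + k), with k = v / a.
Z = n / (n + v / a)

# Reliability of the mean of n observations (classical test theory / ICC form):
# rho_n = a / (a + v / n).
rho_n = a / (a + v / n)

# Shrinkage weight on the group mean in a normal-normal hierarchical model:
# w = (n / v) / (n / v + 1 / a).
w = (n / v) / (n / v + 1 / a)

print(Z, rho_n, w)  # all three are the same number, ~0.714
```

Which is also why the hierarchical-modeling tweet above is not a coincidence: the credibility premium Z·x̄ᵢ + (1 − Z)·μ is just the partial-pooling estimate.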
You can’t post a random picture and expect people to understand it.
slow walker take
HOW TO BULLSHIT WITH STATISTICS
This data was processed by blind statisticians. Look at the cloud WITHOUT his line and you will see that it is entirely dominated by noise; it may apply to monstrously large samples but practically to no single individual.
1% AGI confirmed
twitter hype is out of control again. we are not gonna deploy AGI next month, nor have we built it. we have some very cool stuff for you but pls chill and cut your expectations 100x!
Stan user looking for an actuarial job: discourse.mc-stan.org/t/actuaries-th…
as i write my response to reviewers, i am reminded of old twitter, where i could post hot statistics takes and be filled with energy to engage, posting endlessly about the shortcomings of the linear probability model, the analogies between priors and data falsification, etc. ..alas
So, scale development researchers, why is the literature on quantitative item reduction strategies so scarce? One would think that for something so important for social science, I would not have had to roll my own algorithm to accomplish something like this 🤔
In case you missed it yesterday! Come for the alpha-hacking drama, stay for the Bayes 😈