Matthew Barnett
@MatthewJBar
Co-founder of @MechanizeWork. Married to @natalia__coelho
Matthew from @EpochAIResearch on AI culture: So AIs could potentially start contributing to culture in the same way that humans do, by writing and talking and interacting with others. So AIs will increasingly become part of the culture. And so…
We keep getting endless job applications from ML researchers, but we don't want researchers. We want traditional SWE talent. Why does it seem so hard to get good SWEs to apply to our SWE roles?
We are offering a $500k base salary for this role. That's not total compensation: we're paying equity on top of the $500k. If you know any highly experienced software engineers who might be a good fit, please reach out. It's totally fine if they don't have any experience in ML.
Many software engineers want to move into AI but think they need to learn ML first. We are offering an alternative: researcher-level pay but without any need for prior ML experience. We're seeking highly talented engineers with traditional SWE experience. x.com/i/jobs/1919892…
We're hiring software engineers. $500k base. x.com/i/jobs/1919892…
(1) Do you think most highly intelligent, goal-directed, long-term planning AIs developed in the future will likely be conscious? (2) Do you think most technologically advanced alien species elsewhere in the universe are likely conscious?
Footnote 9 operationalized this prediction. OpenAI didn't report OSWorld scores for ChatGPT Agent, but I notice that they reported a tiny improvement on the similar WebArena benchmark (62.9% -> 65.4%), suggesting that ChatGPT Agent falls short of the expectations from AI 2027.
What's missing in the AI safety literature is a cost-benefit framework for evaluating when we should do more AI safety work vs. proceed with AI development. Indeed, one often finds an implicit assumption that we should ~always do more safety work, as if there are no tradeoffs.
Hard to say consciousness is real if consciousness is what distinguishes non-zombies from zombies, who are physically identical. Like saying "there's something in this test tube that no test can ever detect."