The paper said that after an AI tool was implemented at a large materials-science lab, researchers discovered significantly more materials—a result that suggested that, in certain settings, AI could substantially improve worker productivity. That paper, by Aidan Toner-Rodgers, was covered by The Wall Street Journal and other media outlets.
The paper was championed by MIT economists Daron Acemoglu, who won the 2024 economics Nobel, and David Autor.
In a press release, MIT said it “has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper.”
The university said the author of the paper is no longer at MIT.
Or just the average AI: hallucinations galore. If you can’t trust the output it confidently gives you, what’s even the fucking point!?
For LLMs, yes.
But, theoretically, AI should be extremely good at sifting through mountains of data, far faster than any other method we have, and identifying which of it a human should take a closer look at. That's what I presume this paper supposedly demonstrated.
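A minimal sketch of that kind of non-LLM screening, under entirely hypothetical assumptions (made-up candidate pool, made-up features, a generic scikit-learn regressor): a model scores a large pool of candidates and surfaces only a handful for a human to look at.

```python
# Hypothetical screening sketch: rank a large candidate pool with a model
# trained on a small labeled set, then hand only the top hits to a human.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical: 50,000 candidate materials, each described by 16 numeric features.
candidates = rng.normal(size=(50_000, 16))

# Hypothetical: 500 already-characterized materials with a measured property,
# used to train the screening model.
known_features = rng.normal(size=(500, 16))
known_property = known_features[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(known_features, known_property)

# Score every candidate and surface the 20 most promising for human review.
scores = model.predict(candidates)
top_k = np.argsort(scores)[::-1][:20]
print("Candidates worth a closer look:", top_k)
```

The point is only that the "AI" in such a pipeline is a ranking model over structured data, not a chatbot, so hallucination in the LLM sense isn't the failure mode at issue.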
My guess here is that a lazy student decided to take the easy path and fake data to “demonstrate” results unsurprising enough that nobody would want to look closer at the data. But somebody looked anyway, probably because the student was a known slacker, and what surprised them wasn’t the results of the research, just that the student had done the research at all.
Thank you. As useful as LLMs can be under certain circumstances, they are not the only type of AI.