Discovery with a Computer Isn’t Discovery


I've talked about this sort of thing before, but here we go again.

Computational models in chemistry are cool and useful. They predict and explain things in a way that's difficult or impossible to do at the bench. But I take issue with the title of this recent paper in JACS: "Computational *Discovery* of Stable Metal–Organic Frameworks for Methane-to-Methanol Catalysis" (emphasis mine). The authors have done no such thing.

This paper describes a workflow where a database of MOFs is mined for this-and-that feature, and computational methods are used to predict which ones would be good catalysts. Some of them are probably good catalysts, according to their DFT models. Discovery!

Except they didn't actually do anything. There are no turnover numbers or yields or anything like that, because they didn't run any reactions. They didn't find a collaborator to run any reactions, either. They suggest in the manuscript that, well, someone should try these things. But it seems like they consider the matter closed because their models say it should work. The folks at JACS agree, I guess.

That isn't a discovery. It's not even a result. It's a hypothesis. One that needs to be tested before anyone can claim a discovery.

It's especially frustrating to compare it to another paper in the same batch of ASAPs. In that one, the authors also started from crystal structures, hunting for something that would do the reaction they wanted. I'm sure their models told them which candidates looked most promising. But then they went and did the thing: designed a bunch of catalysts, made them, tested them, and optimized them based on the results. And in the end, they isolated 35 mg of product at 93% yield, 99:1 dr, 3.5:96.5 er, and 5000 catalyst turnovers. Now that is some science.

Modeled, predicted results aren't results. They're hypotheses.