Minds

"We need to reward risky, paradigm-shifting research rather than rejecting it for missing SOTA results early."

CuspAI’s Max Welling on physical AI, rejected papers, and his new open-source tool for materials science.

By Stepan Kravchenko
Head of Nebius Science

Max Welling is one of the leading figures at the intersection of AI and materials science. Co-founder and CTO at CuspAI, with advisors including Yann LeCun and Geoffrey Hinton, Welling is going after an extremely hard problem in applied science: using AI to discover materials with precisely targeted properties.

CuspAI’s first major focus was direct carbon capture — developing materials that can pull CO₂ from the air and help offset industrial impact on the climate. It has since expanded to areas including water purification, semiconductors and catalysts. The startup aims to build a range of models, both closed and open source, to address the growing need for new materials.

At ICLR last week, Welling presented kUPS, CuspAI’s first open-source tool and what the company describes as a molecular simulation engine for the AI era. Where traditional simulation packages were built for CPUs and one method at a time, kUPS runs natively on GPUs and handles multiple techniques within a single framework, letting researchers run simulations at quantum accuracy for a fraction of the usual computational cost.

A week earlier, Welling had introduced his technology at AI DNA, a Nebius Academy event held in Amsterdam. After Welling’s keynote, we spoke with him about CuspAI’s work, Europe’s chances in the global race, and what it means to do science in the age of AI.

CuspAI has just launched its first open-source tool, kUPS. How do you see it developing in the coming months?

The tool is now fully functional, and people should be able to download it and play with it. Their feedback is going to be the most important thing.

It’s a tool where you can seamlessly run all the different ensembles that people use for molecular simulations. It runs very efficiently on GPU, and perhaps more importantly, you can seamlessly integrate machine learning force fields so that you can run with the accuracy of quantum DFT (Density Functional Theory, a simulation method used in materials science), but at the speed of classical force fields.
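The trade-off Welling describes here, quantum accuracy at classical-force-field speed, can be illustrated with a deliberately simplified sketch: a cheap model is fitted to a handful of expensive reference calculations and then stands in for the expensive call inside a simulation loop. Nothing below is CuspAI’s actual method; the Morse potential is a stand-in for a real DFT calculation, and the polynomial fit is a stand-in for a machine learning force field.

```python
import numpy as np

# Stand-in for an expensive quantum-accuracy call (the role DFT plays).
# In reality each evaluation can take hours; here it is a Morse potential.
def expensive_energy(r, d_e=1.0, a=1.5, r_e=1.2):
    return d_e * (1.0 - np.exp(-a * (r - r_e))) ** 2

# "Train" a cheap surrogate on a few expensive reference calculations --
# the role a machine learning force field plays.
r_train = np.linspace(0.9, 2.5, 40)
e_train = expensive_energy(r_train)
surrogate = np.polynomial.Polynomial.fit(r_train, e_train, deg=8)

# Use the surrogate inside a "simulation" loop: thousands of cheap calls
# instead of thousands of expensive ones.
r_test = np.linspace(1.0, 2.4, 10_000)
max_err = np.max(np.abs(surrogate(r_test) - expensive_energy(r_test)))
print(f"max surrogate error: {max_err:.1e}")  # small relative to the well depth of 1.0
```

The pay-off is the same shape as in the real setting: the accuracy is set by the expensive reference method, while the per-step cost in the loop is set by the cheap surrogate.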

You say kUPS will give its users the foundation to move faster. What will the users give to kUPS in return?

As a startup, you want to be embedded in the research community. You rely on its scientific advances and open source tools. Moreover, you like to hire the best talent from universities after graduation. Besides giving back to the community, which only seems fair, open sourcing kUPS will foster this relationship and catalyse the research community to further improve the tool. Finally, having a good tool out there, showing the quality of what CuspAI can produce, is also helpful reputationally.

Max Welling’s full keynote at Nebius Academy AI DNA, Amsterdam

Your company is exploring how AI can address climate change. Do you believe that in the long run, AI’s climate benefits can outweigh its own footprint?

It is very hard to predict, to be honest. Of course, there are solutions for providing almost boundless energy, like nuclear fusion. I recently saw a company that uses molten salt and nuclear waste to generate energy in small modular reactors. People will come up with all sorts of new ways, new solar panel technologies, to create a lot of energy. Whether that’s enough to power all our AI needs, I don’t know — but I’m approaching this with a positive mindset. I hope companies like mine can make a real difference.

I also don’t know if the net effect of AI will end up being positive or negative. That is also a policy question. AI cannot solve it by itself. I can decide to work on problems that are good for the world, but policy and regulators will have to regulate uses that are bad for it — and that requires ongoing conversations with companies like mine.

Talking about direct air capture, what’s the hardest part of teaching an AI model to predict these materials? And how closely do predicted properties align with real-world targets?

It’s a question with many layers. There is a certain computation you need to do that is actually very expensive. We developed kUPS to do it more efficiently and faster on GPUs. You could call this hard because it’s expensive, but it’s also accurate enough for us.

What is maybe harder to predict is how a material will operate in the real world. Materials like metal-organic frameworks — some of them are not stable in humid environments. You can test them in dry conditions, and they work fine, but deploy them somewhere humid, and their behavior becomes unpredictable.

Predicting how a material works in a real device is hard. You would have to simulate the entire device or try it out. And then the techno-economics and lifecycle analysis are also very hard to predict. That last step of bringing a material out of the lab into the real world is a very difficult one.

You recently said we need to build energy-efficient AI, like our own brains. But the brain runs on 20 watts. Current models need a bit more. How do you see us closing that gap?

First, there needs to be a need for it. Right now, people are happy to pay the price because they can still scale. At some point it becomes too expensive, and then there’s pressure to make it cheaper.

There are many directions this can go. One is moving away from von Neumann architectures, where much of the energy goes to reading from and writing to DDR, the memory sitting outside the chip, which is by far the most energy-expensive part of a computation. Bringing memory closer to compute is one architectural move. But you can go further.
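The claim that off-chip memory dominates the energy budget can be made concrete with a back-of-envelope comparison. The figures below are rough, widely cited estimates for a ~45 nm process (following Horowitz’s numbers), not measurements of any specific chip; exact values shift with process node, but the two-orders-of-magnitude gap is the point.

```python
# Approximate per-operation energy costs in picojoules (order-of-magnitude
# figures from Horowitz's widely cited ~45 nm estimates; values vary by node).
fp32_mult_pj = 3.7    # one 32-bit floating-point multiply, on-chip
dram_read_pj = 640.0  # fetching one 32-bit word from off-chip DRAM

# A single off-chip memory access costs roughly two orders of magnitude more
# than the arithmetic it feeds -- the motivation for bringing memory on-chip.
ratio = dram_read_pj / fp32_mult_pj
print(f"One DRAM access costs ~{ratio:.0f}x one multiply")  # → ~173x
```

This is why caching, on-chip SRAM and processing-in-memory designs target data movement first: making the multiply cheaper barely moves the total if every operand still crosses the chip boundary.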

You can move to analogue computation and relax the precision requirements of digital computation. Neuromorphic computing is another direction; it has always struggled to compete with digital, but if the pressure gets high enough and innovation follows, new hardware becomes viable.

In 2023 you co-authored “Scientific Discovery in the Age of Artificial Intelligence” in Nature. The field has shifted since. If you could revise the article today, what would you add?

I think at that point the physical AI revolution wasn’t so clear in my mind: the fact that the real world creates so much friction, and that this will hold AI back for a while. In other words, finding ways to connect the physical and the digital worlds. Because, in the end, we want things in the physical world.

Our tools, gadgets, cars, robots… We want those to be powered by AI. I think it’s going to play out harder than we expect. And focusing on that interface is incredibly important. I don’t think that was very clearly articulated in that paper.

Are we entering an era where science is less about understanding the world and more about predicting it?

I think those things are not necessarily different, although I see what you’re saying. It depends on the application.

For certain applications, you really need to understand, because it impacts human lives. You can’t just say “you won’t go on parole because this algorithm predicted you will remain a criminal in the future.” You have to explain why. For other problems, like predicting the stock market, people don’t truly care how it was predicted as long as it’s right.

A nice intermediate is a medical diagnosis. You want to understand why the diagnosis was made, but it’s also very useful to simply know it; it helps you decide what to do next. Every application will have its own balance of human understanding and prediction.

I would also say a machine needs to understand why it predicts. In its own way, it will. Otherwise it cannot predict. It may not be able to explain that to humans. There is a translation gap.

You’ve argued that papers shouldn’t be rejected because researchers have less access to compute. Is this a big problem right now?

I think so. We are too focused on incremental research that produces bold numbers. Many of my students complain that they have great ideas but don’t have the compute to show they’ll eventually outperform what the big labs produce.

Good ideas don’t get a chance to incubate. Papers get rejected from conferences, and students get disillusioned. We need ways to reward risky, paradigm-shifting research rather than rejecting it for not achieving SOTA results at an early stage. Reviewing these days is also often done by agents, and people don’t invest much time in it because there’s no credit in it. That creates a bad feedback loop.

You’ve also argued that Europe must build its own AI research rather than playing catch-up with frontier LLMs from China and the US. What should Europe actually be doing?

The first thing Europe needs to do is remove friction in regulation. I’m not saying AI shouldn’t be regulated, but we make it too hard for companies to scale. Different jurisdictions, different labour laws, different certification regimes across sectors. It’s all different, and this means it’s impossible to scale here.

Once that’s addressed, Europe needs to identify strategic directions where it can create leverage compared to China and the US, a technology stack that is indispensable to them. ASML is a great example: a monopoly on chip-making machines. We need more of those. Clean energy is a clear candidate. Chips and semiconductors more broadly. Biotechnology perhaps.

Then we need to create mini-CERNs, collaborative innovation hubs around these strategic topics. Europe is actually very good at building environments like that, and maybe Nebius can put some compute there, so that we can innovate, spin out startups and help the industry create IP.

How do we make sure that educational standards don’t degrade as AI takes over, and that we don’t get this new generation of researchers who are actually not as good as the old guys?

It’s a really difficult question, and it has happened throughout history with every new technology. We lose certain skills because technology takes over. But this time it’s cognitive skills, which could be a very different class.

We have not been able to protect ourselves in the past from losing these skills. And probably we won’t be able to do this in the future unless we make these skills highly valuable.

It may just happen that if people stop coding and leave it all to agents, we’ll eventually produce so much bad code that it collapses in on itself, and at that point, those skills become valuable again. Forcing people to develop these skills is going to be very difficult.

My hope is that people realise early enough that deep expertise still matters, especially for paradigm-shifting work. I don’t think AI can fulfil that creative role just yet.

AI has led to what some call vibe coding. Do you see a similar shift toward vibe science, where people without formal training are meaningfully contributing to science?

Vibe science, I haven’t thought about that, but I definitely vibe science myself. It is a great way to learn about a new topic. You ask questions, keep asking, ask for the details and the equations. It can be very deep, not superficial at all. But you do need the math background.

With that in place, these tools let you expand your scientific horizon into very different areas. But without knowing the math, without knowing the scientific method, it’s very hard to use the tools. So get a solid foundation first, and then start using the tools.
