I’ve gone down a rabbit hole here.
I’ve been looking at LK-99, the potential room-temperature superconductor, lately. Then I came across an AI chatbot and decided to test it: I asked it to propose a room-temperature superconductor, and it suggested (NdBaCaCuO)_7(SrCuO_2)_2 along with a means of production, which got me thinking. It’s just a system for spotting patterns and answering the question. I’m not saying it has made anything new, but it seems to me that eventually a chat AI would be able to suggest a new material fairly easily.
Has AI actually discovered or invented anything outside of its own computer industry, and how close are we to it doing stuff humans haven’t done before?
It’s important to be clear about what kind of system you’re actually using when you say “AI”.
If you’re talking about something like ChatGPT, you’re using an LLM, or “Large Language Model”. Its goal is to produce something that reasonably looks like a human wrote it. It has reviewed a ridiculous amount of human text, and has a metric assload of weights encoding the relationships between those words.
If the LLM sees your question and associates a particular compound with superconductors, it’s because it’s seen these things related in other writings (directly or indirectly).
It’s important not to ascribe more intent to what you’re seeing than actually exists. It can’t understand what a superconductor is or how materials achieve that state; it’s just really good at relaying related words in a convincing manner.
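To make that concrete, here’s a deliberately dumb toy version of the idea: a word guesser built from nothing but counts of which word followed which. This is purely illustrative and not how a real LLM works internally (a real model uses billions of learned weights over fragments of words rather than a counting table), but the underlying question it answers is the same: given the context, what’s a plausible next word?

```python
from collections import Counter, defaultdict

# Toy "training" text. A real model sees billions of words, not three sentences.
training_text = (
    "a superconductor carries current with zero resistance "
    "a superconductor carries current without loss "
    "a superconductor expels magnetic fields"
)

# Count which word tends to follow which.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def guess_next(word):
    """Return the follower seen most often in training, or a fallback."""
    if word not in follows:
        return "<no idea>"
    return follows[word].most_common(1)[0][0]

print(guess_next("superconductor"))  # -> "carries" (seen twice vs. "expels" once)
print(guess_next("unicorn"))         # -> "<no idea>" (never seen in training)
```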
That’s not to say it isn’t cool or useful, or that ML (machine learning) can’t be used to help find answers to these kinds of questions.
Exactly. It’s just text prediction software that is really good at making itself sound plausible. It could tell you something completely false and have no idea it’s stating a lie. There’s no intelligence here. It’s a very precise word guesser. Which is great for specific settings. But there’s a huge amount of hype associated with this tool and it’s very much by design (by tech companies).
I’m not convinced of this. LLMs haven’t been just spitting out prior art, despite what some people seem to suggest. It’s not just auto-complete; that’s just a useful analogy.
For instance, I’m fascinated by the study that got GPT-4 to draw a unicorn using LaTeX. It wasn’t great, but it was recognizable to us as a unicorn. And apparently that’s gotten better with later iterations. GPT (presumably) has no idea what a unicorn looks like, except through text descriptions. Who knows how it goes from written descriptions of a mythical being to a 2D drawing in a markup language without being trained on images, imagery, or any concept of what things look like.
But this is also true. I’m trying hard not to anthropomorphize the LLM, but it sure seems like there’s some emergent effect that kind of looks like intelligence to a layman like me.
To be clear, I’m not trying to make the argument that it can only produce exactly what it’s seen; I recognize that this argument is frankly overstated in the media. (The interviews with Adam Conover are great examples; he’s not wrong per se, but he does oversimplify things to the point that I think a lot of people misunderstand what’s being discussed.)
The ability to recombine what it’s seen in different ways as an emergent property is interesting and provocative, but isn’t really what OP is asking about.
A better example of how LLMs can be useful in research like what OP described would be asking one to coalesce information from multiple existing studies about which properties correlate with superconductivity, in order to help accelerate research in collaboration with actual materials scientists. This is all research that could be done without LLMs, or even without ML, but having a general way to parse and filter these kinds of documents is still incredibly powerful, and will act as a sort of force multiplier for these researchers going forward.
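As a rough, purely hypothetical sketch of what that workflow could look like (this isn’t an existing tool, and `query_llm` is a placeholder for whichever chat API or local model you’d actually wire in): point the model at a stack of abstracts, ask it to pull the same few fields out of each one, and sanity-check the output yourself.

```python
import json

def query_llm(prompt):
    """Placeholder: swap in a real API call (hosted chat model, local model, etc.)."""
    raise NotImplementedError("wire this up to your LLM of choice")

# Hypothetical extraction prompt; the field names are illustrative, not standard.
EXTRACTION_PROMPT = """\
From the abstract below, return JSON with keys:
  "material"     - chemical formula or name, if stated
  "tc_kelvin"    - reported critical temperature in K, or null
  "pressure_gpa" - applied pressure in GPa, or null

Abstract:
{abstract}
"""

def extract_properties(abstracts):
    """Run the extraction prompt over each abstract and collect the parsed results."""
    results = []
    for text in abstracts:
        raw = query_llm(EXTRACTION_PROMPT.format(abstract=text))
        try:
            results.append(json.loads(raw))
        except json.JSONDecodeError:
            # Model output isn't guaranteed to be valid JSON; skip (or retry) here.
            continue
    return results
```

The point isn’t that the model does the science; it’s that a researcher gets a quick, structured first pass over a pile of literature that they then verify.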
My favorite example of the limitations of LLMs is to ask one to coin a new word, then Google that word. It is physically unable to produce a combination of letters that it doesn’t have indexed, and it doesn’t have an index for words it hasn’t seen. It might be able to create a new meaning for a word that it’s seen, but that isn’t necessarily the same thing.