I’ve gone down a rabbit hole here.

I’ve been looking at LK-99, the potential room-temperature superconductor, lately. Then I came across an AI chat and decided to test it: I asked it to propose a room-temperature superconductor, and it suggested (NdBaCaCuO)_7(SrCuO_2)_2 along with a means of production, which got me thinking. It’s just a system for spotting patterns and answering questions. I’m not saying it has made anything new, but it seems to me that eventually a chat AI would be able to suggest a new material fairly easily.

Has AI actually discovered or invented anything outside of its own computer industry, and how close are we to it doing things humans haven’t done before?

  • nandeEbisu@lemmy.world · 1 year ago

    I assume you are referring to transformers, which appeared in the literature around 2017. Attention on its own is significantly older, but it wasn’t really used in anything resembling a large language model until the early-to-mid 2010s.

    While attention is fairly simple, a trait that helps it parallelize and scale well, a lot of recent research has gone into how the text is presented to the model and how large the models are. There is also a lot of sophistication around instruction tuning and alignment, which is how you get from simple text continuation to something that can answer questions. I don’t think you could build something like ChatGPT from the 2017 “Attention Is All You Need” paper alone.
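
    For anyone curious what “attention is fairly simple” means concretely, here is a minimal sketch of scaled dot-product attention (assuming PyTorch; single head, no masking, so it’s a toy version of what the paper describes):

    ```python
    import torch
    import torch.nn.functional as F

    def scaled_dot_product_attention(q, k, v):
        # q, k, v: (batch, seq_len, d_k)
        d_k = q.size(-1)
        # One matrix multiply scores every query against every key at once,
        # which is why the whole sequence can be processed in parallel.
        scores = q @ k.transpose(-2, -1) / d_k ** 0.5
        weights = F.softmax(scores, dim=-1)
        return weights @ v

    # Toy example: one sequence of 4 tokens with 8-dimensional vectors.
    q = torch.randn(1, 4, 8)
    k = torch.randn(1, 4, 8)
    v = torch.randn(1, 4, 8)
    print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([1, 4, 8])
    ```

    Everything past that single operation (multi-head projections, positional information, feed-forward layers, the training recipe) is where the real engineering lives.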

    I suspect that publicly released models lag whatever Google or OpenAI has figured out by six months to a year, especially now that there is a lot of shareholder pressure to release LLM-based products. Advances developed in the open-source community, like applying LoRA and quantization in various contexts, have a significantly shorter gap between development and release.
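
    For context on LoRA: the idea is to freeze the original weights and train only a small low-rank correction on top of them. A minimal sketch, assuming PyTorch and made-up layer sizes (real tooling such as Hugging Face’s peft library handles this properly):

    ```python
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen base linear layer plus a trainable low-rank update: W*x + (alpha/r) * B(A(x))."""
        def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():
                p.requires_grad = False  # only the adapter matrices are trained
            # A starts small and random, B starts at zero, so training begins
            # from the unmodified base layer.
            self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
            self.B = nn.Parameter(torch.zeros(base.out_features, r))
            self.scale = alpha / r

        def forward(self, x):
            # Base output plus the low-rank correction.
            return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

    # Wrap an existing projection layer (hypothetical 512-dimensional example).
    layer = LoRALinear(nn.Linear(512, 512))
    out = layer(torch.randn(2, 10, 512))
    print(out.shape)  # torch.Size([2, 10, 512])
    ```

    Because only the two small matrices are trained, fine-tuning fits on consumer hardware, which is a big part of why the open-source community iterates on these techniques so quickly.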