- cross-posted to:
- [email protected]
The comments come amid increased attention on a global AI race between the U.S. and China.
> the race toward AGI
lol
Right?
If I was OpenAI, this is exactly the kind of thing I’d want written about me, especially the day after the DeepSeek thing….just saying.
It’s probably part of the standard severance package. Hand in your laptop, sign an NDA, take your COBRA paperwork, and fill out the AGI terror press release.
Are we not heading towards AGI then?
In the same way that if you start digging a hole in northwestern Spain you are heading towards New Zealand.
That doesn’t sound right at all; comparing AGI to digging a hole from Spain to New Zealand is hyperbolic. It sounds more like “electricity will never cover the whole world; maybe one day it’ll have an impact, but powering cars and homes? No way.”

AGI and ASI are almost our only way to communism; with DeepSeek and other open-source models, capitalists won’t be able to keep up, especially if AGI becomes available to the average person. In a few years, hell, in just one year alone, LLMs have made such substantial progress that we can only assume it will continue.

Acting as though AGI is like fusion generators is naive; unlike containing the sun, AGI is far more plausibly within reach. There’s no stopping it at this point. My professor told me that his university has stopped trying to catch AI use because it’s impossible to do so now, unless you’re a child who just copies everything and makes it obvious. It’s time to stop assuming AGI will never come, because it will, and it is coming.
The difference here is that you’re never going to reach New Zealand that way but incremental improvements in AI will eventually get you to AGI*
*Unless intelligence is substrate-dependent and cannot be replicated in silico, or we destroy ourselves before we get there
It’s very easy, with an incremental-improvement tactic, to get stuck in a local maximum. You’ve then hit a dead end: every available option leads to a degradation and thus isn’t viable. It isn’t a sure thing that incremental improvements lead to the desired outcome.
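A toy hill-climbing sketch in Python of what that looks like (my own made-up objective function, nothing from any real system): greedy steps stall on the first bump they find, even when a taller one exists.

```python
import math

def objective(x: float) -> float:
    # Made-up landscape: a small bump at x = 0 and a taller one at x = 4.
    return math.exp(-x ** 2) + 2 * math.exp(-(x - 4) ** 2)

def hill_climb(x: float, step: float = 0.1, iters: int = 1000) -> float:
    for _ in range(iters):
        best = max((x - step, x, x + step), key=objective)
        if best == x:  # no neighbour is an improvement: we're stuck
            break
        x = best
    return x

print(hill_climb(0.5))  # stalls near 0.0, the *local* maximum
print(hill_climb(3.0))  # reaches ~4.0, the global maximum
```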
I simply cannot imagine a situation where we reach a local maximum and get stuck in it for the rest of human history. There’s always someone else trying a new approach. We will not stop trying to improve our technology. Even just simply knowing what doesn’t work is a step in the right direction.
We already know that General Intelligence is possible. The question that remains is whether it can be replicated artificially.
I can imagine it really easily for the foreseeable future, all that would need to happen is for the big corporations and well funded researchers to stick to optimizing LLMs and for that to be a dead end.
Yeah, that’s not the rest of human history (unless the rest of it isn’t very long), but it’s enough to make concerns about AGI someone else’s problem.
(Edit, clarified)
Like I said; I’ve made no claims about the timeline. All I’ve said is that incremental improvements will lead to us getting there eventually.
By saying this aren’t you assuming that human civilization will last long enough to get there?
Look at the timeline of other species on this planet. Vast numbers of them are long extinct. They never evolved intelligence to our level. Only we did. Yet we know our intelligence is quite limited.
What took biology billions of years we’re attempting to do in a few generations (the project for AI began in the 1950s). Meanwhile the amount of non-renewable energy resources we’re consuming has hit exponential takeoff. Our political systems are straining and stretching to the breaking point.
And of course progress towards AI has not been steady with the project. There was an initial burst of success in the ‘50s followed by a long AI winter when researchers got stuck in a local maximum. It’s not at all clear to me that we haven’t entered a new local maximum with LLMs.
Do we even have a few more generations left to work on this?
I’m talking about AI development broadly, not just LLMs.
I also listed human extinction as one of the two possible scenarios in which we never reach AGI, the other being that there’s something unique about biological brains that cannot be replicated artificially.
That assumes that whatever we have now is a precursor to AGI. There’s no evidence of that.
What do you mean there’s no evidence? This seems like a difference in personal definitions of AGI, where you can move the goalposts as much as you want: “it’s not really AGI until it can ___”; “OK, just because it can do that doesn’t mean it’s AGI; AGI needs to be able to do _____”.
No, it doesn’t assume that at all. This statement would’ve been true even before electricity was invented and AI was just an idea.
AI in general yes. LLMs in particular, I very much doubt it.
Yeah not with LLMs though.
You can’t know that.
It is a common misconception that incremental improvements must eventually reach the goal; it is perfectly possible for progress to be asymptotic, so that we never reach AGI even with constant “advancements”.
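A toy illustration (numbers of my own, not a claim about real AI progress): if every “advancement” delivers half the gain of the previous one, capability improves at every single step yet converges to a ceiling it never crosses.

```python
capability, gain, goal = 0.0, 1.0, 3.0
for _ in range(50):
    capability += gain  # every step is a genuine improvement...
    gain /= 2           # ...but each gain is half the previous one
print(capability >= goal, capability)  # False, ~2.0: an asymptote below the goal
```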
Incremental improvements by definition mean that you’re moving towards something. It might take a long time but my comment made no claims about the timescale. There’s only two plausible scenarios that I can think of in which we don’t reach AGI and they’re mentioned in my comment.
That relies on the increments staying the same size. It’s much easier to accelerate from 0 to 60 mph than it is from 670,616,620 mph to c.
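To put rough numbers on the analogy, here’s a sketch using the standard relativistic kinetic-energy formula (the 1 kg mass is a toy choice of mine): the energy needed per extra bit of speed blows up as you approach c.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def kinetic_energy(v: float, mass_kg: float = 1.0) -> float:
    # Relativistic kinetic energy: (gamma - 1) * m * c^2
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

print(kinetic_energy(26.8))        # 0 -> 60 mph (~26.8 m/s): roughly 360 J
print(kinetic_energy(0.9999 * C))  # ~6e18 J, and it diverges as v -> c
```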
Would we know it if we saw it? Draw two eye spots on a wooden spoon and people will anthropomorphise it. I suspect we’ll have dozens of false starts and breathless announcements of AGI, but we may never get there.
More interestingly, would we want it if we got it? How long will its creators rally to its side if we throw yottabytes of data at our civilization-scale problems and the machine comes back with “build trains and eat the rich instead of cows?”
> Would we know it if we saw it?
That seems beside the point when the question is whether we’re getting closer to it or not.
deleted by creator
But objectively measured, no? Is there no progress happening at all, or are we moving backwards? Because it’s one of those two, or else we’re moving towards it.
The delusions of grandeur required to think your glorified auto complete is going to turn into a robot god is unreal. Just wish they’d quit boiling the planet.
Oh man 100% this.
A little while ago there was a thread about what people are actually using LLMs for. The best answer was that it can be used to soften language in emails. FFS.
alternatively, the delusions of grandeur required to think your opinion is more reliable than that of many of the leaders in the field
they’re not saying that LLM will be that thing; they’re saying that in the next 30 years, we could have a different kind of model - we already have the mixture of experts models, that that mirrors a lot of how our own brain processes information
once we get a model that is reliably able to improve itself (and that’s, again, not so different from adversarial training which we already do, and MLP to create and “join” the experts together) then things could take off very quickly
nobody is saying that LLMs will become AGI, but they’re saying that the core building blocks are theoretically there already, and it may only take a couple of break-throughs in how things are wired for a really fast explosion
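for the curious, here’s a stripped-down sketch of the mixture-of-experts idea (a toy of my own in plain Python, with random weights standing in for trained networks - not any lab’s actual architecture): a gate scores the experts for each input and routes it to the best one

```python
import math
import random

random.seed(0)
NUM_EXPERTS, DIM = 4, 8

def rand_matrix(rows: int, cols: int) -> list[list[float]]:
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(m: list[list[float]], x: list[float]) -> list[float]:
    return [sum(w * xi for w, xi in zip(row, x)) for row in m]

# Toy stand-ins: each "expert" is a random linear map; the gate is a
# random scoring matrix. A real MoE layer learns all of these weights.
experts = [rand_matrix(DIM, DIM) for _ in range(NUM_EXPERTS)]
gate = rand_matrix(NUM_EXPERTS, DIM)

def moe_forward(x: list[float]) -> tuple[list[float], int]:
    logits = matvec(gate, x)
    mx = max(logits)
    exps = [math.exp(l - mx) for l in logits]
    weights = [e / sum(exps) for e in exps]                  # softmax over experts
    top = max(range(NUM_EXPERTS), key=lambda i: weights[i])  # top-1 routing
    out = matvec(experts[top], x)
    return [weights[top] * o for o in out], top

x = [random.gauss(0, 1) for _ in range(DIM)]
_, chosen = moe_forward(x)
print(f"input routed to expert {chosen}")
```

the point is just that routing inputs to specialised sub-networks is conceptually simple; getting it to train reliably at scale is the hard part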
I know the infinite conversation has only gotten better and better! 🤪
I also wish they’d put up or shut up, goddamn lol. Hopefully DeepSeek has lit some fires under some asses 🍑🔥
I love how Jon Stewart put it: AI is losing its job to AI.
I was super disappointed with his take this week in general though (which I see is reflected in the YouTube comments).
good decision, imo. sometimes i also get annoyed if the one i’m watching becomes too much of a sweaty completionist.
I happily welcome our merciful and benevolent machine overlords.
I’d take Skynet over what’s currently going on.
I think about this. Boy, too bad we don’t have a general AI to run things, given what we’ve gotten. Or maybe a nice interstellar race that got past the Great Filter could upload us and leave the planet to recover.