

Look at it, Doodle AG Scheduling Software Pro Annual Subscription per User, and weep
Ye, so essentially a wireless Avada Kedavra, cool cool cool, completely chill and sane thing to believe
So at 195 IQ you suddenly get someone who just sits in their room for a decade and then speaks gibberish into a YouTube livestream and everyone dies, or whatever.
I can’t even decipher what this is about. Like, if you’re 195 IQ you can invent Avada Kedavra in a decade?
Artificial wombs may remove this bottleneck.
Okay, but this is an amazing out-of-context sentence. I will crowdfund a $1000 award for anyone who can put that sentence into a paper and get it published in Nature without anyone noticing.
I don’t think Harry was much of a genius, unless you mean Harriezer from MoR in which case lol, lmao
Working in the field of genetics is a bizarre experience
How the fuck would you know that, mate? You don’t even have a degree in your field, which, let me remind you, is (allegedly) computer science. Has Yud ever been near an actual genetics professor?
I feel coding people like they’re software
Jesus christ can you imagine segfaulting someone’s kidney
It’s reacting to the presentation, not you specifically. I think many of the other comments hit on how he goes waaay too far in his criticism, but I wouldn’t have written what I wrote if it wasn’t a wider sentiment I encountered a few times already.
The attitude to theoretical computer science re quantum is really weird. Some people act as if “I can’t run it now, therefore it’s garbage”, which is just such a nonsense approach to any kind of theoretical work.
Turing wrote his seminal paper in 1936, over 10 years before we invented transistors. Most of CS theory was developed way before computers proliferated. A lot of research into ML was done way before we had enough data and computational power to actually run, e.g., neural networks.
Theoretical CS doesn’t need to be recent, it doesn’t need to run, and it isn’t shackled to the current engineering state of the art, and all of that is good and by design. Let the theoreticians write their fucking theorems. No one writing a theoretical paper makes any kind of promise that the described algorithm will EVER be run on anything. Quantum complexity theory, for example, was developed in the nineties; there was NO quantum computer then, and no one was even envisioning a quantum computation happening in physical reality. Shor’s algorithm was devised in 1994, long before any hardware existed that could run it on anything bigger than toy numbers.
I find the line of argumentation “this is worthless because we don’t know whether a quantum computer is even feasible to engineer” just as nonsensical.
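For anyone who hasn’t seen it: the whole of Shor’s is one classical reduction (factoring via order finding) wrapped around a single quantum subroutine. Here’s a toy Python sketch, with the quantum part faked by a brute-force loop and every name made up, purely to show the shape of the thing, not anything you’d call an implementation:

```python
# Toy sketch of Shor's classical reduction: factor n via the order of a random a.
# On a quantum computer, find_order is the period-finding subroutine; here it's
# brute force, which defeats the whole point but shows the structure.
import math
import random

def find_order(a, n):
    """Smallest r > 0 with a**r == 1 (mod n), assuming gcd(a, n) == 1."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, tries=50):
    """Return a nontrivial factor of an odd composite n, or None if unlucky."""
    for _ in range(tries):
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                 # lucky guess already shares a factor
        r = find_order(a, n)
        if r % 2 == 1:
            continue                 # need an even order
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue                 # trivial square root, try another a
        return math.gcd(y - 1, n)
    return None

print(shor_factor(15))   # 3 or 5
print(shor_factor(21))   # 3 or 7
```

The only “quantum” content is hiding inside find_order; everything around it is number theory that could have been written decades earlier, which is kind of the point.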
The reason is that any government mandated ID is clearly the Mark of the Beast and will be used to bring upon a thousand years of darkness.
You think that’s fringe nonsense and you’d be right on the nonsense part, but that’s literally what Ronny Reagan said while he was president
1970s probably?
Self-report studies are, in fact, studies.
In the case of the revolutionary LLM technology, we get quality in = garbage out, too!
Were you invited to the lavish opening party with the flamingos and the dancers at the huge mansion with two pools and all that?
If yes then you’re definitely the mark.
The real test is whether you’re included in the 5-person Signal group where they coordinate the date and time to dump.
I’m sure a bunch of people buy in knowing it’ll collapse, but think they’re so smart and savvy they’ll sell just in time to get rich.
a thermodynamics startup
what
Like what do they do, find ways to increase entropy faster? Or are they bootstrapping thermodynamics from first principles to disrupt the field of physics with blockchain-powered quantum synergy
Correct answers are correct answers. The only thing LLMs typically are bad at, are things that are seldom discussed or have some ambiguity behind them.
Lol what, how many questions you ask in your life are entirely unambiguous and devoid of nuance? That sounds like a you issue.
but I still think it’s a little suspect on the grounds that we have no idea how many times they had to restart training due to the model borking, or what the other experiments and hidden costs were
Oh ye, I totally agree on this one. This entire genAI enterprise insults me on a fundamental level as a CS researcher: there’s zero transparency or reproducibility, no one reviews these claims, and it’s a complete shitshow, from terrible, terrible benchmarks through shoddy methodology up to untestable and bonkers claims.
I have zero good faith in the press, though; they’re experts at painting any and all tech claims in the best light possible like their lives fucking depend on it. We wouldn’t be where we are right now if anyone at any “reputable” newspaper like the WSJ had asked Sam Altman one (1) question like 3 years ago.
Okay, I mean, I hate to somehow come to the defense of a slop company? But the WSJ saying nonsense is really not DeepSeek’s fault; even that particular quote clearly says “DeepSeek said training one” cost $5.6M, and that’s just a true statement. No one in their right mind includes the capital expenditure in that, the same way that when you say “it took us 100h to train a model” you don’t include building the data center in those 100h.
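And the arithmetic behind that headline number is literally just GPU-hours times an assumed rental rate, something like this (the figures are the ones DeepSeek themselves reported for the final run, so treat them as illustrative, not audited):

```python
# Back-of-the-envelope for the "$5.6M training cost" headline: GPU-hours times an
# assumed rental rate. Capex, failed runs and side experiments are, by construction,
# not in this number. Figures are DeepSeek's own reported ones, not independently verified.
gpu_hours = 2_788_000        # reported H800 GPU-hours for the final training run
usd_per_gpu_hour = 2.0       # assumed rental price per H800 GPU-hour

cost = gpu_hours * usd_per_gpu_hour
print(f"${cost / 1e6:.1f}M")  # ~$5.6M
```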
Setting aside whether they actually lied or not, it’s still immensely funny to me that they could’ve just told a blatant lie nobody fact-checked and it shook the market to the fucking core, wiping out like billions in valuation. Very real market based on very real fundamentals, run by very serious adults.
It’s infinite monkeys but every time they output coherent English you give them bananas to incentivise them towards that
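Roughly this, as a toy: a caricature of reward-guided sampling with every name and number made up for the joke, not anything resembling actual RLHF:

```python
# The monkeys-and-bananas economy: type at random, earn bananas for recognised
# English words, and let the bananas nudge the typing distribution.
import random
import string

ALPHABET = string.ascii_lowercase + " "
weights = {c: 1.0 for c in ALPHABET}                        # the monkey's typing habits
ENGLISH = {"a", "i", "an", "at", "it", "on", "the", "cat"}  # generous notion of "coherent"

def monkey_types(n=20):
    return "".join(random.choices(list(weights), weights=list(weights.values()), k=n))

def bananas(text):
    # one banana per recognised word in the output
    return sum(word in ENGLISH for word in text.split())

for _ in range(20_000):
    text = monkey_types()
    reward = bananas(text)
    for c in set(text):
        weights[c] += reward                                # bananas make those keys more likely

# which keys the banana economy ended up favouring
print(sorted(weights, key=weights.get, reverse=True)[:8])
```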