Same. I’m not being critical of lab-grown meat. I think it’s a great idea.
But the pattern of things he’s got an opinion on suggests a familiarity with rationalist/EA/accelerationist/TPOT ideas.
Do you have a link? I’m interested. (Also, I see you posted something similar a couple hours before I did. Sorry I missed that!)
So it turns out the healthcare assassin has some… boutique… views. (Yeah, I know, shocker.) Things he seems to be into:
How soon until someone finds his LessWrong profile?
As anyone who’s been paying attention already knows, LLMs are merely mimics that provide the “illusion of understanding”.
I’m noticing that people who criticize him on that subreddit are being downvoted, while he’s being upvoted.
I wouldn’t be surprised if, as part of his prodigious self-promotion of this overlong and tendentious screed, he’s steered some of his more sympathetic followers to some of these forums.
Actually it’s the Wikipedia subreddit thread I meant to refer to.
Trace seems a bit… emotional. You ok, Trace?
But will my insurance cover a visit to Dr. Spicy Autocomplete?
So now Steve Sailer has shown up in this essay’s comments, complaining about how Wikipedia has been unfairly stifling scientific racism.
Birds of a feather and all that, I guess.
what is the entire point of singling out Gerard for this?
He’s playing to his audience, which includes a substantial number of people with lifetime subscriptions to the Unz Review, Taki’s crapazine and Mankind Quarterly.
why it has to be quite that long
Welcome to the rationalist-sphere.
Scott Alexander, by far the most popular rationalist writer besides perhaps Yudkowsky himself, had written the most comprehensive rebuttal of neoreactionary claims on the internet.
Hey Trace, since you’re undoubtedly reading this thread, I’d like to make a plea. I know Scott Alexander Siskind is one of your personal heroes, but maybe you should consider digging up some dirt in his direction too. You might learn a thing or two.
Stephen Jay Gould’s The Mismeasure of Man is always a good place to start.
This is good:
Take the sequence {1,2,3,4,x}. What should x be? Only someone who is clueless about induction would answer 5 as if it were the only answer (see Goodman’s problem in a philosophy textbook or ask your closest Fat Tony) [Note: We can also apply here Wittgenstein’s rule-following problem, which states that any of an infinite number of functions is compatible with any finite sequence. Source: Paul Bogossian]. Not only clueless, but obedient enough to want to think in a certain way.
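The quoted point is easy to make concrete: Lagrange interpolation produces a perfectly lawful polynomial rule that continues 1, 2, 3, 4 with any value you like. A minimal Python sketch (function names are mine, not from the quote):

```python
from fractions import Fraction

def lagrange(points):
    """Return the unique polynomial (as a callable) through the given (x, y) points."""
    def p(x):
        total = Fraction(0)
        for i, (xi, yi) in enumerate(points):
            # Basis term: yi * prod over j != i of (x - xj) / (xi - xj)
            term = Fraction(yi)
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= Fraction(x - xj, xi - xj)
            total += term
        return total
    return p

# A polynomial rule that continues 1, 2, 3, 4 with 42 instead of 5:
p = lagrange([(1, 1), (2, 2), (3, 3), (4, 4), (5, 42)])
print([int(p(n)) for n in range(1, 6)])  # [1, 2, 3, 4, 42]
```

Any finite prefix is compatible with infinitely many such rules, which is exactly Wittgenstein's rule-following point as invoked in the quote.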
Also this:
If, as psychologists show, MDs and academics tend to have a higher “IQ” that is slightly informative (higher, but on a noisy average), it is largely because to get into schools you need to score on a test similar to “IQ”. The mere presence of such a filter increases the visible mean and lower the visible variance. Probability and statistics confuse fools.
And:
If someone came up w/a numerical “Well Being Quotient” WBQ or “Sleep Quotient”, SQ, trying to mimic temperature or a physical quantity, you’d find it absurd. But put enough academics w/physics envy and race hatred on it and it will become an official measure.
“TempleOS on the blockchain”
Ok that’s some quality sneer. A bit obscure and esoteric, but otherwise perfect for those who know anything about TempleOS.
Yeah, Behe’s one of the leading lights (dimmest bulbs?) of the so-called “Intelligent Design” movement: a biochemist who knows just enough molecular biology to construct strawman arguments about evolution. Siskind being impressed by him tells me everything I need to know about Siskind’s susceptibility to truly stupid ideas.
Let them fight. https://openai.com/index/elon-musk-wanted-an-openai-for-profit/