• Concerns rise as Neuralink fails to provide evidence of brain implant success, raising safety and transparency questions.
• Controversy surrounds Neuralink’s lack of data on surgical capabilities and alarming treatment of monkeys with brain implants.
• While Neuralink touts achievements, experts question true innovation and highlight developments in other brain implant projects.
I really wonder about the doctors associated with this. How are they squaring things with their Hippocratic Oath? This just seems really close to the ethical line, maybe over it. Nothing about how Musk is treating this surprises me. But is everyone working on this also an unethical twat? Kind of scary to think that might be true.
In 1973 the U.S. Supreme Court rejected the Hippocratic Oath, saying it didn't cover the latest developments in medical practice.
I’m just… gonna go scream into a pillow in the corner now.
The Declaration of Helsinki (https://en.m.wikipedia.org/wiki/Declaration_of_Helsinki) is the reference for health sciences these days.
This appears to be more geared towards experimentation. Super interesting and more relevant to the article for sure though!
There’s nothing here that would violate it anyway. These people are literally working on tech to help quadriplegics. Even this article is mostly just “I wish they were more open about their research”, which is true of basically every research hospital in the world.
I mean… That's the claim, but there's no real explanation of how their implant could help quadriplegics any more than the brain-computer interfaces we've had for 10+ years.
Brain-computer interfaces have been around for years; the only novel idea is making one into a permanent implant. That being said, novel doesn't necessarily mean good.
Are other forms of BCI not permanent? I was kinda under the impression that they were, and that the only upside of Neuralink was the form factor, and maybe trying to bring down costs by automating it, or whatever the idea was. But if the others aren't permanent, that would kind of make more sense. Though I kinda think it being temporary would be an upside, for the most part, since that would prevent scar tissue buildup on the brain and other potential problems like that.
No, typically they’re just sensors on a cranial harness.
Yes, there's no real advantage to making it permanent other than convenience. However, this convenience is imo massively outweighed by the very real possibility of meningitis. It's crazy that they got approval to breach the blood-brain barrier for an implant. Other implants do this, but that risk is being weighed against things like potentially deadly seizures, not mild convenience.
Do you mean EEG stuff, or are you referring to, like, intracranial implants, which I don't know shit about?
Do you mean counteracting potentially deadly seizures, or causing them? Also, there are probably too many other problems to list about the technology generally, but since you seem like you know what you're talking about, could you give me, like, a general overview of BCI, or your opinion? Maybe challenges, what you see as the most promising stuff, that sort of thing?
For the most part, yes. If we just need enough input to control something like a mouse, then there's no real reason to go with an invasive implant. You can pull the same data from EEG and ocular tracking.
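To illustrate what I mean, here's a very rough sketch of a gaze-plus-EEG cursor: gaze position drives the pointer, and a crude EEG amplitude threshold stands in for a click. The get_gaze_point() and get_eeg_sample() functions are made-up placeholders for whatever eye tracker / EEG headset SDK you'd actually be using; only the pyautogui calls are real, and the threshold is arbitrary.

```python
# Rough sketch: gaze drives the cursor, an EEG amplitude spike
# (e.g. a deliberate blink or jaw clench artifact) acts as the "click".
import time
import pyautogui

CLICK_THRESHOLD_UV = 150.0  # microvolts; arbitrary, would be calibrated per user


def get_gaze_point():
    """Placeholder for an eye-tracker SDK call returning (x, y) screen coords."""
    return pyautogui.position()  # stand-in so the loop actually runs


def get_eeg_sample():
    """Placeholder for an EEG headset SDK call returning one amplitude in uV."""
    return 0.0  # stand-in


while True:
    x, y = get_gaze_point()
    pyautogui.moveTo(x, y)                        # cursor follows gaze
    if abs(get_eeg_sample()) > CLICK_THRESHOLD_UV:
        pyautogui.click()                         # EEG spike = click
    time.sleep(0.05)                              # ~20 Hz update loop
```

Obviously real systems do a lot more filtering and calibration than this, but the point stands: none of it needs anything under the skull.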
It would be counteracting seizures.
The problem with BCIs is that there just aren't a lot of uses for them. The quadriplegic community is already small, and their range of cognitive ability runs the gamut, so creating a BCI that is useful to the entire patient population is going to be tough. The largest obstacle would be patient education and training caretakers.
This is part of the reason I discount Musk's interest in BCI as a medical device; there's just no money in it. I think his only real motivation is to sell it to gullible wealthy people.
Another inherent problem with BCI is that it's not seamless. It takes a lot more concentration to operate a mouse with your mind than it does with your body. People don't really understand how much of their movement is handled by their spinal cord instead of their brain.
People have a hard time using interactive spaces when we separate them from physical input, which is why a lot of people struggle with VR. When physical senses like proprioception don't reflect the interactions the same way our visual senses do, we can become physically ill.
I always got the sense that Musk was looking more for some sort of mass adoption of this technology. Ghost in the Shell, Matrix-type shit that we're still probably a century away from, if we don't boil ourselves first. But that also might be marketing mumbo jumbo from him, and none of it really solves any of the short-term problems he'd have, which you've done a good job pointing out and which are probably more relevant.
The difficulty of figuring out use cases is definitely a good point, and it's one you see all over the place with all manner of disabilities. It's unnatural enough to learn how to use a keyboard and mouse already, and those are relatively simple technologies, to say nothing of the months of training it might take to learn how to use a prosthetic limb. I think kids could probably pick this stuff up much faster, but I really don't think it would be a popular decision to start testing your BCI on children, even if you reached a state where it was benign, useful, and guaranteed to be stable.
I also think Musk probably doesn't understand why a BCI won't do much to ease the human-computer interface: it puts the onus on the person, as if they're at fault for not being able to interface with the perfect, "flawless" machine, rather than treating them as another kind of being with distinct, somewhat hardwired limitations. Humans can't really split their attention and do dual processing; they can only focus on one thing at a time, and that strikes me as a pretty big limit on how much data you'll be able to extract from someone with one of these interfaces, even if it were effortless to use. At least if you want them to be able to walk around and still be a functional person, and not end up insane or schizophrenic. I think we've also been seeing that a lot of those processing problems are much easier to solve on the computer side, with these horrible organoids that are stitched to mice and computers and stuff. So that would be pretty neat.
In any case, this all seems to me a little bit overkill for those intentions, when you could just get everyone to learn stenotype if you really wanted to "increase output". Which, again, I'm not sure would really work.
That's also taking Musk strictly at face value on his intentions, but I'm pretty sure the guy likes rockets and electric cars because he has a retrofuturist "I'm the great man of history" kind of deal going on, so I wouldn't put it past him to think that having a plug that goes into your brain and puts you in the Matrix would be a "cool" idea.
Ah yes, the classic “unless you think it will have a long-term benefit to someone else” exception to “do no harm”. I always forget about that part. /s
The Hippocratic Oath is not legally binding, and many doctors are not required to take it, or any oath for that matter. At the end of the day, oaths only matter to people with the strength of character to hold to them no matter the cost, and most people don't have that strength of character. To everyone else, an oath means nothing when it comes down to it; it's just a thing you said once, nothing more.
Oh I know all that, but still…
Can't wait to see the medical drama where one doctor says "you took an oath, goddammit!" and the other replies "nope".
There are way less extreme examples of doctors just fucking things up for a bag of money.
And humans more generally. Imagine if Clarence Thomas had gone into medicine instead of law when he was young.
People with the power to do cruel things always find cruel people to do their bidding, especially when they can justify it with science or say it's "for the betterment of humanity", even if every rational bystander is horrified by what they're doing.
Ethics only matters when there’s an effort to enforce it. The Hippocratic oath is just a reason your employer can fire you for making risky decisions. It means nothing if nobody holds you to it.
If you're a doctor working for Neuralink, nobody expects anything of you but to push the project forward as quickly as possible. For years you only work with monkeys, and when they finally do put a human in the O.R., it's someone who signed away all their rights and accepted all the risks of having an experimental brain chip installed. At that moment, that human patient becomes the single most important subject in the entire experiment.
Of course you do it. You’re getting paid more money than you ever have in your life to do it, and the entire system is designed to protect you so long as you do what the boss says.
People are still people. Doctors are just as susceptible to compromising their ethics as everyone else; the only difference is that they probably have a higher bribe threshold.
I wouldn’t be surprised if there somehow were a cover-up of safety and efficacy of these devices.
Well, it's possible that it was a robot doctor; I kinda doubt it took a Hippocratic Oath.