In which we are joined by Ezri of Swampside Chats, to continue our discussion of "Computer Power and Human Reason: From Judgment to Calculation" by Joseph Weizenbaum.
Computer Power and Human Reason: From Judgment to Calculation (1976) by Joseph Weizenbaum displays the author's ambivalence towards computer technology and lays out the case that while artificial intelligence may be possible, we should never allow computers to make important decisions because computers will always lack human qualities such as compassion and wisdom.
Weizenbaum makes the crucial distinction between deciding and choosing. Deciding is a computational activity, something that can ultimately be programmed. It is the capacity to choose that ultimately makes one a human being. Choice, however, is the product of judgment, not calculation. Comprehensive human judgment is able to include non-mathematical factors such as emotions. Judgment can compare apples and oranges, and can do so without first reducing each fruit to the quantifiable factors necessary for mathematical comparison.
If you like the show, consider supporting us on Patreon.
Links:
Computer Power and Human Reason on Wikipedia
Weizenbaum's Nightmares, on The Guardian
Inside the Very Human Origin of the Term “Artificial Intelligence”
General Intellect Unit on iTunes
http://generalintellectunit.net
Support the show on Patreon
https://twitter.com/giunitpod
General Intellect Unit on Facebook
General Intellect Unit on archive.org
Emancipation Network
Thank you, I’m about to listen now! Should I go back and start with part one or can I just jump in?
They are both introductory, but the first part is more focused; this one is more rambly. They haven’t reached the book yet, they’re more discussing the author and his ideas in general.
So, what are your thoughts?
Took a little break from Hexbear because I found myself getting a little too heated around this particular topic, but I will be back with my thoughts this afternoon. I really appreciate you thinking of me with this and reaching out!
Some random thoughts I jotted down while I was listening to Part 1:
I gotta show my wife this fungi episode they’re referencing.
On representing neurons with binary, I have to admit I struggle with this one as well. I am trying to think of zero as an abstraction of infinity approaching one end of a closed set and one as an abstraction of infinity approaching the other end. We can zoom in on a point in that set for greater specificity, but the further we zoom the less information we have about how that point relates to the rest of the set in that given moment.
What’s counterintuitive is that this is a top-down approach and a bottom-up approach at the same time. Zero is defined by its relationship to one, and one is defined by its relationship to zero. We don’t have a true measure of distance between zero and one without an additional point existing outside of the set to serve as a frame of reference, but then that creates a new set of zero to one.
I’m not sure where I’m going with all of this, but it’s left me confused in a good way. I’m hoping they dig into this a little more in Part 2. I’m also hoping I have the math literacy to understand it because I didn’t start taking an interest in math until well after I was done taking math classes…
Will these traits be replicated in artificial minds to varying degrees as we begin to develop intelligences for more specialized tasks? And if so, how might that change the way these traits are valued more generally?
I’ve got Part 2 in my queue!
I actually think they are doing a slight disservice on how the neurons work in both neural nets and living brains. While you can say a single node collapses into 1/0, actually, due to the large matrix operations, it’s more like 100 neurons collapse into 5.5, 10.7, etc. on some following layer’s neuron. Crucially though, real neurons can have 10,000 connections, wildly outstripping feed-forward NNs, and they are, as you say, not so binary.
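The "collapse" described above can be sketched as a single dense layer: each downstream neuron takes a weighted sum of all 100 upstream activations and produces one continuous value, not a 0/1. (A minimal toy sketch, not from the podcast; the numbers and shapes are illustrative.)

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(100)            # 100 upstream neuron activations
W = rng.normal(size=(5, 100))  # weights into 5 downstream neurons
b = np.zeros(5)                # biases

# Each downstream neuron receives a weighted sum of all 100 inputs:
# a continuous value like 5.5 or 10.7, not a binary 0/1.
pre_activations = W @ x + b
print(pre_activations)

# Only after a nonlinearity (here a sigmoid) are values squashed toward
# the 0..1 range -- and even then they rarely hit 0 or 1 exactly.
squashed = 1.0 / (1.0 + np.exp(-pre_activations))
print(squashed)
```

Note that a biological neuron's ~10,000 synaptic connections dwarf even this fan-in, and its output is a spike train over time rather than a single number.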
I feel like those questions are still describing the human mind, not AI capabilities. Despite being more positive about AI (it’s all ML) as a technological achievement, I don’t think they are even scratching the surface of the consciousness of a dog, despite the dog not being able to speak. They are performing likelihood operations on the whole internet, so if your conversations are with redditors, AI can prolly simulate your whole reddit thread, but not why people say the things they say.
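The "likelihood operations" point can be made concrete with the simplest possible language model: a bigram counter that predicts the next word purely from frequency, with no notion of *why* anyone said anything. (A hypothetical toy example; real LLMs use learned token probabilities over vastly larger contexts, but the principle is the same.)

```python
from collections import Counter, defaultdict

# Tiny corpus standing in for "the whole internet" (made up for illustration).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: pure frequency, no understanding.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the highest-likelihood continuation seen in the corpus."""
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("the"))  # -> "cat" (seen twice, vs "mat"/"fish" once)
```

The model can continue a sentence plausibly, but nothing in it represents intent or meaning — only counts.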
And yes, anthropomorphizing AI is the human mind’s empathy at work (you can empathize with plush toys, for god’s sake), and that is something people should reject (I think outright, but maybe just resist).
I feel like ChatGPT is close to something like an Expanse-style AI: you can talk to it, make it do stuff, but it’s still dumb as hell.
It’s an impressive thing, and it will help people, but thinking it’s your friend is wild.
TL;DR: Hexbear’s kneejerk rejection of general AI claims is correct; rejecting that it is a fucking impressive thing is not correct. People getting caught up in an empathy loop with a data center deserve our empathy.
I agree with you on the empathy issue, but here’s where I hesitate to say it should be rejected outright:
I’ve had some interesting conversations with myself using GPT-4 as a sort of funhouse mirror, and even though I recognize that it’s just a distorted reflection… I’d still feel guilty if I were to behave abusively towards it? And I think maybe that’s healthy. We shouldn’t roleplay engaging in abuse without real-world consequences, if for no other reason than that it makes us more likely to engage in abuse when there are actual stakes.
In this scenario, the ultimate object of my empathy is my own cognitive projection, but the LLM is still the facilitator through which the empathy happens. While there is a very real danger of getting too caught up in that empathy, isn’t there also a danger in rejecting that empathetic impulse or letting it go unexamined?
The problem as I see it (and I’m not a psychologist or whatever) is you don’t have feelings towards your mirror, for example; your brain adapted to your reflection not being a real thing at like 2-3 years old.
The brain doesn’t have natural defenses against empathising with LLMs (even with ELIZA, people were ready to go tell the program their secrets). And feelings aren’t logical (as in, you can know it’s bullshit and still feel some fulfillment from such conversations). They will prolly discuss (in the podcast) what the author thought of that phenomenon with ELIZA, but I can see, on a large scale, that being a problem in an atomized society: a noticeable number of people will drop out into LLM fantasies.
I don’t think there is a danger in rejecting that empathy. I like some plush toys from my childhood; I would be hurt if something happened to them, and I wouldn’t hurt them, but I also don’t empathize with them.