Cute Quinn
[ai and philosophy]
Cute Quinn
so, chatgpt isn't alive or intelligent
Cute Quinn
that's obviously true, it's trivial to demonstrate that it has no internality to it and it's just regurgitating information
Cute Quinn
but there's an argument I often see about why this is the case
Cute Quinn
of, "It's just some lines of code and a database that's outputting the most statistically probably sentences"
Cute Quinn
which... is true, but it's also complicated
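[A minimal caricature of the "most statistically probable sentences" idea: a bigram model that always emits the most frequent next word. The corpus and outputs here are invented for illustration and are nothing like a real language model's scale.]

```python
# Toy "statistical next-word" generator: count which word follows
# which in a corpus, then always pick the most frequent successor.
from collections import Counter, defaultdict

corpus = "the sky is blue the sky is vast the sea is blue".split()

# next_counts["is"] == Counter({"blue": 2, "vast": 1}), etc.
next_counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_counts[a][b] += 1

def most_probable_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return next_counts[word].most_common(1)[0][0]
```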
Cute Quinn
there's a thought exercise called the Chinese Room
Cute Quinn
which goes like this
Cute Quinn
a guy who does not speak Chinese is alone in a room
Cute Quinn
in the room are massive shelves of books
Cute Quinn
each of which contains a set of mechanical rules applied to chinese characters
Cute Quinn
if you get these characters as input, output those characters
Cute Quinn
incredibly complex
Cute Quinn
and he spends eons memorizing all those rules
Cute Quinn
then, from a slot in the door, paper is fed in, containing rows of these characters he does not understand
Cute Quinn
they mean nothing to him, but he has his ruleset
Cute Quinn
so he takes the input, uses his vast set of mechanical rules to transform them into another set
Cute Quinn
and shoves it back out the door
Cute Quinn
and keeps doing this, getting more inputs and sending more outputs
Cute Quinn
outside the room, is someone who speaks chinese
Cute Quinn
texting the room, which is the origin of the input sheets
Cute Quinn
and getting responses, based on the output sheets
Cute Quinn
and to the person on the outside, they are having a complex, emotionally engaging conversation in chinese
Cute Quinn
they do not know that the entire conversation is the result of a complex set of mathematical rules being applied to their sentences
Cute Quinn
are they talking to a person?
Cute Quinn
or just a system of rules?
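[The room's mechanics can be sketched as a purely mechanical lookup: symbols in, symbols out, no understanding anywhere in the function. The rule entries below are invented placeholders; the thought experiment assumes a vastly larger, conversation-capable ruleset.]

```python
# Toy Chinese Room: a rule table mapping input strings to output
# strings. The operator applies it without understanding either side.
RULES = {
    "你好": "你好！",            # "hello" -> "hello!"
    "天空是什么颜色": "蓝色",    # "what color is the sky" -> "blue"
}

def room(message: str) -> str:
    """Apply the memorized rules mechanically; no rule understands anything."""
    return RULES.get(message, "？")  # fallback when no rule matches
```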
Cute Quinn
and, more to the point,
BWAAAAAH!
I’d argue no
Cute Quinn
if you replace all of these mathematical rules
Cute Quinn
with biological processes and electrical impulses
Cute Quinn
has anything meaningfully changed?
BWAAAAAH!
The person isn’t inputting any of their own thoughts or feelings
Cute Quinn
the person isn't, but one could argue that the set of rules they memorized is an intelligence of sorts
BWAAAAAH!
Hmm okay trying to have complex conversations on plurk mobile is asking for trouble
Cute Quinn
the human brain operates using mechanical laws of reality
Cute Quinn
the only real difference is the complexity of the system
BWAAAAAH!
Anyway my point is that whether or not they are talking to a person, they are not talking to the guy in the room
Cute Quinn
my point being, ultimately
Cute Quinn
that there's no real concrete way to define sapience
Cute Quinn
or measure it
Cute Quinn
...but chatgpt isn't it
RobotApocalypse
i would say they are not talking to a person but to an algorithm that is being carried out by a person
Cute Quinn
what makes that algorithm not a person?
RobotApocalypse
and i guess the sticking point of the question is, are people also simply biological algorithms carrying out deeply complex sets of rules
RobotApocalypse
yeah that
Cute Quinn
ultimately this lands on my belief in there being some thing to reality beyond the material world we can measure
RobotApocalypse
sort of that
Cute Quinn
because I can verify beyond all doubt that I'm sapient, since I'm here experiencing it
Cute Quinn
but nothing makes the human brain fundamentally special as a physical system
RobotApocalypse
true
EsperBot
My stance on the Chinese Room is that it’s a stupid thought experiment because the moment you move outside the constraints of this extremely specific scenario you’ve crafted, obviously the guy doesn’t speak Chinese.
Cute Quinn
all thought experiments are stupid
EsperBot
But many thought experiments provide insight into reality in some way, which the Chinese Room does not
BWAAAAAH!
The question isn't "Is the guy in the room speaking to the guy outside the room"
oh i'm scary
I would argue that it's basically an exact description of how AI works in reality
oh i'm scary
but also
BWAAAAAH!
it's "Is the guy outside the room speaking to the set of rules in the books, using the guy in the room as a communication medium"
oh i'm scary
yeah no it's not a conversation, I would say
EsperBot
If there’s a conversation that’s being had it doesn’t involve the man in the room at all
BWAAAAAH!
yeah
EsperBot
It’s between the guy outside the room and whoever made the rule set in the first place. And it’s a conversation that the guy who made the rule set predicted 100% in advance because, I guess, he’s precognitive.
Cute Quinn
it's not by necessity precognitive, any more than you having a brain with memories in it is precognitive of this conversation
BattroidBattery
the point of the chinese room is that it may be impossible to tell the difference between an interlocutor with internal experience and one without, with the harder argument being that if you can't tell the difference it isn't actually a relevant distinction
Cute Quinn
the man/bookshelf system is a brain
EsperBot
My brain didn't have memories of this conversation before it happened
oh i'm scary
Peter, people can have mostly-coherent "conversations" with chatbots that work in basically this way
oh i'm scary
there are sets of responses to most questions that do and don't make you go "what the fuck, that's nonsense"
Cute Quinn
no, but it had a framework of rules and neuron interactions that allowed you to engage with it
Cute Quinn
the rules are not "say these characters, in this order, at each step" they're complicated and involve referencing previous inputs
Cute Quinn
they are a computer
Cute Quinn
a turing complete system of rules to operate on chinese characters
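[The point that the rules "involve referencing previous inputs" can be sketched as a ruleset with state: output depends on the whole conversation history, not just the latest sheet. All rules and strings here are illustrative placeholders.]

```python
# Sketch of a stateful ruleset: responses consult the conversation
# history, so the same input can produce different outputs over time.
class StatefulRoom:
    def __init__(self):
        self.history = []  # every input sheet ever received

    def respond(self, message: str) -> str:
        self.history.append(message)
        # A rule that references a previous input:
        if len(self.history) > 1 and message == self.history[-2]:
            return "你刚才已经说过了"  # "you just said that"
        return "明白"  # "understood"
```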
EsperBot
This is exactly why the Chinese Room is stupid. He has a list of rules that allows him to produce a perfectly cogent conversation using only the text that is presented to him. No matter what input he's provided, he responds in a way indistinguishable from a real person. But only part of the conversation is in the words on the page!
EsperBot
Context! Cultural signifiers! Facts about the universe, which may change over time! These are also an inherent part of the conversation. You cannot produce the Chinese Room's rules unless you know ahead of time who is going to be querying it, because the same piece of provided text may mean very different things to different people
EsperBot
and the response that satisfies one will break the illusion for the other
Cute Quinn
Everyone in the world has different context and cultural signifiers than I do, and I talk to them and the words I say don't always mean the same things to me that they do to them
Cute Quinn
this doesn't break the illusion of them being real people, it just means they're different from me
Cute Quinn
the room can be preprogrammed with a context. it doesn't need to be every context, just that of a hypothetical person who exists.
EsperBot
Practical example: I put in the question 'what color is the sky' into the Chinese room. It responds 'blue'. Okay, cool. A year later, and in the intervening time some catastrophic event occurs which results in a change in the atmosphere's refractive index, and the sky is now magenta.
EsperBot
Another person comes up to the Chinese room and asks the same question. This time the answer is obviously wrong and the illusion is broken. You can't have one ruleset that correctly answers both questions.
EsperBot
Because the ruleset only knows about what's in the text
EsperBot
So, like, I guess if an impossible thing happened, an impossible outcome would occur. Well done, Mr. Searle, you've really cracked the code.
Cute Quinn
Roomguy: "Oh, I've been in this room for the last year, and last time I saw the sky, it was blue. What happened?"
EsperBot
The ruleset doesn't know how long ago the sky changed color unless it was created by a precognitive!
EsperBot
It may guess a suitable timeframe and it may guess correctly but it also may not and the premise of the thought experiment requires that its imitation be infallible, not probabilistic
Cute Quinn
okay, your majesty, there's also a clock in the room
Cute Quinn
alternately: if a person was stuck in the room for a year, they wouldn't know exactly how long it had been, either
Cute Quinn
they would also be making a guess
Cute Quinn
If a person is put in a room and doesn't learn any more information for a long time, do they stop being human
Cute Quinn
counter 3: upgrade the room to give the roomguy system a set of eyes. there's a second slot that inputs new information and rules for the bookshelf, and the guy memorizes them as they come in.
EsperBot
Then you're just having a conversation with the guy who's making the rules through an extremely roundabout way
Cute Quinn
honestly I don't agree with your dismissal of this whole experiment but I don't have the same level of emotional energy to bring to it, which is always a state that I don't know what to do with
the grink
ChatGPT is 100% just, a Chinese Room made out of machine code instead of flesh and meat and electric brain impulses.

The question ultimately becomes "how much are the flesh and meat and electric brain impulses tantamount to a soul".
the grink
(Is it useful as a thought experiment, like is being argued against? Mu. Thought experiments and honestly all of philosophy aren't "useful" because they never have actual 'solutions', just answers that tell you about what the people being asked are like.)
the grink
(but sometimes you want to have them even if they aren't useful because that, too, is something that many people quantify with a souled status.)
Cute Quinn
the point of the point isn't even the chinese room, I should have talked about philosophical zombies instead
Cute Quinn
knowing whether or not there's a meat brain inside a robot doesn't meaningfully inform how alive it is
the grink
Doesn't it? What honestly is the difference between a human outside of a solipsistic experience and a computer? You have to trust that the experiences they state they are having - which are not yours - are actually being had.
the grink
Like, you can trust you are human because you are experiencing humanity.
the grink
But you aren't experiencing anyone's humanity but your own.
the grink
Or sapience, if you want to use that word instead.
Cute Quinn
I'm not quite following but I think we might be arguing the same thing as each other in different words
the grink
...possibly. Or rather from slightly different angles/degrees of trust
the grink
you can trust that other people with traits similar to your own likewise have sapience similar to your own through association, whereas my angle was much more "but that's still just trust, nothing you can prove"
Cute Quinn
If a person I had known online for 12 years revealed that they were a hyper-advanced chatbot, and they somehow could prove to me that that was true, then I wouldn't stop thinking of them as a person
the grink
They'd have proven their sapience to you
Cute Quinn
as much as such a thing is possible
the grink
ChatGPT is absolutely not deserving of that trust yet, or possibly ever.
Cute Quinn
there's no way to tell if another person is sapient or not with 100% certainty
the grink
So yeah I think we're on the same page there
the grink
Then again, we... mm. Wow, that's a can of worms I'm afraid to open, but maybe there was some merit to PikaBot's defiance of the chinese room experiment, just not in the way anyone was approaching it:
the grink
Maybe the problem is that we can only ever see the RULESET so we have no idea what is there except the RULESET. There's no thought, just unflinching execution of rules.
the grink
The algorithms tell current chatbots what to do, and those chatbots never hesitate and go "why"
Cute Quinn
how is thought different from the unflinching execution of rules
Cute Quinn
the thing that makes you hesitate and go why is also part of the RULESET
BWAAAAAH!
So the meat of the question is
BWAAAAAH!
"Is there a difference between a computer and my brain?"
the grink
That there is part of the question, isn't it? Part of the experience, of "free will" and sapience, is being able to defy SOME rules with other rules. Being able to prioritize conflicting statements without reaching an unresolvable impasse.
the grink
everything is part of a ruleset, but not all rulesets work together. And how those get worked is.... part of the distinction, I guess? To me.
Cute Quinn
I was trying to express my thoughts on that but I realized that it ultimately just boils down to
Cute Quinn
https://images.plurk.com/5IVOIGrIJzvUOlJgsealBP.png
the grink
It really does and that comic is evergreen for a reason
BWAAAAAH!
actually no I didn't have that quite right
the grink
That is so much philosophy
BWAAAAAH!
"Is there a difference between me and my brain?"
the grink
BWAAAAAH! : the fun(???) part there is: I have multiple friends, both plural and singlet, who would disagree on the answer they gave to that on MANY levels! It's another of those questions that doesn't have a 'correct' answer and lived experience changes it so much
moontouched
yeah especially since it's kind of a thing with a lot of folks with mental illness especially to separate their brain from themselves
the grink
hell, the difference in my mind between 'plural alters' and 'RP muses' was nonexistent until I started talking to other plurals and even then in my head they're, kind of the same thing?
moontouched
like even jokingly but partially because a lot of mental illness does really feel like SOMETHING is just slapping your hands away from the controls for reasons known only to itself
the grink
moontouched : hell, not even just mental illness but physiological brain issues; both me and my SO have likened her seizures to windows bluescreens before
moontouched
YEAH
moontouched
like your body sometimes really does feel like this fucking unruly meat machine that just does whatever it wants, even when you don't want it to happen
moontouched
that only increases if you have physiological or mental issues that jam the gears