Quinnfernal
[ai and philosophy]
Quinnfernal
so, chatgpt isn't alive or intelligent
Quinnfernal
that's obviously true, it's trivial to demonstrate that it has no internality to it and it's just regurgitating information
Quinnfernal
but there's an argument I often see about why this is the case
Quinnfernal
of, "It's just some lines of code and a database that's outputting the most statistically probable sentences"
Quinnfernal
which... is true, but it's also complicated
Quinnfernal
there's a thought exercise called the Chinese Room
Quinnfernal
which goes like this
Quinnfernal
a guy who does not speak Chinese is alone in a room
Quinnfernal
in the room are massive shelves of books
Quinnfernal
each of which contains a set of mechanical rules applied to chinese characters
Quinnfernal
if you get these characters as input, output those characters
Quinnfernal
incredibly complex
Quinnfernal
and he spends eons memorizing all those rules
Quinnfernal
then, from a slot in the door, paper is fed in, containing rows of these characters he does not understand
Quinnfernal
they mean nothing to him, but he has his ruleset
Quinnfernal
so he takes the input, uses his vast set of mechanical rules to transform them into another set
Quinnfernal
and shoves it back out the door
Quinnfernal
and keeps doing this, getting more inputs and sending more outputs
Quinnfernal
outside the room, is someone who speaks chinese
Quinnfernal
texting the room, which is the origin of the input sheets
Quinnfernal
and getting responses, based on the output sheets
Quinnfernal
and to the person on the outside, they are having a complex, emotionally engaging conversation in chinese
Quinnfernal
they do not know that the entire conversation is the result of a complex set of mathematical rules being applied to their sentences
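(Editor's aside: the room's mechanics, as described above, can be sketched as a toy lookup table. Everything here is invented for illustration; a real room would need an astronomically larger and more complex ruleset.)

```python
# The "books": a lookup table mapping input strings to output strings.
# All entries are made-up examples.
RULEBOOK = {
    "你好": "你好！",          # a greeting maps to a greeting
    "天空是什么颜色": "蓝色",   # "what color is the sky" maps to "blue"
}

def room(message: str) -> str:
    """The man in the room: match the input against the rules,
    emit the corresponding output, and understand none of it."""
    return RULEBOOK.get(message, "？")  # fallback for unmatched input

print(room("你好"))  # the observer outside sees a fluent reply: 你好！
```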
Quinnfernal
are they talking to a person?
Quinnfernal
or just a system of rules?
Quinnfernal
and, more to the point,
Snake-Kun
I’d argue no
Quinnfernal
if you replace all of these mathematical rules
Quinnfernal
with biological processes and electrical impulses
Quinnfernal
has anything meaningfully changed?
Snake-Kun
The person isn’t inputting any of their own thoughts or feelings
Quinnfernal
the person isn't, but one could argue that the set of rules they memorized is an intelligence of sorts
Snake-Kun
Hmm okay trying to have complex conversations on plurk mobile is asking for trouble
Quinnfernal
the human brain operates using mechanical laws of reality
Quinnfernal
the only real difference is the complexity of the system
Snake-Kun
Anyway my point is that whether or not they are talking to a person, they are not talking to the guy in the room
Quinnfernal
my point being, ultimately
Quinnfernal
that there's no real concrete way to define sapience
Quinnfernal
or measure it
Quinnfernal
...but chatgpt isn't it
RobotApocalypse
i would say they are not talking to a person but to an algorithm that is being carried out by a person
Quinnfernal
what makes that algorithm not a person?
RobotApocalypse
and i guess the sticking point of the question is, are people also simply biological algorithms carrying out deeply complex sets of rules
RobotApocalypse
yeah that
Quinnfernal
ultimately this lands on my belief in there being some thing to reality beyond the material world we can measure
RobotApocalypse
sort of that
Quinnfernal
because I can verify beyond all doubt that I'm sapient, since I'm here experiencing it
Quinnfernal
but nothing makes the human brain fundamentally special as a physical system
RobotApocalypse
true
EsperBot
My stance on the Chinese Room is that it’s a stupid thought experiment because the moment you move outside the constraints of this extremely specific scenario you’ve crafted, obviously the guy doesn’t speak Chinese.
Quinnfernal
all thought experiments are stupid
EsperBot
But many thought experiments provide insight into reality in some way, which the Chinese Room does not
Snake-Kun
The question isn't "Is the guy in the room speaking to the guy outside the room"
oh i'm scary
I would argue that it's basically an exact description of how AI works in reality
oh i'm scary
but also
Snake-Kun
it's "Is the guy outside the room speaking to the set of rules in the books, using the guy in the room as a communication medium"
oh i'm scary
yeah no it's not a conversation, I would say
EsperBot
If there’s a conversation that’s being had it doesn’t involve the man in the room at all
Snake-Kun
yeah
EsperBot
It’s between the guy outside the room and whoever made the rule set in the first place. And it’s a conversation that the guy who made the rule set predicted 100% in advance because, I guess, he’s precognitive.
Quinnfernal
it's not by necessity precognitive, any more than you having a brain with memories in it is precognitive of this conversation
BattroidBattery
the point of the chinese room is that it may be impossible to tell the difference between an interlocutor with internal experience and one without, with the harder argument being that if you can't tell the difference it isn't actually a relevant distinction
Quinnfernal
the man/bookshelf system is a brain
EsperBot
My brain didn't have memories of this conversation before it happened
oh i'm scary
Peter people can have mostly-coherent "conversations" with chatbots that work in basically this way
oh i'm scary
there are sets of responses to most questions that do and don't make you go "what the fuck, that's nonsense"
Quinnfernal
no, but it had a framework of rules and neuron interactions that allowed you to engage with it
Quinnfernal
the rules are not "say these characters, in this order, at each step" they're complicated and involve referencing previous inputs
Quinnfernal
they are a computer
Quinnfernal
a turing complete system of rules to operate on chinese characters
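(Editor's aside: the point that the rules "involve referencing previous inputs" means the room is a stateful program, not a fixed script. A minimal sketch, with all rules and phrases invented:)

```python
# Sketch of a stateful ruleset: responses can depend on the
# history of prior inputs, not just the current one.
class StatefulRoom:
    def __init__(self):
        self.history = []  # previous inputs the rules may consult

    def respond(self, message: str) -> str:
        self.history.append(message)
        # A rule that references a previous input:
        if len(self.history) > 1 and message == self.history[-2]:
            return "你刚才已经问过了"  # "you just asked that"
        return "蓝色" if "颜色" in message else "？"

room = StatefulRoom()
room.respond("天空是什么颜色")  # first time: answers the question
room.respond("天空是什么颜色")  # second time: notices the repetition
```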
EsperBot
This is exactly why the Chinese Room is stupid. He has a list of rules that allows him to produce a perfectly cogent conversation using only the text that is presented to him. No matter what input he's provided, he responds in a way indistinguishable from a real person. But only part of the conversation is in the words on the page!
EsperBot
Context! Cultural signifiers! Facts about the universe, which may change over time! These are also an inherent part of the conversation. You cannot produce the Chinese Room's rules unless you know ahead of time who is going to be querying it, because the same piece of provided text may mean very different things to different people
EsperBot
and the response that satisfies one will break the illusion for the other
Quinnfernal
Everyone in the world has different context and cultural signifiers than I do, and I talk to them and the words I say don't always mean the same things to me that they do to them
Quinnfernal
this doesn't break the illusion of them being real people, it just means they're different from me
Quinnfernal
the room can be preprogrammed with a context. it doesn't need to be every context, just that of a hypothetical person who exists.
EsperBot
Practical example: I put in the question 'what color is the sky' into the Chinese room. It responds 'blue'. Okay, cool. A year later, and in the intervening time some catastrophic event occurs which results in a change in the atmosphere's refractive index, and the sky is now magenta.
EsperBot
Another person comes up to the Chinese room and asks the same question. This time the answer is obviously wrong and the illusion is broken. You can't have one ruleset that correctly answers both questions.
EsperBot
Because the ruleset only knows about what's in the text
EsperBot
So, like, I guess if an impossible thing happened, an impossible outcome would occur. Well done, Mr. Searle, you've really cracked the code.
Quinnfernal
Roomguy: "Oh, I've been in this room for the last year, and last time I saw the sky, it was blue. What happened?"
EsperBot
The ruleset doesn't know how long ago the sky changed color unless it was created by a precognitive!
EsperBot
It may guess a suitable timeframe and it may guess correctly but it also may not and the premise of the thought experiment requires that its imitation be infallible, not probabilistic
Quinnfernal
okay, your majesty, there's also a clock in the room
Quinnfernal
alternately: if a person was stuck in the room for a year, they wouldn't know exactly how long it had been, either
Quinnfernal
they would also be making a guess
Quinnfernal
If a person is put in a room and doesn't learn any more information for a long time, do they stop being human
Quinnfernal
counter 3: upgrade the room to give the roomguy system a set of eyes. there's a second slot that inputs new information and rules for the bookshelf, and the guy memorizes them as they come in.
EsperBot
Then you're just having a conversation with the guy who's making the rules through an extremely roundabout way
Quinnfernal
honestly I don't agree with your dismissal of this whole experiment but I don't have the same level of emotional energy to bring to it, which is always a state that I don't know what to do with
(it's a helmet)
ChatGPT is 100% just, a Chinese Room made out of machine code instead of flesh and meat and electric brain impulses.

The question ultimately becomes "how much are the flesh and meat and electric brain impulses tantamount to a soul".
(it's a helmet)
(Is it useful as a thought experiment, like is being argued against? Mu. Thought experiments and honestly all of philosophy aren't "useful" because they never have actual 'solutions', just answers that tell you about what the people being asked are like.)
(it's a helmet)
(but sometimes you want to have them even if they aren't useful because that, too, is something that many people quantify with a souled status.)
Quinnfernal
the point of the point isn't even the chinese room, I should have talked about philosophical zombies instead
Quinnfernal
knowing whether or not there's a meat brain inside a robot doesn't meaningfully inform how alive it is
(it's a helmet)
Doesn't it? What honestly is the difference between a human outside of a solipsistic experience and a computer? You have to trust that the experiences they state they are having - which are not yours - are actually being had.
(it's a helmet)
Like, you can trust you are human because you are experiencing humanity.
(it's a helmet)
But you aren't experiencing anyone's humanity but your own.
(it's a helmet)
Or sapience, if you want to use that word instead.
Quinnfernal
I'm not quite following but I think we might be arguing the same thing as each other in different words
(it's a helmet)
...possibly. Or rather from slightly different angles/degrees of trust
(it's a helmet)
you can trust that other people with traits similar to your own likewise have sapience similar to your own through association, whereas my angle was much more "but that's still just trust, nothing you can prove"
Quinnfernal
If a person I had known online for 12 years revealed that they were a hyper-advanced chatbot, and they somehow could prove to me that that was true, then I wouldn't stop thinking of them as a person
(it's a helmet)
They'd have proven their sapience to you
Quinnfernal
as much as such a thing is possible
(it's a helmet)
ChatGPT is absolutely not deserving of that trust yet, or possibly ever.
Quinnfernal
there's no way to tell if another person is sapient or not with 100% certainty
(it's a helmet)
So yeah I think we're on the same page there
(it's a helmet)
Then again, we... mm. Wow, that's a can of worms I'm afraid to open, but maybe there was some merit to PikaBot's defiance of the chinese room experiment, just not in the way anyone was approaching it:
(it's a helmet)
Maybe the problem is that we can only ever see the RULESET so we have no idea what is there except the RULESET. There's no thought, just unflinching execution of rules.
(it's a helmet)
The algorithms tell current chatbots what to do, and those chatbots never hesitate and go "why"
Quinnfernal
how is thought different from the unflinching execution of rules
Quinnfernal
the thing that makes you hesitate and go why is also part of the RULESET
Snake-Kun
So the meat of the question is
Snake-Kun
"Is there a difference between a computer and my brain?"
(it's a helmet)
That there is part of the question, isn't it? Part of the experience, of "free will" and sapience, is being able to defy SOME rules with other rules. Being able to prioritize conflicting statements without reaching an unresolvable impasse.
(it's a helmet)
everything is part of a ruleset, but not all rulesets work together. And how those get worked is.... part of the distinction, I guess? To me.
Quinnfernal
I was trying to express my thoughts on that but I realized that it ultimately just boils down to
Quinnfernal
https://images.plurk.com/5IVOIGrIJzvUOlJgsealBP.png
(it's a helmet)
It really does and that comic is evergreen for a reason
Snake-Kun
actually no I didn't have that quite right
(it's a helmet)
That is so much philosophy
Snake-Kun
"Is there a difference between me and my brain?"
(it's a helmet)
Snake-Kun : the fun(???) part there is: I have multiple friends, both plural and singlet, who would disagree on the answer they gave to that on MANY levels! It's another of those questions that doesn't have a 'correct' answer and lived experience changes it so much
moontouched
yeah especially since it's kind of a thing with a lot of folks with mental illness especially to separate their brain from themselves
(it's a helmet)
hell, the difference in my mind between 'plural alters' and 'RP muses' was nonexistent until I started talking to other plurals and even then in my head they're, kind of the same thing?
moontouched
like even jokingly but partially because a lot of mental illness does really feel like SOMETHING is just slapping your hands away from the controls for reasons known only to itself
(it's a helmet)
moontouched : hell, not even just mental illness but physiological brain issues; both me and my SO have likened her seizures to windows bluescreens before
moontouched
YEAH
moontouched
like your body sometimes really does feel like this fucking unruly meat machine that just does whatever it wants, even when you don't want it to happen
moontouched
that only increases if you have physiological or mental issues that jam the gears