Space Catitude
in reply to D. Olifant • • •The funniest thing is that, instead of acknowledging the error, it tries to cover for it. I'm not sure why it should do that -- except that it's imitating humans doing it.
Kenneth
in reply to D. Olifant • • •This is actually what LLMs do best. They can create beautifully plausible prose to explain even the most nonsensical and asinine things as if they were obvious.
@TerryHancock the GPT has no conception of acknowledging or covering an error, or even the concept of an error. It's just stringing words together in a way that sounds good.
Space Catitude
in reply to Kenneth • • •It's meeting an expectation, based on the codex.
But if the codex contained many people acknowledging mistakes, then surely it should do this?
ISTM that it is imitating human hubris. It's funny, partly because we expect more integrity from machines, but also (to me) because it seems to reflect a human weakness.
Jonathan Gulbrandsen
in reply to D. Olifant • • •
Mastokarl
in reply to Jonathan Gulbrandsen • • •
Claudius Link
in reply to Mastokarl • • •I got a similar result. But I could get back to the wrong results when "pressing" ChatGPT that its answer was wrong.
infosec.exchange/@realn2s/1146…
Actually, I find the different results even more worrying. A consistent error could be "fixed", but random errors are much harder or impossible to fix (especially if they are an inherent property of the system/LLMs).
Osma A
in reply to Claudius Link • • •@realn2s @Mastokarl @JonathanGulbrandsen @oli @cstross
Mastokarl
in reply to Osma A • • •I assume the guy who came up with the stochastic parrot metaphor is very embarrassed by it by now. I would be.
(Completely ignoring the deep concept building that those multi-layered networks do when learning from vast datasets, so they stochastically work on complex concepts that we may not even understand, but yes, parrot.)
Osma A
in reply to Mastokarl • • •I very much doubt she is - I'll leave it to you as an exercise to discover why. Are you aware what the word stochastic means? @Mastokarl @realn2s @JonathanGulbrandsen @oli @cstross
Mastokarl
in reply to Osma A • • •
Charlie Stross
in reply to Mastokarl • • •But you're evidently gullible enough to have fallen for the grifter's proposition that the text strings emerging from a stochastic parrot relate to anything other than the text strings that went into it in the first place: we've successfully implemented Searle's Chinese Room, not an embodied intelligence.
en.wikipedia.org/wiki/Chinese_…
(To clarify: I think that a general artificial intelligence might be possible in principle: but this ain't it.)
Mastokarl
in reply to Charlie Stross • • •no, I just argue that the concept formation that happens in deep neural nets is responsible for the LLM's astonishingly "intelligent" answers. And the slur "parrot" is not doing the nets justice.
personally, and yes, I'm influenced by Sapolsky's great work, I believe we humans are not more than a similar network with a badly flawed logic add-on, an explanation component we call consciousness, and a belief in magic that we are more than that.
Claudius Link
in reply to Charlie Stross • • •Agree. I'm more and more convinced that today's chatbots are just an advanced version of ELIZA, fooling the users and just appearing intelligent
en.wikipedia.org/wiki/ELIZA
I wrote a thread about it infosec.exchange/@realn2s/1117…
where @dentangle fooled me using the ELIZA techniques
Charlie Stross
in reply to Claudius Link • • •Internally they're fundamentally different from Eliza, but they definitely exhibit the Eliza *effect*, that is, they create the illusion that the user is in conversation with something that has a theory of mind (when in fact they don't).
Claudius Link
in reply to Charlie Stross • • •I'm not sure about the "difference".
They differ in sheer scale, for sure (molehill vs. mountain).
On a higher level:
ELIZA used ranked keywords whose relations to the output sequences were hardcoded in the source.
LLMs use tokens with probabilities whose relations to the output token sequences are determined through training data.
Closing with an anecdote from the wiki page:
Weizenbaum's own secretary reportedly asked Weizenbaum to leave the room so that she and ELIZA could have a real conversation. Weizenbaum was surprised by this, later writing: "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."
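As a purely illustrative sketch of the contrast described in the post above (nothing here comes from the real ELIZA source or any real model; the keyword rules and token probabilities are invented), the difference might look roughly like this in Python:

# Illustrative sketch only: rules and probabilities below are made up.
import random

# ELIZA-style: ranked keywords and their canned responses are hardcoded in the source.
ELIZA_RULES = [
    (10, "mother", "Tell me more about your family."),
    (5, "always", "Can you think of a specific example?"),
    (0, "", "Please go on."),  # empty keyword matches anything -> fallback
]

def eliza_reply(text):
    # The highest-ranked rule whose keyword occurs in the input wins.
    for rank, keyword, response in sorted(ELIZA_RULES, reverse=True):
        if keyword in text.lower():
            return response

# LLM-style (very crudely): the next token is sampled from a probability
# distribution; in a real model those probabilities come from training data,
# not from a hand-written table like this one.
NEXT_TOKEN_PROBS = {
    "9.11": {" is": 0.6, " minus": 0.4},
    " is": {" bigger": 0.7, " smaller": 0.3},
}

def sample_next(token):
    dist = NEXT_TOKEN_PROBS.get(token, {"<end>": 1.0})
    tokens, weights = zip(*dist.items())
    return random.choices(tokens, weights=weights)[0]

print(eliza_reply("My mother always says so"))  # -> Tell me more about your family.
print(sample_next("9.11"))                      # -> " is" or " minus", stochastically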
Mastokarl
in reply to Charlie Stross • • •Thought about this some more. No, I do not think that, using the Chinese room thought experiment, you will ever accept a computer program's behavior as intelligent, no matter how much evidence of intelligent behavior you get, because by definition there's an algorithm executing it.
I don't agree, because I don't buy into the human exceptionalism that we meat machines have some magic inside of us that gives us intent the machines can't have.
argv minus one
in reply to D. Olifant • • •$ python3
Python 3.13.3 (main, Apr 10 2025, 21:38:51) [GCC 14.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 9.11 - 9.9
-0.7900000000000009
That's closer than 0.21, but it still isn't correct. You need decimal arithmetic to get the correct answer:
jshell> new BigDecimal("9.11").subtract(new BigDecimal("9.9"))
$1 ==> -0.79
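As a side note (not part of the original post): Python's standard decimal module gives the same exact result as the BigDecimal call above. The binary float result is off only because neither 9.11 nor 9.9 has an exact base-2 representation; building the decimals from strings avoids that rounding:

>>> from decimal import Decimal   # exact decimal arithmetic, no binary rounding
>>> Decimal("9.11") - Decimal("9.9")
Decimal('-0.79')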
Claudius Link
in reply to argv minus one • • •I'm probably trying to approach this the wrong way (trying to understand the cause of this error)
I don't get where the 0.21 result is coming from
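One speculative illustration (an assumption, not something claimed anywhere in the thread): 0.21 is exactly what you get if the fractional parts are subtracted with a borrow from the units column and that borrow is then never carried back, i.e. the correct answer plus the forgotten 1:

>>> from decimal import Decimal
>>> Decimal("9.11") - Decimal("9.9") + 1   # hypothetical dropped-borrow path
Decimal('0.21')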
Claudius Link
in reply to Claudius Link • • •Just for fun I asked ChatGPT the same question and now the answer is "correct" (it was wrong but it "corrected" itself)
Funny enough, when pressing it that it was wrong and the right answer was 0.21, I got this