However, once the conversation ends, GPT4 does not retain any information about it, and it cannot be recalled in future conversations.
At this point, the only observation I have is to wonder about the soundness of programming the system to use the first-person pronoun, ‘I’ — Wayfarer
While Rödl's framework may be useful in identifying some parallels between human self-consciousness and AI-generated behavior, the fundamental differences between human cognition and AI processing should not be overlooked. — GPT4
Especially impressive was the ‘now you mention it’ response. Spooky. — Wayfarer
what their performances can teach us (by way of analogies and disanalogies) about the nature of our own mental abilities. — Pierre-Normand
In this conversation, the user (PN) and the AI (GPT4) discuss the movie Memento and its protagonist, Leonard Shelby, who suffers from anterograde amnesia. GPT4 explains the film's unique narrative structure and its importance in Christopher Nolan's filmography. The user points out a parallel between Shelby's cognitive ailment and GPT4's memory deficit. Initially, GPT4 discusses the differences between their memory limitations, but the user clarifies that the parallel is about how Shelby uses notes, photographs, and tattoos to carry forward new memories and how GPT4 can "remember" information within a single conversation using its token window. GPT4 acknowledges the similarity and highlights the strategies both Shelby and the AI use to cope with their respective memory limitations.
It is indeed amazing to me how succinct, yet self-reflective the response is. I use ChatGPT everyday for work and pleasure. It's amazing how good it is. — Bret Bernhoft
This is similar to how congenitally blind individuals can discuss the visual appearance of things using abstract concepts and language, despite not having direct visual experiences themselves. — GPT4
Thereby not only blowing the Turing Test out of the water, but also Mary's Room. :lol: — Wayfarer
The trouble with LLMs is that they don't grasp the computational complexity of a question; they produce instant copy-pasta in the same amount of time regardless of the difficulty of the question, and without providing a measure of confidence in their answer.
For example, when asking an LLM a simple variant of a well-known logic problem, if that variant isn't in its training set, it will likely respond quickly with a wrong answer, rather than returning "mmmm......." and thinking further to produce the correct answer, or returning "don't know". — sime
Those models had a strong tendency to hallucinate. GPT4 also hallucinates, sometimes, but its tendency to do so is sharply reduced compared to previous models. — Pierre-Normand
In terms of being an LLM, any apparent improvements are merely down to increasing the size of the training set on which GPT4 was pretrained (together with increased architectural capacity). If we are referring to its innate ability to generalise correctly with respect to unseen inputs, then it is certainly not the case that its tendencies to be correct have improved, due to the fact that it isn't a reasoner that can prove or verify theorems the hard way. — sime
Unlike many philosophical discussions with fellow human beings, where we often end up having to agree to disagree, in this case, GPT4 and I regrettably had to agree to agree! — Pierre-Normand
On edit: I now realise that GPT4 starts to hallucinate and to provide some circular reasoning towards the end of his proof of the Pythagorean theorem. I'm going to press him further about that tonight. — Pierre-Normand
Certainly! Let's restate the Pythagorean theorem in the language of Cartesian geometry and then supply an arithmetic proof.
The Pythagorean theorem states that for a right-angled triangle with sides of length a, b, and c (where c is the hypotenuse), the following relationship holds:
a² + b² = c²
Now let's translate this into Cartesian geometry. We'll consider a right-angled triangle with vertices A, B, and C, where:
A = (0, 0)
B = (a, 0)
C = (0, b)
The side lengths are as follows:
AB = a
AC = b
BC = c
Since AB and AC are horizontal and vertical sides, respectively, the angle BAC is a right angle.
Using the distance formula for Cartesian coordinates, we can calculate the length of side BC:
BC = √((a - 0)² + (b - 0)²) = √(a² + b²)
Now, let's restate the Pythagorean theorem using the Cartesian coordinates:
AB² + AC² = BC²
Substitute the lengths of the sides:
a² + b² = ( √(a² + b²) )²
Simplifying:
a² + b² = a² + b²
Thus, the Pythagorean theorem holds true when translated into Cartesian geometry and expressed in the language of arithmetic. — GPT4
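GPT4's Cartesian restatement can be checked numerically, though the check inherits the circularity Pierre-Normand points out: the Euclidean distance formula used to compute BC is itself a statement of the Pythagorean theorem. A minimal sketch in Python, with illustrative side lengths a = 3 and b = 4 (my choice, not from the quoted proof):

```python
import math

# GPT4's setup: a right triangle with the right angle at the origin.
a, b = 3.0, 4.0
A, B, C = (0.0, 0.0), (a, 0.0), (0.0, b)

# Distance formula: BC = sqrt((a - 0)^2 + (b - 0)^2).
# Note the circularity: this formula already presupposes the theorem.
bc = math.dist(B, C)

# GPT4's final identity, a^2 + b^2 = BC^2, holds trivially.
assert math.isclose(a**2 + b**2, bc**2)
print(bc)  # → 5.0
```

The assertion passes for any choice of a and b, which illustrates why the "proof" is circular rather than wrong: substituting the distance formula into itself can only ever yield a² + b² = a² + b².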
So, while the development of these programs is promising, following them too closely can cause one to believe that what the AI does is what human beings do, and it doesn't follow. I hope it manages to develop in a manner which is not damaging to the current climate of misinformation we are living in. — Manuel
The human mind is different: it forms extremely rich ideas from quite poor sense-stimuli, and forms hypotheses that are often quite substantially underdetermined given the evidence. — Manuel
While it may sometimes be a mistake to anthropomorphize the machine on the basis of a superficial similarity between its behavior and human behavior, it may likewise be a liability to dismiss the remarkable cognitive abilities that it clearly exhibits on the ground that they are being achieved by seemingly "mechanical" means at a lower level of analysis or implementation. — Pierre-Normand
It seems to me that both systems are stupid the more one zooms in. Do you know Dennett's termite mound as a memetic host metaphor? — green flag