It sounds like I should read some Brandom. Any pointers on where to start? — wonderer1
I can see the day (and it's not far off) when the entire techno-landscape is accessed through a single interface — Wayfarer
It sounds like you might appreciate The Neural Basis of Free Will: Criterial Causation by Peter Tse. — wonderer1
Opus read my mind and understood everything that I was driving at. Llama 3 does sound like a stochastic parrot; Claude 3 not at all. — Pierre-Normand
Ha! It speculates about how it answered the question. — frank
It indeed does! Our introspective abilities to tell, after the fact, by which mental means we arrived at answers to questions are also fallible. In the case of LLMs, a lack of episodic memories associated with their mental acts, as well as a limited ability to plan ahead, generates specific modes of fallibility in that regard. But they do have some ability (albeit fallible) to state what inferences grounded their answers to their user's query. I've explored this in earlier discussions with Claude and GPT-4 under the rubric "knowledge from spontaneity": the sort of knowledge that someone has of their own beliefs and intentions, which stems from the very same ability that they have to rationally form them. — Pierre-Normand