• Carlo Roosen
    200
    AI is becoming a bigger part of our lives, and we all want to understand its consequences. However, it's tough to predict where it's headed. Here, I'll offer my perspective. As a software developer with reasonable experience in AI, I can share my thoughts, though I don't claim to be an expert—so feel free to correct me where I'm wrong. I'll start with a short technical introduction.

    One major breakthrough in AI was the invention of 'Transformers,' introduced in the 2017 paper Attention Is All You Need by eight Google researchers. This paper builds on the attention mechanism proposed by Bahdanau et al. in 2014.

    Transformers allow AI to translate input—any bit of information—into a vector, like an arrow pointing in a specific direction in a multi-dimensional space. Each possible next piece of information also has a vector, and the dot product between these two vectors determines the likelihood of using this piece of information as output.

    The dot product is a straightforward calculation, where the result increases as the vectors align, meaning the more they point in the same direction, the more likely they are to combine. This makes intuitive sense: when two things head in the same direction, they likely belong together.
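The scoring idea in the two paragraphs above can be sketched in a few lines. This is a toy illustration with made-up 3-dimensional vectors; real transformers use learned embeddings with hundreds or thousands of dimensions and scale the dot products before applying softmax:

```python
import math

def dot(u, v):
    # Dot product: large when the vectors point the same way,
    # zero when they are orthogonal, negative when they oppose.
    return sum(a * b for a, b in zip(u, v))

def softmax(scores):
    # Turn raw scores into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# A query vector for the current input, plus candidate vectors for
# three possible next tokens (made-up "embeddings", purely illustrative).
query = [1.0, 0.5, 0.0]
candidates = {
    "cat":   [0.9, 0.6, 0.1],    # points roughly the same way as query
    "dog":   [0.5, 0.5, 0.0],
    "stone": [-1.0, 0.0, 0.2],   # points away from query
}

scores = {word: dot(query, vec) for word, vec in candidates.items()}
probs = dict(zip(scores, softmax(list(scores.values()))))
# "cat" ends up with the highest probability: its vector aligns best
# with the query, so its dot product is largest.
```

The intuition from the text is visible directly: the better a candidate vector aligns with the query, the larger the dot product, and the larger its share of the probability after softmax.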

There have been other inventions, but OpenAI’s breakthrough with ChatGPT was applying all the known techniques at massive scale. Enormous data centers and the internet as a knowledge base were key to making this take off.

    Currently, AI developers are applying these technologies in various fields. Examples include manufacturing robots, automated financial investing, virtual travel booking agents, social media monitoring, and marketing chatbots—the list goes on.

Software developers are also making their own lives easier. It's no secret you can ask a chatbot to generate code for a website—and it works pretty well. For professional use, tools now create entire software projects. I’ve seen one that has two “agents”: the first is the project manager; it generates a task list based on your needs. The second agent is given those tasks, step by step, to do the actual programming. As a developer, you receive instructions as well, to test and give feedback at each stage. In the end, you have a fully working piece of software without writing a single line of code yourself.
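The control flow of that two-agent workflow can be sketched schematically. Everything below is hypothetical: `ask_model` is a stub standing in for a real LLM API call, and its canned replies are hard-coded purely so the sketch runs:

```python
def ask_model(role, prompt):
    # Hypothetical stand-in for a real LLM call; a real system would
    # send `prompt` to an API here. The canned replies below are
    # hard-coded only so this sketch is runnable.
    if role == "manager":
        return ["create project skeleton", "implement feature", "write tests"]
    return f"<code for: {prompt}>"

def build_project(requirements):
    # Agent 1, the "project manager", turns the requirements
    # into a task list.
    tasks = ask_model("manager", requirements)
    results = []
    for task in tasks:
        # Agent 2, the "programmer", works through the tasks one by
        # one; in the workflow described above, the human developer
        # tests and gives feedback at each of these steps.
        code = ask_model("programmer", task)
        results.append((task, code))
    return results

steps = build_project("a small website")
```

The point of the structure is the division of labor: one model call plans, a second executes each step, and the human stays in the loop between steps.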

    Similar advancements are happening in other fields, often in even more complex implementations. But they all share one common feature: each neural network performs a single, well-defined task.

What do I mean by that? Neural networks always need to be wrapped in conventional logic to be useful. For example, self-driving cars use neural networks for vision, but their decision-making still relies on conventional programming (as far as I am aware). In chess programs, neural networks evaluate board positions, but they rely on traditional minimax tree search to plan their moves.

These neural networks, in reality, are just functions—input goes in, output comes out. Information flows in one direction. A neural network can never decide, “I need to think about this a little longer”. A handful of innovations, like these transformers, have made those networks almost "magical," but their limitations are clear to see.
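A minimal sketch of that point, with made-up weights: the network below is just a fixed arithmetic expression, so the same input always flows forward through the same steps to the same output.

```python
def relu(x):
    # Standard rectifier nonlinearity.
    return max(0.0, x)

def tiny_network(x1, x2):
    # A fixed two-layer network with made-up weights. Input flows
    # forward through the same arithmetic every time: no loops, no
    # memory, no way to "think about this a little longer".
    h1 = relu(0.5 * x1 - 0.2 * x2 + 0.1)   # hidden unit 1
    h2 = relu(-0.3 * x1 + 0.8 * x2)        # hidden unit 2
    return 1.0 * h1 + 1.0 * h2 - 0.05      # output unit

y = tiny_network(1.0, 2.0)
```

Training changes the numeric weights, but not this shape: however large the network, a forward pass remains a single deterministic function evaluation.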

    All predictions about AI's future are based on refining this model—by adding more rules, improving training materials, and using various tricks to approach human-level intelligence.

    But I believe we’re missing something important.

More than 60 years ago, around the time of the first AI conference in 1956 at Dartmouth College, the idea of neural nets took shape, inspired by the neurons in our brains. It took time, but here we are—it works. Today, AI developers are busy fine-tuning this single concept, leaving little room to think beyond it. Their focus is on improving the current architecture.

    But one day, I’m certain, we’ll realize there's more to learn from the human mind than just neurons. We can gain insights from observing our minds—how we remember, reason, and use language. Essentially, the kinds of discussions we have here on the forum.

Sure, we can make those observations, but replicating human thinking in a computer program seems impossible. In conventional programming, we, the developers, determine how the computer interacts with the world. If it needs to learn, we decide how it learns. We impose our human perspective on the machine. We do that in the code that the neural networks are embedded in, and in training them on human knowledge. For that reason, human-level intelligence is the maximum we can expect to achieve.

    But I see a possibility that does not look too difficult. Our thinking is shaped by language. To me, it’s clear that if computers had their own language, one they could develop for themselves, they would form a worldview independent from ours. Once we implement this language in a neural network, it will be flexible enough to learn things we can’t even imagine.

    Developing such an architecture for AI to create its own internal language isn’t as difficult as it sounds. The real challenge will be training it. What do we train it on? Truth? Figuring that out will be the biggest hurdle.

One way or another, we'll get this done. I'm working on this concept myself. But if I don’t succeed, someone else will. To me, the path is clear: superhuman intelligence is where we're headed.
    What will it look like? That’s impossible to say. Our worldview is limited, not just by what we can perceive, but by the constraints of human language. Take this forum—so many opinions, yet little consensus. Why is it so hard to agree on important topics? Perhaps with a richer language, philosophical questions would have clearer answers. Who can say?
  • Wayfarer
    22.1k
But one day, I’m certain, we’ll realize there's more to learn from the human mind than just neurons. We can gain insights from observing our minds—how we remember, reason, and use language. Essentially, the kinds of discussions we have here on the forum. — Carlo Roosen

That is meta-cognitive awareness - knowing about knowing, understanding through insight how the mind operates. That might seem obvious, but since 20th-century psychology arrived at an understanding of the sub- and unconscious aspects of the mind, it's clear that this is not at all simple.

But overall I find the casual way in which you assume that human-level and then super-human intelligence can or will be achieved is hubristic. Many are saying that AI systems will reach the threshold of consciousness or sentience if they haven't already. ChatGPT and other LLMs obviously display human-like conversational and knowledge management abilities and can sail through the Turing Test. But I agree with those who say they are not conscious beings, and never will be, in principle.

    I asked Google Gemini to summarize why Bernardo Kastrup says that the idea of 'conscious AI' is an illusion:

    Bernardo Kastrup's argument against conscious AI is rooted in his philosophical perspective on consciousness and the nature of reality. He primarily argues that:

    1. Consciousness is fundamental: Kastrup believes that consciousness is a fundamental property of the universe, not a product of complex physical systems like the human brain. This means that AI, which is a product of human design and operates on physical principles, cannot inherently possess consciousness.

    2. AI as a simulation: He views AI as a simulation of consciousness, rather than a genuine manifestation of it. While AI can exhibit intelligent behavior and even mimic certain aspects of human consciousness, it does so based on programmed rules and algorithms, not on subjective experience.
     
    3. The hard problem of consciousness: Kastrup emphasizes the "hard problem" of consciousness, which is the question of how physical processes can give rise to subjective experience. He argues that current scientific understanding cannot adequately explain this phenomenon, and therefore, it's unlikely that AI, which operates on known physical principles, can achieve it.  

    Essentially, Kastrup's position is that while AI can be incredibly sophisticated and capable, it is fundamentally limited by its physical nature and cannot truly possess the subjective experience that we associate with consciousness.

    See also this blog post.

    I don't submit this just as an appeal to authority, but because Kastrup is a well-known critic of the idea of conscious AI, and because he has doctorates in both philosophy and computer science and created and sold an IT company in the early stages of his career. He has summarized and articulated the reasons why he says AI consciousness is not on the horizon from an informed perspective.

It might also be of interest that he's nowadays associated with Federico Faggin, an Italian-American computer scientist whose claim to fame is having built the first commercially produced microprocessor. Faggin's autobiography was published a couple of years ago as Silicon (website here). He also described an epiphany about consciousness that he underwent, which eventually caused him to retire from IT and concentrate full-time on 'consciousness studies', the subject of his later book, Irreducible.

    Noteworthy that both Kastrup and Faggin came to forms of idealist metaphysics because of the realisation that there was an essential quality of consciousness that could never be replicated in silicon.

    There's a lot of philosophical background to this which is often overlooked in the understandable excitement about LLMs. And I've been using ChatGPT every single day since it launched in November 2022, mainly for questions about philosophy and science, but also for all kinds of other things (see this Medium article it helped me draft). So I'm not an AI sceptic in any sense, but I am pretty adamant that AI is not and won't ever be conscious in the sense that living beings are. Which is not to say it isn't a major factor in life and technology going forward.
  • noAxioms
    1.5k
I don't submit this just as an appeal to authority, but because Kastrup is a well-known critic of the idea of conscious AI — Wayfarer
    Not sure if Gemini accurately summarized the argument, but there seems to be an obvious hole.

    1. Consciousness is fundamental: Kastrup believes that consciousness is a fundamental property of the universe, not a product of complex physical systems like the human brain. This means that AI, which is a product of human design and operates on physical principles, cannot inherently possess consciousness. — GoogleGemini
But a human body is nowt but a complex physical system, and if that physical system can interact with this non-physical fundamental property of the universe, then so can some other complex physical system, such as, say, an AI. So the argument seems to be not only probably unsound but invalid, and not just probably. It simply falls flat.

    People have been trying for years to say that humans are special in the universe. This just seems to be another one. Personally, I don't buy into the whole 'other fundamental property' line, but you know that. But its proponents need to be consistent about the assertions.

    There are big names indeed on both sides of this debate, but I tend to react to arguments and not pedigree. That argument wasn't a very good one, and maybe Gemini just didn't convey it properly.

Many are saying that AI systems will reach the threshold of consciousness or sentience if they haven't already. ChatGPT and other LLMs obviously display human-like conversational and knowledge management abilities and can sail through the Turing Test. — Wayfarer
    No chatbot has passed the test, but some dedicated systems specifically designed to pass the test have formally done so. And no, I don't suggest that either a chatbot or whatever it was that passed the test would be considered 'conscious' to even my low standards. It wasn't a test for that. Not sure how such a test would be designed.

    Back to Kastrup: "While AI can exhibit intelligent behavior and even mimic certain aspects of human consciousness, it does so based on programmed rules and algorithms."
    But so are you (if the fundamental property thing is bunk). No, not lines of code, but rules and algorithms nevertheless, which is why either can in principle simulate the other.


All predictions about AI's future are based on refining this model—by adding more rules, improving training materials, and using various tricks to approach human-level intelligence. — Carlo Roosen
As you seem to realize, that only works for a while. Humans cannot surpass squirrel intelligence by using only squirrels as our training material. And no, a human cannot yet pass a squirrel Turing test.

AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do. It really wouldn't be AI at all if it were otherwise, but rather just an automaton doing very defined and predictable steps. Sure, they might program the ability to learn, but not what it will learn or what it will do with its training materials. And the best AIs I've seen, with limited applicability, did all the learning from scratch without training material at all.

Chatbots regurgitate all the nonsense that's online, and so much wrongness is out there. Such a poor education. Why can't it answer physics questions from peer-reviewed physics textbooks, and ditto with other subjects? But no, it gets so much more training data from, say, Facebook and Instagram, such founts of factual correctness (I personally don't have social media accounts except for forums like this one).
  • fishfry
    3.4k
Software developers are also making their own lives easier. It's no secret you can ask a chatbot to generate code for a website—and it works pretty well. — Carlo Roosen

    Recently debunked. Marginal increase in productivity for junior developers, none for seniors. 41% increase in bugs. "Like cutting butter with a chainsaw." It works but then you have to clean up the mess.

    Sorry, GenAI is NOT going to 10x computer programming

    You don't say how long you've been following AI, but the breathless hype has been going since the 1960s. Just a few years ago we were told that radiologists would become obsolete as AI would read x-rays. Hasn't happened. Back in the 1980s it was "expert systems." The idea was to teach computers about the world. Failed. The story of AI is one breathless hype cycle after another, followed by failure.

    The latest technology is somewhat impressive, but even in the past year the progress has tailed off. The LLMs have already eaten all the publicly available text they're ever going to; now they're consuming their own output. When your business model is violating everyone's copyright claims, you have a problem.

The dot product is a straightforward calculation, where the result increases as the vectors align, meaning the more they point in the same direction, the more likely they are to combine. — Carlo Roosen

    Hardly a new idea. Search engines use that technique by dot-producting the word frequency of two articles to see how similar they are.
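That word-frequency comparison amounts to a normalized dot product (cosine similarity) over bag-of-words vectors. A minimal sketch, with made-up example sentences:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    # Represent each text as a word-frequency vector, then compare
    # directions with a normalized dot product (cosine similarity).
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(c * c for c in a.values()))
    norm_b = math.sqrt(sum(c * c for c in b.values()))
    return dot / (norm_a * norm_b)

same_topic = cosine_similarity("the cat sat on the mat", "a cat on a mat")
off_topic = cosine_similarity("the cat sat on the mat", "stock prices fell sharply")
# same_topic comes out well above off_topic, which is 0 here because
# the second pair of texts shares no words at all.
```

Normalizing by the vector lengths is what makes this a similarity of direction rather than of document length, which is the same geometric intuition as in the transformer case.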
  • Wayfarer
    22.1k
But a human body is nowt but a complex physical system, and if that physical system can interact with this non-physical fundamental property of the universe, then so can some other complex physical system such as say an AI. — noAxioms

But it is the inability to describe, explain, or account for how physically describable systems are related to the mind that is described in 'facing up to the problem of consciousness'. Our understanding of 'the physical world' is itself reliant on and conditioned by our conscious experience. We perceive and interpret physical phenomena through an experiential lens, which means that consciousness, in that sense, is prior to any understanding of the physical. Trying to explain consciousness in terms of physical processes ultimately involves using concepts that are themselves products of consciousness. Of course it is true that physicalism on the whole won't recognise that, precisely because it supposes that it has excluded the subject from its reckonings, so as to concentrate on what is really there. But that only works up to a point, and that point is well short of explaining the nature of mind. So it's not true that the human body is merely a 'complex physical system'; that is lumpen materialism.

That argument wasn't a very good one, — noAxioms

    I don't think you demonstrate an understanding of it.
  • Wayfarer
    22.1k
You don't say how long you've been following AI, but the breathless hype has been going since the 1960s. Just a few years ago we were told that radiologists would become obsolete as AI would read x-rays. Hasn't happened. Back in the 1980s it was "expert systems." The idea was to teach computers about the world. Failed. The story of AI is one breathless hype cycle after another, followed by failure. — fishfry

    The story is well-told by now [written 2005 about the 70's] how the cocksure dreams of AI researchers crashed during the subsequent years — crashed above all against the solid rock of common sense. Computers could outstrip any philosopher or mathematician in marching mechanically through a programmed set of logical maneuvers, but this was only because philosophers and mathematicians — and the smallest child — were too smart for their intelligence to be invested in such maneuvers. The same goes for a dog. “It is much easier,” observed AI pioneer Terry Winograd, “to write a program to carry out abstruse formal operations than to capture the common sense of a dog.”

    A dog knows, through its own sort of common sense, that it cannot leap over a house in order to reach its master. It presumably knows this as the directly given meaning of houses and leaps — a meaning it experiences all the way down into its muscles and bones. As for you and me, we know, perhaps without ever having thought about it, that a person cannot be in two places at once. We know (to extract a few examples from the literature of cognitive science) that there is no football stadium on the train to Seattle, that giraffes do not wear hats and underwear, and that a book can aid us in propping up a slide projector but a sirloin steak probably isn’t appropriate.
    Steve Talbott, Logic, DNA and Poetry
  • Janus
    16.1k
I don't think you demonstrate an understanding of it. — Wayfarer

    If you disagree with an argument it follows that you must not understand it. QED
  • Wayfarer
    22.1k
If you disagree with an argument it follows that you must not understand it. QED — Janus

    Perhaps then you can parse this sentence for me:

a human body is nowt but a complex physical system, and if that physical system can interact with this non-physical fundamental property of the universe, — noAxioms

(I take it 'nowt' means 'nothing but'.) So, the objection appears to be that body is wholly physical, and mind a non-physical fundamental property - which is something very close to Cartesian dualism. But Kastrup's argument is not based on such a model. Hence my remark.
  • Shawn
    13.2k


    Hello, nice to see a computer scientist on the forum. Would you care to comment on some of my thoughts about computing in this thread?

    https://thephilosophyforum.com/discussion/15411/post-turing-processing
  • noAxioms
    1.5k
Our understanding of 'the physical world' is itself reliant on and conditioned by our conscious experience. We perceive and interpret physical phenomena through an experiential lens, which means that consciousness, in that sense, is prior to any understanding of the physical. — Wayfarer
    Well, from an epistemological standpoint, yea, the whole hierarchy is turned more or less around. Data acquisition and information processing become fundamental. What you call consciousness is not fundamental since any mechanical device is equally capable of gleaning the workings of the world through such means, and many refuse to call that consciousness. They probably also forbid the term 'understanding' to whatever occurs when the machine figures it all out.

But it is the inability to describe, explain or account for how physically describable systems are related to the mind — Wayfarer
For a long time they couldn't explain how the sun didn't fall out of the sky, except by inventing something fundamental. Inability to explain is a poor excuse to deny that it is something physical, especially when the alternative has empirically verifiable predictions.

    The descriptions and explanations are very much coming out of neurological research, but there are those that will always wave it away as correlation, not actual consciousness.


    OK, I don't understand Kastrup's argument, since all I had was that one summary not even written by him.


We seem to be digressing. Who cares if people consider AI conscious or not? If they can demonstrate higher intelligence, then what name we put to what they do is irrelevant. The trick is to convince the AI that people are conscious, since they clearly don't work the way it does.


Hello, nice to see a computer scientist on the forum — Shawn
Ditto greeting from me. I'm one myself, but my latest installation of Cygwin for some reason lacks a development environment, which stresses me out to no end. It's like I've been stripped of the ability to speak.
  • Wayfarer
    22.1k
What you call consciousness is not fundamental since any mechanical device is equally capable of gleaning the workings of the world through such means, and many refuse to call that consciousness — noAxioms

    If 'gleaning' means 'understanding', then AI systems glean nothing. In fact a computer system knows nothing.

I put that to Gemini too, which responded:

    I agree with your provocative claim that LLMs don't actually know anything. While they can process information and generate text that may seem intelligent, they do not possess true understanding or consciousness.

    Here's why:

    1. Lack of subjective experience: LLMs do not have personal experiences or feelings. They cannot understand the world in the same way that a human does, as they lack subjective consciousness.

    2. Pattern recognition: LLMs are essentially pattern recognition machines. They identify patterns in vast amounts of data and use those patterns to generate text. However, they do not comprehend the meaning behind the information they process.

    3. Manipulation of language: LLMs can manipulate language in impressive ways, but this does not equate to true understanding. They can generate text that is coherent and informative, but they do not have a deep understanding of the concepts they discuss.

    In essence, LLMs are powerful tools that can be used for various purposes, but they should not be mistaken for sentient beings. They are simply machines that can process and generate information based on the data they are trained on.
    — gemini.google.com

OK, I don't understand Kastrup's argument, since all I had was that one summary not even written by him. — noAxioms

I provided it in the context of Carlo Roosen's claim that AI will soon give rise to 'superhuman intelligence', by pointing out the objections of Kastrup and Faggin, both computer scientists and philosophers. It was meant as a suggestion for looking into the philosophical issues concerning AI, not as a complete wrap-up of Kastrup's philosophy. As for Kastrup's books, here's a list if you're interested (and he also has many hours of YouTube media).
  • Carlo Roosen
    200
I find the casual way in which you assume that human-level and then super-human intelligence can or will be achieved is hubristic — Wayfarer

You are right, it is a leap of faith and not a logical conclusion. That leap of faith is the start of all inventions. "We can fly to the moon" was such a "hubristic" assumption before we actually did it.

Many are saying that AI systems will reach the threshold of consciousness or sentience if they haven't already. — Wayfarer

    This quote follows the previous one directly. Do you equate human-level intelligence with consciousness? I do not. I never understand the discussions around consciousness. The consciousness we know ourselves to be, that is the first person experience. But it is not an object. How can we even say other people "have" consciousness, if it is not an object? We can see their eyes open and their interactions with the world. That is a behavioral thing we can define and then we can discuss if computers behave accordingly. Call it consciousness or something else, but it is not the same thing as the awareness of "being me".

AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do. It really wouldn't be AI at all if it were otherwise, but rather just an automaton doing very defined and predictable steps. Sure, they might program the ability to learn, but not what it will learn or what it will do with its training materials. And the best AIs I've seen, with limited applicability, did all the learning from scratch without training material at all. — noAxioms

Normally your responses read like something I could have said (though yours are better written), but this one I don't understand. Too many negations. Rephrased: "Today, AI developers know how AI works and can predict what it will do." "If they wouldn't know, it wouldn't be AI": are you saying that it would no longer be artificial? But then: "an automaton doing very defined and predictable steps." Here it breaks down for me. The rest seems to be just a bit of complaining. Go ahead, I have that sometimes.

Recently debunked. Marginal increase in productivity — fishfry

I didn't make that claim. I just said it works pretty well. I know for a fact because I use it. I am not saying it works better than typing it out myself, but it allows me to be lazy, which is a quality.

Hardly a new idea. Search engines use that technique by dot-producting — fishfry

    Again, I didn't say that. I just gave a historical overview. Please keep your comments on topic.

Hello, nice to see a computer scientist on the forum. Would you care to comment on some of my thoughts about computing in this thread? — Shawn
Ditto greeting from me. I'm one myself — noAxioms

I don't think I called myself that ;). I updated my bio just now, please have a look. And yes, I will read and comment on the article.

    I am very happy to talk to likeminded people here on the forum!
  • Wayfarer
    22.1k
Do you equate human-level intelligence with consciousness? — Carlo Roosen

    Of course human-level intelligence is an aspect of human consciousness. Where else can it be found? What else could it be?

To me, the path is clear: superhuman intelligence is where we're headed. — Carlo Roosen

    But not associated with consciousness?

What do you mean by 'human level intelligence' and 'superhuman intelligence'?
  • Carlo Roosen
    200
    Intelligence can be defined, consciousness not. It is our own personal experience. I cannot know you are conscious, I assume it because you are human (I believe). I don't understand this whole discussion and try to stay away from it.
  • MoK
    370
Consciousness is fundamental: Kastrup believes that consciousness is a fundamental property of the universe, not a product of complex physical systems like the human brain. This means that AI, which is a product of human design and operates on physical principles, cannot inherently possess consciousness. — Wayfarer
Well, that seems contradictory to me. Everything should be conscious if consciousness is a fundamental property of the universe. So a computer that simulates intelligence is also conscious. What its subjective experience is, however, is a subject for discussion. Its subjective experience could be a simple, low-level one that allows the computer to run the code. I highly doubt that its subjective experience is high-level, such as thoughts, though, even if its behavior indicates that it is intelligent.
  • Wayfarer
    22.1k
Intelligence can be defined — Carlo Roosen

    Well, go ahead, define it. You say human level intelligence ‘can be achieved’ and superhuman intelligence some time after that. Show some evidence you’re not just making it up.

Do some research - google Bernardo Kastrup and read or listen. I’m not going to try and explain what he says, but I'm happy to answer any questions it throws up if I’m able.
  • MoK
    370
Do you agree that his statement is contradictory? He stated that consciousness is a fundamental aspect of the universe, yet he claims that a computer is not conscious.

Do some research. — Wayfarer
    On which topic?
  • Wayfarer
    22.1k
Do you agree that his statement is contradictory? He stated that consciousness is a fundamental aspect of the universe, yet he claims that a computer is not conscious. — MoK

    Read up on Bernardo Kastrup. I can’t break it down for you in a forum post. Try this https://besharamagazine.org/science-technology/mind-over-matter/
  • MoK
    370
Read up on Bernardo Kastrup. I can’t break it down for you in a forum post. Try this https://besharamagazine.org/science-technology/mind-over-matter/ — Wayfarer
I read the article. It does not explain what he means by the claim that consciousness is a fundamental aspect of the universe.
  • Carlo Roosen
    200
    Let's keep it constructive.

Intelligence can be defined. For practical purposes, we have IQ tests to measure it. For animal intelligence, we have tests to find out if an animal uses tools, without it being learned behavior or instinct. For superhuman intelligence we might need some discussion to define a suitable test, but it will be related to the ability to solve complex problems.

You say human level intelligence ‘can be achieved’ and superhuman intelligence some time after that. Show some evidence you’re not just making it up. — Wayfarer

For the first, I said it was the maximum achievable in the current architecture. The second was a leap of faith; I already explained that.
  • Carlo Roosen
    200
Consciousness, on the other hand, I see as something that you can only confirm for yourself: "hey, I exist! I can feel the wind in my hair." This realisation comes before the words; you don't have to say these words to yourself to know you are conscious.

I cannot say that for somebody else. I can describe it, but not in a way that we could call a definition, because it is circular.
  • Baden
    16.1k


His critique of materialism isn't hard to agree with. Materialism does posit, ultimately, mathematical abstractions at the bottom of everything and ignores consciousness. But Kastrup's idealism--as expressed in that article--fares no better in that it posits consciousness as fundamental as a solution to ignoring it, but with no real insight into how it interacts with or why it's necessary to interact with matter in order to produce human experience. Or why human experience, which is the origin of the concept of "consciousness", is so special such that this concept turns out to be the most fundamental map of the big picture. So, we're left without the only pieces of the puzzle that actually matter.

And necessarily so. Language is built for us to navigate and create physical, psychological, and social realities, not to express "fundamental reality", which is just that which is posited to be beyond the contexts in which linguistic meaning has practical consequence. So, we can run the discomfiting materialist script or the more comforting idealism script and pretend that they refer to something out there, but functionally the only difference between them is the emotional cadence. Linguistic relevance simply dissipates into nothingness at the boundaries of such fundamental abstraction. Materially, we get symbolic mathematical interactions that don't refer directly to any observable phenomenon (i.e. empty abstractions that create holes for objective physical realities like the Higgs boson to slot into) vs mentally, we get "fundamental consciousness" (an empty abstraction that creates a hole for the subjective mental reality of human experience to slot into).

    Neither script solves any problem nor points to any actionable goal. It just adds another linguistic patina to our already overburdened social consciousness. Take them or leave them, materialism and idealism boil down to the same thing, fruitless stories aimed at elevating their storytellers into something they're not nor ever can be, i.e. vessels of wisdom that point to anything of actual significance beyond scientific progress and lived human experience. These are the true limits of our objective and subjective worlds and an admission of such is necessary for the development of any intellectually honest metaphysical position.
  • Wayfarer
    22.1k
    His critique of materialism isn't hard to agree with. Materialism does posit, ultimately, mathematical abstractions at the bottom of everything and ignores consciousness. But Kastrup's idealism--as expressed in that article--fares no better in that it posits consciousness as fundamental as a solution to ignoring it, but with no real insight into how it interacts with or why it's necessary to interact with matter in order to produce human experience. Or why human experience, which is the origin of the concept of "consciousness", is so special such that this concept turns out to be the most fundamental map of the big picture. So, we're left without the only pieces of the puzzle that actually matter.Baden

    Hey, thanks for that feedback! As has been pointed out already, that abstract that you're reacting to was AI generated, for the purpose of criticism of one of the claims in the OP, namely, that we will soon produce 'human-level intelligence' (or even superhuman, whatever that's supposed to mean.) So it's very cursory. Kastrup does address those points you raise in great detail in his various books, articles and lectures. He has found that his idealist philosophy is convergent in many respects with Schopenhauer's (hence his book on that), and from my reading, he has produced a comprehensive idealist metaphysics, although I won't try and respond to all of your points in a single post. If you're interested, he has a free course on it.

    Take them or leave them, materialism and idealism boil down to the same thing, fruitless stories aimed at elevating their storytellers into something they're not nor ever can be, i.e. vessels of wisdom that point to anything of actual significance beyond scientific progress and lived human experience.Baden

    I have more confidence in philosophy as a vehicle for truth.
  • Baden
    16.1k
    Kastrup does address those points you raise in great detail in his various books, articles and lectures.Wayfarer

    I'll take a further look.

    I have more confidence in philosophy as a vehicle for truth.Wayfarer

    My position is a philosophical one.
  • Baden
    16.1k
    But yes, I don't believe we ever get to big-T "Truth". Some sort of linguistic stability, though, yes. It's more a negative type of progress, one of subtraction.
  • Carlo Roosen
    200
    I love to discuss this topic, but not here. Is there a way to turn the level of pragmatism up a bit, so we can get better insight into the direction AI is going? My suggestion was to ignore the topic of consciousness here, but maybe that doesn't work. Especially not if one, like Wayfarer, equates consciousness with intelligence.
  • Baden
    16.1k


    You're right. It's off-topic here.
  • Wayfarer
    22.1k
    My suggestion was to ignore the topic of consciousness here, but maybe that doesn't work. Especially not if one, like Wayfarer, equates consciousness with intelligence.Carlo Roosen

    You have yet to explain how intelligence can be dissociated from consciousness. You might say that AI does this, but as noted above, AI systems don't actually know anything, so the question must be asked whether you think they are capable of 'human-level intelligence' in light of that lack. So my objection may appear off-topic to you, but maybe that's because you're not seeing the problem. It might be that you have a false picture of what AI is and can do.

    // I was reading yesterday that IBM's Deep Blue, which famously beat Garry Kasparov at chess in 1997, doesn't actually know what 'chess' is, doesn't know what 'a game' is, and doesn't know what 'winning' means. It simply performs calculations so as to derive an outcome.//
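    That "just calculations" point can be made concrete with a toy sketch. The following is a hypothetical minimax search for a trivial take-away game (nothing like Deep Blue's actual engine, which used specialised hardware and elaborate evaluation functions): the program "plays" perfectly by recursively comparing integers, and nowhere in the code is there any concept of a game, an opponent, or winning.

    ```python
    # Toy game: players alternate taking 1 or 2 stones from a pile;
    # whoever takes the last stone wins. The "engine" below just
    # negates and compares numbers -- it has no notion of "winning".
    # (Illustrative sketch only, not Deep Blue's real algorithm.)

    def best_score(stones: int) -> int:
        """+1 if the player to move can force a win, else -1."""
        if stones == 0:
            return -1  # the other player just took the last stone
        # Our score is the best negation of the opponent's score.
        return max(-best_score(stones - take)
                   for take in (1, 2) if take <= stones)

    def best_move(stones: int) -> int:
        """Pick the move (1 or 2) that maximises the computed score."""
        moves = [take for take in (1, 2) if take <= stones]
        return max(moves, key=lambda take: -best_score(stones - take))
    ```

    From 4 stones, `best_move(4)` returns 1, leaving the opponent the losing position of 3; the "insight" is nothing but exhaustive arithmetic.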
  • Carlo Roosen
    200
    Yes, I don't think that is off-topic; I'd like to discuss it further. But isn't the burden of proof on you, to prove that intelligence and consciousness are connected, as you say?

    Currently, in ChatGPT, we can see SOME level of intelligence. The same with chess programs. And at the same time we see they are not conscious; I fully agree with you that they are "just calculations".

    Intelligence can be defined and measured; that is what I said. If at some point the computer can contribute in a pro-active way to all major world problems, and at the same time help your kid with his homework, wouldn't you agree it has super-human intelligence? And still, it is "just calculations".

    To reach this point, however, I believe those calculations must somehow emerge from complexity, similar to how intelligence emerged in our brains. The essential thing is NOT to let it be dictated by how we humans think.
  • Wayfarer
    22.1k
    But isn't the burden of proof on you, to prove that intelligence and consciousness are connected, as you say?Carlo Roosen

    But aren’t they always connected? Can you provide an example of where they’re not?

    And can intelligence really be defined and measured? I suppose it can be in some respects, but there are different modes of intelligence. A subject may have high intelligence in a particular skill and be deficient in other areas.

    So what ‘human level intelligence’ means is still an open question (let alone ‘superhuman’).

    To me, it’s clear that if computers had their own language, one they could develop for themselves, they would form a worldview independent from ours. Once we implement this language in a neural network, it will be flexible enough to learn things we can’t even imagine.Carlo Roosen

    You’re assuming a lot there! Have a look at this dialogue from a few days back.
  • Carlo Roosen
    200
    But aren’t they always connected? Can you provide an example of where they’re not?Wayfarer
    I already did: chess programs and ChatGPT. They have some level of intelligence; that is why we call it AI. And they have no consciousness, I agree with you on that.

    You’re assuming a lot there!Wayfarer
    Yes, my challenge is that currently everybody sticks to one type of architecture: a neural net surrounded by human-written code, forcing that neural net to find answers in line with our worldview. Nobody even has time to look at alternatives. Or rather, it takes a free view on the matter to see that an alternative is possible. I hope to find a few open minds here on the forum.

    And yes, I admit it is a leap of faith.

Welcome to The Philosophy Forum!