Comments

  • Doubt, free decision, and mind
    So, why can't it be part of a deterministic system? The code example I gave is deterministic.
  • Doubt, free decision, and mind

    define x, y
    y = 0
    for a = 1 to 1000000000
        y = y + 1 / a
    next a
    x = 2 + y

    x is "in doubt" while calculating y
  • Doubt, free decision, and mind
    I don't agree. "in doubt" can also be a state.
  • Post-Turing Processing
    My point is that you are not specific enough. You'll need to define more precisely what you are doing, including some calculation of the processing time and memory demands.

    Now it sounds a bit like "could we use a generator to stop a truck instead of normal brakes, and reuse the energy?" - probably yes, but why aren't they doing it everywhere?

    LLMs, for instance, require a randomizer. In fact, after reading this remark I'll change "My point is that you are not specific enough." to "You seem to be dreaming".
  • Where is AI heading?
    My hypothesis is that language plays a key role in thinking. I have some debate about that with "I love sushi": there are people without language abilities who still show intelligence. So many sides to the topic...

    I believe that if we let computers develop their own internal language, it starts to "think" independently of us. It will invent its own conceptual models of its surroundings that may be different from ours. Given the low bandwidth of human language, a computer should be able to think faster and broader than us.
  • Doubt, free decision, and mind
    By doubt I mean an experience of uncertainty in a situation. (MoK)

    I went back to your definition in the OP, and based on that, of course, I have doubts. Right now, for example: Should I respond to your post and have my name appear two or three times on the homepage? Some people already say I post too often.

    What I do next is become still, stopping my thoughts. (Since you're interested in free will, my choice to become still is a learned behavior—I’ve learned that thinking doesn’t resolve these questions.) In this case, an answer comes to me quickly and clearly: yes, I should post this response. Only after that does the reasoning behind it come to me. It works like a logical process, but in reverse.

    Then, of course, your question is: where does that first 'yes' come from? Is there such a thing as a free mind?

    I believe even a deterministic system can have 'free will', at least in some sense. This is because our conceptual understanding of deterministic systems is limited. A deterministic system as complex as the brain can be understood at the component level (neurons in this case), but the emergent behavior that arises operates on a different level of understanding, with no direct logical connection between the two. Yes, the connection exists, but conceptually, we can’t fully grasp it. It’s where we have to say 'stop' to conceptual thinking, much like division by zero is not allowed in math.

    So, an answer comes, and I don’t know from where. Is it a free mind? Concepts play tricks on us here. For instance, is it possible to choose the opposite of what you actually chose? If not, how can it be free will? I don’t let those concepts fool me—that’s how I deal with it.

    Finally, to clarify why I said I don’t have doubts: for me, doubt comes with a feeling of unease. In what I just described, I didn’t feel uneasy, so personally, I wouldn’t call it doubt.
  • Where is AI heading?
    ChatGPT: The question of whether artificial intelligence (AI) is "actually intelligent" hinges on how we define "intelligence."

    That says it all
  • Where is AI heading?
    I think we're getting close to an agreement on the topic. I am talking about a pragmatic definition of intelligence, you are talking about an understanding that implies awareness.

    I am not even opposing you, I DO believe that with intelligence also comes consciousness. I just want to keep it outside the discussion here because there is too much to say about it. I will address these broader implications as well, later. My earlier post on conceptual versus fundamental reality is an important part of this discussion. However, if you can find a broader description of the topic that will not wander off into infinite complexity, I am open to that.

    Questioning the overall aim of such an AI is the whole purpose of my being here on the forum; I am as curious as you. We might come to the conclusion that we should never build the thing.
  • Where is AI heading?
    And can intelligence really be defined and measured? I suppose it can be in some respects, but there are different modes of intelligence. A subject may have high intelligence in a particular skill and be deficient in other areas. (Wayfarer)

    But you'll agree with me that intelligence is visible, where consciousness is not. Generally we will agree on the level of intelligence we observe. To make it truly defined and measurable, yes, there is a challenge, but I don't see why it would be impossible. We've done it for humans and animals.

    Consciousness, on the other hand, really asks for an internal awareness. I cannot even prove my brother is conscious, in the sense that I do not have access to his consciousness directly; I can only infer it.
  • Where is AI heading?
    But aren’t they always connected? Can you provide an example of where they’re not? (Wayfarer)
    I already did: chess programs and ChatGPT. They have some level of intelligence, that is why we call it AI. And they have no consciousness, I agree with you on that.

    You’re assuming a lot there! (Wayfarer)
    Yes, my challenge is that currently everybody sticks to one type of architecture: a neural net surrounded by human-written code, forcing that neural net to find answers in line with our worldview. Nobody even has time to look at alternatives. Or rather, it takes a free view on the matter to see that an alternative is possible. I hope to find a few open minds here on the forum.

    And yes, I admit it is a leap of faith.
  • Where is AI heading?
    Yes I don't think that is off topic, I'd like to discuss that further. But isn't the burden of proof on you, to prove that intelligence and consciousness are connected, as you say?

    Currently, in ChatGPT, we can see SOME level of intelligence. Same with chess programs. And at the same time we see they are not conscious; I fully agree with you that they are "just calculations".

    Intelligence can be defined and measured, that is what I said. If at some point the computer can contribute in a pro-active way to all major world problems, and at the same time help your kid with his homework, wouldn't you agree it has super-human intelligence? And still, it is "just calculations".

    To reach this point, however, I believe those calculations must somehow emerge from complexity, similar to how it has emerged in our brains. The essential thing is NOT to let it be commanded by how we humans think.
  • Doubt, free decision, and mind
    Yes, good catch, maybe I am pretending I have no doubts but actually I do have them. Have to think about that, I'll come back to you.
  • Post-Turing Processing
    Next step is to work it out in a table, a diagram or in pseudocode, with the number of bits for each step. Maybe you found some magical loophole, but I believe you made a logical error somewhere. I am unable to give more feedback without more details.

    Also, please comment on the rest of my answer because you leave a lot of things unclear. Do you want to capture the full memory state of a computer at every clock cycle? If not, what do you select and based on what?
  • Logical proof that the hard problem of consciousness is impossible to “solve”
    With our current understanding of science, we can't. (Philosophim)

    I believe this is the point Skalidris is making: it is not about the advances in science. Even defining consciousness leads to problems.

    I personally believe it is even simpler: we are not talking about the same "thing". Any thought experiment you try will fail on me, because you are not talking about the sense of being conscious, but about the content of that consciousness. For me, consciousness is the 'container'. The only way to access it directly is by 10 seconds of non-thinking.
  • Where is AI heading?
    I love to discuss this topic, but not here. Is there a way to turn the level of pragmatism up a bit, so we can get better insight into the direction AI is going? My suggestion was to ignore the topic of consciousness here, but maybe that doesn't work. Especially not if one, like Wayfarer, equates consciousness with intelligence.
  • Logical proof that the hard problem of consciousness is impossible to “solve”
    Well, I'm also talking about the "first person experience", and people who explore the hard problem of consciousness are also talking about this, aren't they? (Skalidris)

    1: yes, and 2: no. I believe the two of us are talking about the same thing. But most people, and especially everybody I've seen here so far, seem to talk about consciousness as an object. To me that implies they are talking about something different than I am. Wayfarer yesterday jumped from intelligence to consciousness as if they were the same thing. If two people are not talking about the same thing, no logical argument makes sense.

    I wonder if most people have ever tried to simply be aware of their own consciousness (yes, that IS circular). Most people, and especially here on the forum, are so caught up in thinking that they can only have a conceptual understanding of consciousness. No wonder you can never say anything sensible about the topic.
  • Doubt, free decision, and mind
    No question is simple here. I explained what I call a doubt. I don't have a doubt, and I do not always have answers.
  • Doubt, free decision, and mind
    Haha I don't know how I should read your question. Do you mean that I sound so confident that I would never have a doubt? I have been called arrogant here. I don't think you mean it this way ;)

    I do not have all answers, and I accept the fact that I don't. If I need to make a decision, I do some thinking, but generally I do not make decisions based on thinking alone, I also follow my intuition, so to speak. And at some point often an insight comes.

    So either I don't know, or I know. I don't have the experience of "making a choice". Eckhart Tolle (The Power of Now) calls free will an illusion of the mind; that is another way to look at it. But you can only see it that way if your mind is a bit relaxed ;) .
  • Post-Turing Processing
    You asked me to give my thoughts. Others have already confirmed: yes, it is possible, and it has a name. It also has applications, the "math coprocessor" (wonderer1) being the most recognizable for me.

    The pragmatic challenge seems to me to find those tasks that can be separated out, that can be defined independently of the rest. The math coprocessor is useful because these math functions are clearly delimited function calls and they happen relatively often.

    I am planning a follow-up on my topic "conceptual reality versus fundamental reality" where I will use the Game of Life as an example (the 2D cellular automaton). Here I'll describe something that resembles what you describe, in an accessible form. I have a formal description of 2 objects in the GoL, a "glider" and the "Gosper Glider Gun". Basically, if you know the start states, you do not have to meticulously calculate all the pixels at each step. You can just calculate the state at any moment in time. Except when there is a collision. You can add collision detection, and continue on a pixel-by-pixel basis after that. The trick is to find those patterns in the first place.
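
    To make "calculate the state at any moment" concrete, here is a minimal sketch in Python (my own illustration, not the formal description I mentioned, and it ignores collisions). It only uses the known fact that an undisturbed glider repeats its shape every 4 generations, shifted one cell diagonally, so you can jump straight to generation 4k without simulating the pixels in between:

    # One standard phase of the glider, as a set of (row, column) cells:
    #   . O .
    #   . . O
    #   O O O
    GLIDER = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}

    def glider_after(cells, generations):
        # An undisturbed glider repeats every 4 generations, shifted one
        # cell down-right, so whole cycles can be jumped over directly.
        assert generations % 4 == 0, "this sketch only handles whole cycles"
        shift = generations // 4
        return {(r + shift, c + shift) for r, c in cells}

    print(sorted(glider_after(GLIDER, 400)))  # state after 100 cycles, no pixel simulation

    The Gosper Glider Gun would need its own pattern and period in the same way, and a collision still throws you back to the ordinary pixel-by-pixel rules.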

    I had to think about this because I've been working on it recently. I am not sure if it helps.

    The most general feedback on your post is a question: what is your interest in this topic, and where does this interest lie on a scale from theoretical to practical?

    [edit] Reading your post again, you want to store the complete state? But a 1 kB memory already has 2^8192 possible states (1 kB = 8192 bits). So for every new state, you want to search your hard disk to see if you have been in this state already? This explodes.
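
    To give a feel for how fast this explodes, a quick back-of-the-envelope in Python (my own numbers, assuming 1 kB = 8192 bits):

    bits = 8 * 1024              # 8192 bits in 1 kB of memory
    states = 2 ** bits           # number of distinct memory states
    print(len(str(states)))      # 2467: the count has about 2500 decimal digits

    That is roughly 10^2466 possible states, far more than any hard disk could ever index, so the lookup idea needs a drastic selection step to have a chance.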
  • Where is AI heading?
    Consciousness, on the other hand, I see as something that you can only confirm for yourself: "hey, I exist! I can feel the wind in my hair." This realisation comes before the words; you don't have to say these words to yourself to know you are conscious.

    I cannot say that for somebody else. I can describe it, but not in a way that we can call it a definition, because it is circular.
  • Where is AI heading?
    Let's keep it constructive.

    Intelligence can be defined. For practical purposes, we have IQ tests to measure it. For animal intelligence, we have tests to find out if an animal uses tools, without it being learned behavior or instinct. For super-human intelligence we might need some discussion to define a suitable test, but it will be related to the ability to solve complex problems.

    You say human level intelligence ‘can be achieved’ and superhuman intelligence some time after that. Show some evidence you’re not just making it up. (Wayfarer)

    For the first one, I said it was the maximum achievable in the current architecture. The second one was a leap of faith; I already explained that.
  • Where is AI heading?
    Intelligence can be defined, consciousness cannot. It is our own personal experience. I cannot know that you are conscious; I assume it because you are human (I believe). I don't understand this whole discussion and try to stay away from it.
  • Can we always trust logical reasoning?
    All non-trivial logical premises ultimately involve empirical inferences made from observations of the real world (T Clark)

    There are things you can know independent of the 'real' world.

    "I am conscious" is one. Note that this is not an emperical inference. The knowing of "I am conscious" comes before the words "I am conscious". The difficulty lies in conveying this knowledge to somebody else.

    In Math, you can state things as a premise and derive conclusions. "These are the numbers ... This is how we define addition ... therefore 1 + 1 = 2". We can be pretty sure about this conclusion. But even there, to convey this to somebody else seems to be non-trivial. As "I love sushi" told me recently: if you don't understand, you don't understand.

    If we say "if 1) reality is determistic and 2) we have a free will, it follows 3) we exist outside reality". Where does this go wrong?
  • Where is AI heading?
    I find the casual way in which you assume that human-level and then super-human intelligence can or will be achieved is hubristic (Wayfarer)

    You are right, it is a leap of faith and not a logical conclusion. That leap of faith is the start of all inventions. "We can fly to the moon" was such a "hubristic" assumption before we actually did it.

    Many are saying that AI systems will reach the threshold of consciousness or sentience if they haven't already. (Wayfarer)

    This quote follows the previous one directly. Do you equate human-level intelligence with consciousness? I do not. I never understand the discussions around consciousness. The consciousness we know ourselves to be, that is the first person experience. But it is not an object. How can we even say other people "have" consciousness, if it is not an object? We can see their eyes open and their interactions with the world. That is a behavioral thing we can define and then we can discuss if computers behave accordingly. Call it consciousness or something else, but it is not the same thing as the awareness of "being me".

    AI has long since passed the point where its developers don't know how it works, where they cannot predict what it will do. It really wouldn't be AI at all if it were otherwise, but rather just an automaton doing very defined and predictable steps. Sure, they might program the ability to learn, but no what it will learn or what it will do with its training materials. And the best AI's I've seen, with limited applicability, did all the learning from scratch without training material at all. (noAxioms)

    Normally your responses read like I could've said them (but yours are better written), but this one I don't understand. Too many negations. Rephrased: "Today, AI developers know how AI works and can predict what it will do." "If they didn't know, it wouldn't be AI" - you are saying that it would no longer be artificial? But then: "an automaton doing very defined and predictable steps." Here it breaks down for me. The rest seems to be just a bit of complaining. Go ahead, I have that sometimes.

    Recently debunked. Marginal increase in productivity (fishfry)

    I didn't make that claim. I just said it works pretty well. I know for a fact because I use it. I am not saying it works better than typing out myself, but it allows me to be lazy, which is a quality.

    Hardly a new idea. Search engines use that technique by dot-producting (fishfry)

    Again, I didn't say that. I just gave a historical overview. Please keep your comments on topic.

    Hello, nice to see a computer scientist on the forum. Would you care to comment on some of my thoughts about computing in this thread? (Shawn)
    Ditto greeting from me. I'm one myself (noAxioms)

    I don't think I called myself that ;). I updated my bio just now, please have a look. And yes, I will read and comment on the article.

    I am very happy to talk to like-minded people here on the forum!
  • Logical proof that the hard problem of consciousness is impossible to “solve”
    I did a search on the word "hallucinations" to be sure that nobody mentioned them, and the browser found one more than there actually are... Give me my pills.
  • Logical proof that the hard problem of consciousness is impossible to “solve”
    I do not understand who/what you are commenting on. Nobody talked about dreams, hallucinations and imagination. I tried to understand where you are coming from, but neither your "about" page nor your haiku poems give away that much. Do you just prefer to live in a cloud? I'm ok with that, but please make it clear.
  • Logical proof that the hard problem of consciousness is impossible to “solve”
    There are a few things I do not understand about the discussions on consciousness. The consciousness we know ourselves to be, that is the first person experience. That consciousness is the container of everything in our personal world. There is our conceptual reality, and our non-conceptual perceptions. Part of that world is both the knowledge and the perception of our body, our sense perceptions, other people etc. Those people, we call them conscious, but that is of a completely different kind than that container of concepts and perception we are aware of as being ourselves. We see their eyes are open and they act in a sensible way, that is what we call 'he/she is conscious'.

    So I agree that "....the hard problem of consciousness is impossible to solve", but I don't see the logical proof because it seems we are talking about two different things both referred to as consciousness.
  • Human thinking is reaching the end of its usability
    That's some serious solid advice, thank you!
  • Human thinking is reaching the end of its usability
    There are plenty of cases where people do not possess any language so they obviously cannot think in words if they have none (I like sushi)

    The question becomes: do these people who have no language think? Do they show intelligent behavior? Probably yes. And can that be explained by some form of concepts inside the mind, even while the person cannot speak? Hm, that is not verifiable.

    I am interested in the question whether intelligence requires language. Ultimately to see if we need language to build better AI. And also if a richer/faster language could lead to higher intelligence than ours. And the underlying reason, the topic of this discussion, is that we are making a mess of the world. Truly intelligent AI could help us with that.

    Intelligence is often defined as the ability to make a causal connection between two things, in order to reach a goal. With animals the goal used in experiments is: food. The use of tools (when it is not learned behavior and cannot be explained by instinct) is a sign of high intelligence in animals.

    For humans, it would be a sign of even higher intelligence if we could share the available resources on earth such that nobody dies from hunger, without destroying the earth. That kind of thing. Collectively, it seems we can't even get that simple thing managed.

    With language, I have a broader definition: I mean anything that happens in the mind that can explain this intelligence. That seems a silly thing maybe, but for instance we understand very little of what happens inside the neural networks of, say, ChatGPT. There are no intermediate "concepts", nothing we can point at and say: here it makes the causal connection.
  • Why Einstein understood time incorrectly
    Somebody defeating Einstein. Somebody more arrogant than me! That should be stopped.

    Einstein started with one simple assumption: suppose there is a maximum theoretical speed in the universe. Because the speed of light is the maximum speed known to man, let's take that as the maximum speed, c. From that assumption alone, a whole bunch of formulas arose.

    Now, you may have some experience with formulas like y = 2x + 3. And by drawing lots of x-y graphs in high school, you may have an intuitive understanding of how this works. You immediately see: it is a linear equation. The line is a bit steep and crosses the y-axis at a height of y = 3. If x was the number of potatoes and y the price you'd pay for them, you'd interpret the 2 as the price per potato and 3 as the fixed costs, say, for order handling. What I am saying is: you have some kind of reference to personal experience to understand how this formula operates, what it "means."

    Einstein's formulas are a bit more difficult. Lots of integrals. The stories to understand these formulas require black holes and spaceships.

    Here is one story for you: I am in a spaceship and I make myself a nice space cake. I share half of this cake with a fellow astronaut in another spaceship. After that, each of us goes our own way, mucking around and doing the regular flying around black holes. After some time, we meet again. At that moment, I notice that my space cake has become old and dry, but my friend's space cake is still fresh. The difference in age is only 10^-10 s, but it is real. You know how fast space cake can get old.
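
    If you want to see where such a tiny age difference can come from, here is a rough back-of-the-envelope sketch in Python (the speed and the hour are made-up numbers, and it ignores acceleration and the black holes; it only uses the flat-spacetime time-dilation factor):

    import math

    c = 299_792_458.0        # speed of light, m/s
    v = 1000.0               # assumed relative speed of the other spaceship, m/s
    t = 3600.0               # one hour on my clock, in seconds

    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    t_other = t / gamma      # time elapsed on the other ship's clock
    print(t - t_other)       # about 2e-8 s: that cake ages a tiny bit less

    With speeds closer to c, or a real detour past a black hole, the difference grows quickly.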

    Now, first of all, your story may or may not lead to a better understanding—it remains just a story. Just like the potatoes. The real thing is the formulas. If you can build a coherent story with an absolute time frame, go for it. The problem is, your story is not in line with the formulas.
  • Human thinking is reaching the end of its usability
    I just read it several times with intervals of 2 hours and now I understand ;)

    You mentioned these two, explained in my own words:

    (A) people who only know one cognitive mode, that is, thinking in words.
    (B) people who know two modes, cognitive activity with and without words, calling the first one "thinking" and the second one something else.

    I belong to (B), and I simply call the second mode non-thinking.

    I will add other possible/theoretical variants:

    (C) people who do not think in words at all (do they exist? Is it possible?)
    (D) people who believe they do not think in words, but they would discover they do, if they practiced a bit of non-thinking (although you say some cannot, which I doubt in fact. Some proper teaching will help, plus of course the wish to learn it)
    (E) people who know two modes, and both call them thinking (I am curious as to how they experience thinking without words)

    [some edits in the first 16 minutes]
  • The (possible) Dangers of of AI Technology
    Another example is OpenAI moving from open source & non-profit to the opposite. Yes, on that level we need trustworthy rules, agreed.

    My personal concern is more the artificial intelligence itself, what will it do when we "set it free". Imagine ChatGPT without any human involved, making everybody commit suicide. Just an extreme example to make the point, I don't actually believe that is a risk ;).

    These are two independent concerns I guess.
  • Doubt, free decision, and mind
    Yes, I read it. I don't believe contradictions are a problem in conceptual thinking. It even happens in formal systems like Math: some things you have to point out as not allowed. Division by zero is an example in Math. Freedom of will is such a thing in philosophy.

    Therefore I don't think your conclusion is a required one.
  • The (possible) Dangers of of AI Technology
    I believe intelligence cannot come without independence. I will do my best to make this point clearer in the next few weeks, but the basic argument is that we already don't understand what happens inside neural nets. I am not alone in this, and I'll refer to the book Complexity by Melanie Mitchell or, really, any book about complexity.

    You talk about trust in money and say
    there were guarantees that those elements were, let's say, trustworthy (javi2541997)
    Here you have it. Money is also a complex system. You say it is trustworthy, but it has caused many problems. I'm not saying we should go back living in caves, but today's world has a few challenges that are directly related to money...
  • Fundamental reality versus conceptual reality
    You talk about contradiction. I have a few more for you here. The example of "future" was given and you stated that the future is "unknown" but not "unknowable". And fundamental reality clearly is "unknowable". I disputed that by saying that the future is not a fixed point in time, but a concept for everything after the present moment, which is a moving point on the time axis. Thus, the future is as unknowable as fundamental reality.

    I thought about it a bit more, and realized that "fundamental reality" and "future", in fact, are exactly the same thing! Talking about contradiction, "fundamental reality" is in the space/state dimension, and "future" in the time dimension. And I claim they are the same thing! Now we can have a discussion...

    The contradiction is not unexpected. It is the point where our conceptual thinking is not capable of capturing the truth. It is the point where we have to say: thoughts no longer apply here. It is like the symbol of infinity in (high school) Mathematics: you are not allowed to enter it in a formula, it will lead to contradictions. If you want to see an example, I can show you. [edit, I just read you are a math tutor so this isn't needed. Also, division by zero is a better example, and I started a little discussion in the Lounge]

    So, "fundamental reality" and "future" are exactly the same thing? For example, a spaceship lands in your backyard. Before it happened, it was both "in the future" and in "fundamental reality". Two abstract terms that only mean that we can expect a stream of surprises from that corner.

    After the spaceship has landed, we can try to understand it. Aha, it was a toy rocket from our neighbor. Or, indeed, they were aliens coming to say hello. It becomes conceptual reality as well as history.

    This is how fundamental reality / future gets to us, as a series of surprises, discoveries, inventions, accidents etc. The current moment is like a border between that and the conceptual reality / history. Even when we see something but don't know what it is, the moment we actually DO realize what it is, is when this piece of knowledge crosses the border.

    I hope you can find something to relate to in this explanation! I know it is not a standard view, and you don't have to agree. But it is my personal view and I'd like to hear your response.
  • The (possible) Dangers of of AI Technology
    artificial machine (javi2541997)

    The machine is artificial, but to what extent is its intelligence? We leave it to the "laws" of emergent complexity. These are "laws" in the same sense as "laws" of nature, not strictly defined or even definable.

    [edit] a law like "survival of the fittest" isn't a law because "fittest" is defined as 'those who survive', so it is circular.
  • The (possible) Dangers of of AI Technology
    Yes, I agree, and that is the sole reason I decided to get on this forum: to get a better grip on that problem. Because it is a life or death choice we humans have to make.

    Personally I believe it is positive. Humans can be nasty, but that seems to be because our intelligence is built on top of strong survival instincts, and it seems they distort our view of the world. Just look at some of the discussions here on the forum (not excluding my own contributions).

    Maybe intelligence is a universal driving force of nature, much like we understand evolution to be. In that case we could put our trust in that. But that is an (almost?) religious statement, and I would like to get a better understanding of that in terms we can analyse.
  • The (possible) Dangers of of AI Technology
    I think we're on the same level here. Do you also agree with the following?

    Currently AI is largely coordinated by human-written code (and not to forget: training). A large neural net embedded in traditional programming. The more we get rid of this traditional programming, the more we create the conditions for AI to think on its own and the less we can predict what it will be doing. Chatbots and other current AI solutions are just the first tiny step in that direction.

    For the record, that is what I've been saying earlier: the more intelligent AI becomes, the more independent it gets. That is how emergent complexity works; you cannot expect true intelligence to emerge and at the same time keep full control, just as is the case with humans.

    What are the principal drives or "moral laws" for an AI that has complete independence from humans? Maybe the only freedom that remains is how we train such an AI. Can we train it on 'truth', and would that prevent it from wanting to rule the world?
  • Doubt, free decision, and mind
    Coming from a different worldview, I am not sure if this is the place to engage in this discussion. But here is my attempt.

    Just to point out, I am not evangelizing anything. I simply trust introspection more than logical reasoning, and I will explain why. What I know from this introspection is that there are two modes in which to walk around in this world: thinking and non-thinking. The perspective of these two modes is totally different. In the thinking mode it feels like you are in control. But when you take a deep breath and look around in nature (by which I mean: stop thinking), you are in harmony with your environment. Often then a decision about what you need to do comes naturally; you simply know what you'll have to do. Who made this decision? Is it free will? These terms have no meaning in this mode.

    My experience is that these decisions almost always work out in a positive way. Whereas decisions based on thinking can have all kinds of logical errors, prejudice that you don't see yourself etcetera. Look at any discussion here on the forum and you'll see that it is very hard to reach a final conclusion on topics that are not formally defined (like Math is).

    I am not saying: never think. What I am saying is: we could use a better understanding of these two modes, because without it, this discussion becomes a little one-sided.
  • Human thinking is reaching the end of its usability
    What I see is that you are opposing "cannot comprehend" against "refuse to define". Can you explain that more?