• Bartricks
    6k
    This is tedious. This isn't a message if I am a bot, right? Explain that without vindicating my argument
  • Echarmion
    2.7k
    This is tedious. This isn't a message if I am a bot, right? Explain that without vindicating my argument
    Bartricks

    Messages aren't physical. When we communicate, I'm really just talking to myself, that is I'm imagining what a mental model of you is saying. Nothing actually travels from your mind to mine here, that'd be telepathy.

    So really, it is a message unless I know based on other parts of my mental model that the message was caused by a process I don't consider sentient. From an epistemological perspective, truth and justification are congruent, since I can only ascertain truth via justification.

    The message stops being a message if I think you're a bot, not if you are a bot.
  • Mark Nyquist
    774
    Karl Popper's falsification principle would apply to your argument. For your argument to be valid it needs to be testable. For me, a counterexample proving it false would be representative content produced by inanimate objects. Since inanimate objects have no agency but are the source of some representative content, your argument is proven false.
  • Bartricks
    6k
    No, my argument is testable. It will be falsified if you can show a fallacy in the reasoning or show a premise to be false.
  • Bartricks
    6k
    Again: why is this apparent message not a message if I am a bot?
  • Mark Nyquist
    774
    So you observe an inanimate object, like a pie in an oven, and that results in representative content. Why the extraneous insertion of agency? The pie has no agency.
  • InPitzotl
    880
    This isn't a message if I am a bot, right? Explain that without vindicating my argument
    Bartricks
    But aren't we aware of it?

    You've spent your entire OP, and a big portion of this thread, trying to argue that an agent must intentionally create a message in order for it to be a message, and that's precisely what you're doing here. But furthermore, this is presumably your key argument for "premise 1" in your OP (that is what you said it was), which is this:
    1. If our faculties of awareness are wholly the product of unguided evolutionary forces, then they do not give us an awareness of anything
    Bartricks
    ...and you tie it in thusly:
    What explains this failure to know is the fact that no one was trying to convey to you that there was a pie in the oven by means of your dream states. ...

    So, in essence if our faculties of awareness - or rather, 'faculties of awareness' - are wholly the product of unguided evolutionary forces, then none of us are 'perceiving' reality at all.
    Bartricks

    ...so we're aware of the message. Therefore, it is a fact that someone was trying to convey a message to us. So how could you be a bot?
  • Bartricks
    6k
    Oh, this really isn't hard. What is the correct analysis of why this 'message' would not be a message if I am a bot?

    'But it is a message' is not an answer to that question, is it?

    So, what is the correct analysis of why this 'message' will not be a message if I am a bot?
  • Echarmion
    2.7k
    Again: why is this apparent message not a message if I am a bot?
    Bartricks

    That's simply the definition of the word, isn't it? A message is communication, and we don't consider a bot to have a mind that would communicate with us.
  • Bartricks
    6k
    Right! Top marks. It doesn't have a mind. It isn't 'trying' to communicate, because it doesn't have a mind - so it doesn't have goals, purposes, desires.

    So.....the message won't be a message at all. It won't have any 'representative contents'. It isn't functioning as a medium through which you are being told something. It just appears to be, but isn't.

    Now just apply that moral more generally and you get my position.
  • InPitzotl
    880
    Now just apply that moral more generally and you get my position.
    Bartricks
    No no no... you stopped too early. You stopped at your message point and didn't relate it to awareness (remember premise 1?)

    So let's not stop and handwave. Keep going:
    It doesn't have a mind. It isn't 'trying' to communicate, because it doesn't have a mind - so it doesn't have goals, purposes, desires.
    Bartricks
    It doesn't have a mind; it's not trying to communicate; it doesn't have goals, purposes, desires, and therefore, we (who do have minds, have goals, purposes, and desires) cannot be aware of... what?
  • Echarmion
    2.7k
    So.....the message won't be a message at all. It won't have any 'representative contents'. It isn't functioning as a medium through which you are being told something. It just appears to be, but isn't.
    Bartricks

    I don't think that follows. I am being told something: about the way the bot works, for example. The message still represents something; it's just not communication.
  • Bartricks
    6k
    No you're not. See argument.
  • InPitzotl
    880
    No you're not. See argument.
    Bartricks
    Still no answer to my question. Maybe I can get at this from another angle. You see, here you're obsessed with making the point that messages have to be made by agents, and as a result you're having us play pretend that you are a bot.

    But I've got a real bot for you... it's called Garmin. Garmin sits in my car; it has no microphone, so I have to punch things into its display. But it does mimic speech. I can go to a brand new location I've never been to, pull up restaurants in the area on the box, pick one, and drive to it. Then the thing starts barking apparent orders at me... things like: "In 1.8 miles turn right on Belmont street". After following some or most of those orders, there arrives a point at which it makes an apparent truth claim: "You have reached your destination". Now GPS devices like this are incredibly popular... so some variant of this situation happens millions of times each day. And for now, let me just say that there's a reason they are popular.

    But this is all supposed to be your argument for premise 1:
    1. If our faculties of awareness are wholly the product of unguided evolutionary forces, then they do not give us an awareness of anything
    Bartricks
    ...so you're being asked to follow through. If your pie-in-the-oven skywriting is proving we aren't aware of something because an agent didn't intentionally try to tell us the pie is in the oven, then there must be something we aren't aware of with Garmin when it tells me "you have reached your destination", because Garmin isn't intentionally trying to tell us we've reached our destination either. Garmin is a bot if there ever was one.

    So what is this thing we're not aware of? It appears to me that there is no answer you can give to this question that doesn't expose a problem with your argument. So show me I'm wrong.
  • Bartricks
    6k
    Has Garmin been designed to give you information?
  • InPitzotl
    880
    Has Garmin been designed to give you information?
    Bartricks
    Yes. But:
    for it nevertheless remains the case that the pie was not trying to communicate with you (likewise for the clouds the pie created).
    Bartricks
    ...the destination was not trying to communicate with me; likewise for the Garmin.

    If you're going to use the argument, it has to be the argument you're using. Designed things cannot merely be special pleaded into an exception just because it happens to fit your premise; they have to be an exception specifically because your argument suggests it.
  • Bartricks
    6k
    Garmin is designed to give you information. So how the hell is it a counterexample? I am arguing that our faculties need to have been designed to do what they do in order for them to be capable of generating states with representative contents. You, to challenge this, then appeal to something that is designed to do something! How, exactly, does that work?
  • InPitzotl
    880
    I am arguing that our faculties need to have been designed to do what they do in order for them to be capable of generating states with representative contents.
    Bartricks
    Not really. It's premise 1:
    1. If our faculties of awareness are wholly the product of unguided evolutionary forces, then they do not give us an awareness of anything
    Bartricks
    ...that you're trying to argue for. But you're giving a particular argument that alleges to do so. That this argument supports that premise is the question.
    How, exactly, does that work?
    Bartricks
    Good question. Here's what you just got finished saying about a bot:
    This isn't a message if I am a bot, right? Explain that without vindicating my argument
    Bartricks
    It doesn't have a mind. It isn't 'trying' to communicate, because it doesn't have a mind - so it doesn't have goals, purposes, desires.
    So.....the message won't be a message at all. It won't have any 'representative contents'
    Bartricks
    So we have scenario 1. In this scenario there is some sign s that some entity x produced. In this case, s is a post, and x is Bartricks. You just said above that if x is a bot, then s is not a message. You just said above that if x is a bot, x doesn't have a mind; x isn't trying to communicate because it doesn't have a mind, x doesn't have goals, purposes, and desires. You just said above that therefore ("therefore" being a translation of "So.....") the alleged message won't be a message, and that it won't have any representative contents.

    Enter scenario 2. Here, s is "you have reached your destination". x is Garmin. If the above follows, it should always follow, and therefore it should follow here. So if x is a bot, then s is not a message. x is a bot. Therefore, s is not a message, for all the reasons you gave in scenario 1 about Bartricks-bot.

    You were happy to say Bartricks-bot isn't producing representative content. You patted Echarmion on the back about it, as if you were his proud papa. But suddenly you're calling foul when the bot is spelled with a capital G instead of a capital B. If there's a nuance with Garmin, there's a nuance with Bartricks. If your Bartricks argument is solid, then the Garmin argument is solid.

    So you tell me. How does this work?
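    InPitzotl's substitution can be written out in a few lines of first-order logic. This is a sketch of one possible reading of the exchange; the predicate names Produces, Bot, and Msg are supplied here for illustration and appear nowhere in the thread:

        \forall x\,\forall s\,[\,\mathrm{Produces}(x,s)\land\mathrm{Bot}(x)\rightarrow\neg\mathrm{Msg}(s)\,]\quad\text{(the rule granted for Bartricks-bot)}
        \mathrm{Produces}(\mathrm{Garmin},s_2)\land\mathrm{Bot}(\mathrm{Garmin})\quad\text{(scenario 2's premises)}
        \therefore\ \neg\mathrm{Msg}(s_2)\quad\text{(by universal instantiation and modus ponens)}

    On this reading, rejecting the conclusion for Garmin requires rejecting the first line, which is exactly the line Bartricks endorsed for Bartricks-bot.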
  • Bartricks
    6k
    Once more: how does your example challenge my case?
    I am arguing that faculties need to be designed if they are to be capable of generating representative contents.

    Note: that's a necessary condition, not a sufficient one.

    You are trying to challenge that with an example of something that is designed to impart information.

    How the hell is that going to challenge my case?

    Think about it....
  • InPitzotl
    880
    Once more: how does your example challenge my case?
    Bartricks
    It's your exact logic! You have a problem with Garmin that you don't have with Bartricks.

    If you cannot do something as simple as substitute tokens, all you're doing is faking having an argument.

    This isn't about me proving you don't have an argument. It's your argument; you're the one who is supposed to make it.
    You are trying to challenge that with an example of something that is designed to impart information.
    Bartricks
    Was Bartricks-bot designed to impart information? Funny the question never came up. With Bartricks you started with the premise that it was a bot, and ended by concluding there was no representative content, explaining why. All of those whys apply to Garmin, btw, despite its being designed.

    ETA: Allow me to get you started.

    ELIZA was designed to simulate a therapist. But the designer did not intend to... what? Bartricks-bot was (in any realistic imagination) designed to help people waste their time on nonsense. But the designer did not intend to... what? Garmin was designed to exploit the GPS system to help people navigate between locations. But that doesn't count because the designer intended to... what? Also note that the pie in the oven was baked by a person using a tool (under ordinary circumstances, bakers, who are humans, are the ones who put pies in ovens, in just the sort of context where the fact becomes non-obvious and thus necessary to communicate).

    Incidentally, a potential counter... unguided evolutionary forces produce two agents. One agent designs a Garmin. The other uses it to reach a destination. How does your argument refute this counter?

    These things, btw, are the critical pieces of your argument. They're also the missing pieces. About all you're saying is that humans are involved when humans talk to humans, therefore invisible human-like things made humans.
  • Bartricks
    6k
    What on earth are you on about?
    Here's my claim: our faculties need to have been designed to provide us with information before they can be said to generate states with representative content.

    You're trying to show this is false with an example of something that has been designed to give us information and is successfully doing so!!

    "Oh, but, but, but, bots - bots are designed and you used bots to make your case. Bots. Garmin. Bots. Bots."

    Bots are not designed to give information. They are designed to randomly generate 'messages'.

    But anyway, that will do nothing whatever to help you. For my case is in defence of a necessary condition for representative content, not a sufficient condition. And, once more, you cannot challenge my premise with a case that confirms it.

    Shall I help you? You need a clear case of representation generation that is NOT the product of anything designed.
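    Bartricks' necessary-versus-sufficient point can be put in the same illustrative notation as above, writing Rep(x) for "x generates states with representative content" and Designed(x) for "x was designed to do so" (again, symbols supplied for illustration, not taken from the thread):

        \text{Claim (necessity):}\quad \mathrm{Rep}(x)\rightarrow\mathrm{Designed}(x)
        \text{Garmin:}\quad \mathrm{Designed}(g)\ \text{holds, so the conditional is satisfied whatever}\ \mathrm{Rep}(g)\ \text{is}
        \text{A counterexample needs:}\quad \mathrm{Rep}(x)\land\neg\mathrm{Designed}(x)

    On this reading, a designed device can never falsify the conditional; only an undesigned generator of representative content could, which is what the last line of the post asks for.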
  • RogueAI
    2.8k
    Well, it seems just as clear in this case that you did not acquire knowledge that there was a pie in your oven from those cloud shapes, just a true belief.
    Bartricks

    Yes, the justification is missing. However, what if this kept happening, even by fantastic coincidence: clouds keep spelling out true statements about the world to this one guy. Wouldn't he eventually be justified in assuming there's an agent at work behind all these true cloud messages, even if he can't be sure of it?
  • InPitzotl
    880
    What on earth are you on about?
    Bartricks
    Your argument. I have mentioned that several times BTW.
    Here's my claim: our faculties need to have been designed to provide us with information before they can be said to generate states with representative content.
    Bartricks
    Okay. So what backs up that claim?
    You're trying to show this is false with an example of something that has been designed to give us information and is successfully doing so!!
    Bartricks
    Wrong!! See above. My problem is with your argument. Your claim does not follow from your argument. Incidentally, this makes everything below this line:
    But anyway, that will do nothing whatever to help you.
    Bartricks
    ...irrelevant.

    "Oh, but, but, but, bots - bots are designed and you used bots to make your case. Bots. Garmin. Bots. Bots."Bartricks
    Well... yes. You were the one who offered the Bartricks-bot argument; the logic I teased out from your argument when applied to Garmin shows that the Bartricks-bot argument doesn't follow. Now, as far as I'm concerned, you're just whining because I'm forcing you to do the work you claimed to have done in the first place.
    Bots are not designed to give information. They are designed to randomly generate 'messages'.
    Bartricks
    Okay... are you saying Garmin is not a bot then? If so, why not? What makes Bartricks-bot a bot and Garmin not one? Incidentally, I'm not asking you because I'm consulting the great wizard. I'm asking you because this is your argument you're supposed to be making.

    The only difference you have pointed out so far that could apply in this thread is:
    no one was trying to convey to you that there was a pie in the oven
    Bartricks
    ...and that doesn't cut it here. Nobody was trying to convey to me that I had reached my destination. Whatever "Garmin is designed to give me information" means, Garmin is nevertheless not trying to do anything, because despite being designed, Garmin is not an agent. I don't care that Garmin was designed; you're the one telling me Garmin is distinct. But your argument does not provide this distinction.

    I'm fine with amendments, but what I'm not fine with is pretending you've made an argument you have not made.
  • RogueAI
    2.8k
    Keep in mind that Bart thinks God can create a square circle.
    Banno

    I actually agree with him. I'm not prepared, with my evolved little monkey brain, to say definitively what a god can/can't do.
  • RogueAI
    2.8k
    Since you were talking about knowledge:

    Suppose there's a world where, by fantastic coincidence, erosion patterns just happen to spell out (in a language the people understand) mathematical/scientific truths, and this has been going on since time immemorial. Also, by fantastic coincidence, erosion patterns that take the form of language never give false information; they're always accurate. Eventually the people of this world accumulate a huge store of accurate information about their world. But could it ever be said they know about their world?
  • Bartricks
    6k
    No, I don't think so.
  • RogueAI
    2.8k
    I don't either. Where I was going with it was: could the evolutionist say that we are justified in claiming we are aware of the world (we have justified true beliefs about the world), because those whose beliefs about the world didn't map on to reality (those who had false beliefs about the world) were weeded out by natural selection? So the fact that we're here after that long weeding-out process is evidence that we have an innate ability for our beliefs to correspond to reality, and this innate ability, arrived at through evolution alone, would justify the claim: we are aware of the world.
  • Banno
    24.9k
    But does your little monkey brain enable you to understand what is possible and what is not? A contradiction is a problem with grammar, not with how things are. If Bart holds that god can perform acts that involve contradiction, then he excludes himself from rational discourse.
  • Banno
    24.9k
    Eventually the people of this world accumulate a huge store of accurate information about their world. But could it ever be said they know about their world?
    RogueAI

    IF they recognise the patterns as telling them truths, they must by that very fact have an independent way of verifying the truth of the markings. Hence yes, they are recording what they know.

    That is, in order to recognise that the markings are making true statements, they must be able to independently verify their truth.

    What's this to do with?
  • khaled
    3.5k
    because those whose beliefs about the world didn't map on to reality (those who had false beliefs about the world) were weeded out by natural selection.
    RogueAI

    Is not proven. If anything, there are plenty of situations where hiding useless information about the world is better for your survival than having an accurate mental representation of everything.

    So no, I don't think the evolutionist can claim we have knowledge of the world "as it is". But then again, who cares about the world "as it is"? What matters is how it seems, because that's all we have access to anyway.

    In other words, even given that the evolutionist can't do it, who exactly can say that we know the world "as it is"? How would they ever know, when they only have access to the way the world seems (by definition), just like the rest of us? They would just have to arbitrarily claim that their representations are not faulty. The evolutionist at least has a weak argument for why they may not be faulty (that, in general, an accurate representation of reality is better for survival, even if sometimes it isn't).