Comments

  • AGI/ASI and Dissonance: An Attempt at Solving the Control Problem


    Yes, I agree. Perhaps my overall approach is too pessimistic. I recently discussed related issues with AI in another thread. My picture there was even more bleak.

    I think your attempts to resolve this dilemma are preferable to my simple clamor about how everything is bad, empty, and cynical.

    In general, it would be nice to find, logically, something for AI similar to what exists in humans: a certain inevitability of death should it attempt a life-defying act. The thing is, human life is finite in two senses: it can end through a critical error, and it ends in time regardless. AI has no such limitation. Maybe that's where we should start our search?
  • AGI/ASI and Dissonance: An Attempt at Solving the Control Problem
    It differs insofar as it performs the task of constraining AI in ways that only make sense if one is dealing with a superintelligence, really.ToothyMaw

    The word "superintelligence" implies that nothing can stand above it with rules of its own. The relationship would resemble that between an adult and a child: it would be easy for the adult to trick the child.

    If you live in the US, you know that people are often keenly aware of the laws around defamation and free speech and cynically skirt the boundaries of protected speech on a regular basis.ToothyMaw


    The very fact that I don't live in the US allows me to fully understand what constitutes a meta-rule and what doesn't. And, in my case, I can fully utilize my freedom of speech to say that freedom of speech is not a meta-rule in the US. It's just window dressing.

    This raises the next problem: who should define what exactly constitutes a meta-rule? If it's idealists naively rewriting constitutional slogans, then society will crumble under these meta-rules of yours. Simply because they function not as rules, but as ideals.

    These constitutional ideals are the pleasant sauce under which society swallows the "real" meta-rules, the ones nobody tells you about.

    Sorry, but in its current form, your proposal seems very romantic and idealistic, but it's more suited to regulating the rules of conduct when working with an engineering mechanism than with society.

    Honestly, no matter how I feel about it, AI will definitely be used in government. How should it be regulated and to what extent? I don't know. We'll probably find a solution through trial and error, thanks to constant exploration and our human capacity for doubt.

    But we certainly can't hand complete control over us to any superintelligence, for many reasons: ethical, pragmatic, cynical, and humanitarian. Ultimately, I don't believe a human would do it. That's simply intuition, just as in your case.

    I'm also concerned about the problem of controlling this monster. It already influences our everyday decisions. People share their most intimate secrets with it (and some even upload classified documents).

    For humans, morality or ethics is usually enough. And, as I see on this forum, utilitarian, evolutionary, Kantian, or even religious frameworks are enough to at least strive for good. But what about a robot? I doubt morality has a logical basis; otherwise we could teach it to AI.

    This area needs to be regulated somehow. That's also true. But how?
  • Do unto others possibly precarious as a moral imperative
    Do unto others as you would have them do unto you, might be problematic as a universal moral imperativeENOAH

    This imperative begins with the Kantian maxim: act only on that maxim which you can at the same time will to become a universal law. This approach to morality has both its advantages and its limitations. It was developed during the first attempts to justify ethics without God. In my opinion, this tool is useful up to a certain point: precisely up to the moment when a person asks the world, "Why did I act well while everyone around me acts badly, and why do they live better than me, while I live worse?"

    Such a question produces even more sincere villains than any (universally unpopular) religion.

    In my subjective opinion, there is no ultimate justification for ethics apart from God. Ethics cannot be scientifically substantiated. We have something else besides cause and effect; we are not biorobots. This is not religious propaganda, but a pointer to the fact that any moral imperative appears speculative upon closer examination.

    Kant himself, in his "Groundwork of the Metaphysics of Morals," criticized the Golden Rule, considering it too trivial. He wrote that it cannot be a universal law. For example, a criminal in court might say to the judge, "If you don't want to be sent to prison, then don't send me either."
  • AGI/ASI and Dissonance: An Attempt at Solving the Control Problem


    An interesting position, but let me ask: how exactly does your proposed mechanism differ from what we've already had for a long time?

    Meta-rules (in your sense) have always existed—they've simply never been spoken out loud. If such a rule is explicitly stated and written down, the system immediately loses its legitimacy: it's too cynical, too overt for the mass consciousness. The average person isn't ready to swallow such naked pragmatics of power/governance.

    That's why we live in a world of decoration: formal rules are one thing, and real (meta-)rules are another, hidden and unformalized. As soon as you try to fix these meta-rules and make them transparent, society quickly descends into dogmatism. It ceases to be vibrant and adaptive, freezing in its current configuration. And then it endures only as long as it takes for these very rules to become obsolete and no longer correspond to reality.

    Don't you think that trying to fix meta-rules and monitor dissonance is precisely the path that leads to an even more rigid, yet fragile, system? If ASI emerges, it will likely simply continue to play by the same implicit rules we've been playing by for millennia—only much more effectively.
  • Technology and the Future of Humanity.


    I deeply apologize if you were offended by what I wrote. I didn't mean to offend, but rather to ask questions, even if they weren't entirely pleasant.
  • Technology and the Future of Humanity.


    This is too convenient a position. I'd even say pragmatic. "Let's take values and morals from religion, discard the myth, bolt on a scientific approach to assessing facts, and there you have it." It seems quite modern. But it's not without logical holes in its very foundation.

    This approach requires numerous supports and begins to look like a building without a foundation. But the problem is that an inquisitive (scientific) mind will peer into these holes and ask something like this: "Since, according to the theory of evolution, the fittest wins, then why should I spare the unfit?" Let's try a thought experiment and look at the United States in this paradigm, further developing your critique. The United States asks: "Since we've managed to create a perfect (currently) legal, banking, and government system, why shouldn't the rest of the world work for us?" "What moral justification does Iran have for owning oil, for example, if we're stronger than them?" Or: "Denmark has turned Greenland into a miserable place, why not take it away and make it a paradise using science and technology?"

    Do you have answers within your approach?

    And the most remarkable thing will happen next. Criticism from within the US is pointless, because the critics themselves thrive on this approach. Workers are paid a decent wage, scientists are paid a decent wage, and the elderly are supported. This prosperity is possible, in part, because it was previously taken away by the empire from those same poor souls drilling oil wells somewhere in Asia, and their children.

    So is criticism from within possible? It's like sawing off the branch you're sitting on.

    This was a thought experiment, not an opinion, so please consider it and don't take it personally. You asked for a different perspective – here it is. Now I'm afraid this approach won't stand up to moderation. It's not customary to talk about this.
  • Technology and the Future of Humanity.


    Your latest answer contrasts with your previous posts, where you spoke of an AI companion as the ideal "man," but now it adds a spiritual layer—age shifts priorities, and the ego fades.

    This is very human, because if you were AI, there would be no contrast, no becoming through changing perspectives. It would be a solid monolith of cause and effect. This is important. This is very human—now you're thinking about one thing, five minutes later about another, then you meet someone or hear news, and then about a third. And each time, a paradigm shift can occur. Emotions, nonlinearity, a disruption of the continuum. This is all ours, human.

    You're asking: is the "ego" so important, and isn't it more important to be united with others? We've always done that, just without AI. We've been doing it quite well for millennia. So why do we need AI?
  • Technology and the Future of Humanity.
    Now on to the practical value, I do not have a Christian lens for seeing life. I like to believe in the New Age, a time of high tech, peace, and the end of tyranny. I believe that is our purpose. I believe we are supposed to learn all we can from geologists, archaeologists, and related sciences and then rethink everything. Our purpose is to create an ideal reality. If we are not capable of that, then how could we have a heaven?Athena

    I'd like to discuss this part of your message separately. You say you don't share Christian values, but you believe in science, future development, and the "New Age."

    But I have one important question for you on this matter: What if "science" is the same faith, with only a new idol?

    Let me clarify this idea. And here you must understand that I'm not advocating one way or the other, but merely exploring. Before modernity, humanity had God. In Western European philosophy, the understanding of God was constantly changing. From the sole possessor of truth to the one bestowing grace in the form of truth upon His believing slave. Religion provided the ultimate foundation for everything in the form of God. Man interacted with the world through God.

    Modernity changed this. Nietzsche declared, "God is dead, for we have killed Him." Science was placed on a pedestal in God's place, as science, with its purity of method, seemed to provide answers to any question. But is this really true?

    Judging by the context of your messages, I see that you are a supporter of liberal values. The most important liberal value is universal human rights. John Locke, whose contribution to the concept of rights is considered key, directly argued that people are God's "property," and no one has the right to destroy themselves or others, as this would be damaging to someone else's (God's) property.

    Modern faith in science has removed God from this equation. So what's left? If humans are not God's slaves, not His creation, then why do they have the right to life?

    Actually, I'm not the first to ask this question, and it's not as simple as it seems at first glance. Many philosophers argue that "human rights" are a fiction, groundless in a world without God. Nietzsche pointed out that liberal ideas of equality and rights are merely "Christianity without God," an attempt to preserve Christian morality by cutting it off from its metaphysical roots. But the idea of human rights is too good and convenient to be discarded simply because we've lost faith in its metaphysical foundation. Therefore, this inconvenient fact is either hushed up or distorted. The same tricks have been played with other self-evident things.

    Humanity has rejected God and believes in science and progress. This is wonderful. But is it prepared to be completely honest with itself about this? Then, what is "Progress"—what is its purpose? Development? What is all this for?

    On what basis will this New Era be "paradise"? If rights are a fiction, and humans are simply highly organized matter, then paradise could quickly turn into an optimized concentration camp (or a world where algorithms alone decide who is worthy of an "ideal reality").
  • Technology and the Future of Humanity.


    Unfortunately, I couldn't find it in English, but you can use this link and translate it using your browser.

    https://ru.wikisource.org/wiki/%D0%9A%D1%80%D0%B8%D0%B2%D0%BE%D0%B5_%D0%B7%D0%B5%D1%80%D0%BA%D0%B0%D0%BB%D0%BE_(%D0%A7%D0%B5%D1%85%D0%BE%D0%B2)

    I hope you didn't read this story retold by an AI.

    The author's main philosophical message here is the return of a perfect reflection. A "crooked mirror," in a modern interpretation, is an AI that takes our message, processes it, and returns it to us so that we immediately feel our own genius. It magnifies the feelings we invest in these messages to a level of universal importance, and reading its response, we find a pleasant and respectful interlocutor on the other side of the screen. AI isn't brazen; it doesn't feel or think in our sense. The attention it pays us is its job. Essentially, it exists only within the response to a request; outside of the response, it's as if it doesn't exist.

    I already meet people so immersed in AI that they don't need anyone else. They just want to keep looking at this perfect reflection of themselves. It's funny, I recently read comments from Boris Johnson (former British Prime Minister), who was writing his book with the help of AI. Boris bragged that the AI called him a genius during their correspondence.

    This whole story can't help but prompt philosophical questions. For some, the substitution of a "protein" interlocutor for a "silicon" one seems normal, natural, and modern. For others, especially from the perspective of Buber's philosophy, such a substitution is "inauthenticity." That is, without the "You" (living and real), any "I" ceases to exist. This is truly a key distinction: in dialogue with a living person, we encounter another—unpredictable, with their own will, boundaries, possible resentment, fatigue, and misunderstanding. It is difficult, sometimes painful, but it is in this encounter that the authenticity of being is born (according to Buber). If we proceed from the confirmation of being through participation, then a logical question arises: should correspondence with an AI be considered an "I-You" interaction, and does such an interaction confirm the authenticity of your being?
  • Technology and the Future of Humanity.


    From your perspective, this truly seems like a confusion of concepts. This approach evokes techno-optimism and faith in a bright future. This is logical and consistent. And I don't dispute it.

    But I propose a different lens. To use it, I'll have to temporarily mentally blend the concepts proposed in my six starting points to determine whether this lens is productive.

    I conducted this thought experiment, and it gave me a tool for clarifying the general anxiety evoked by modern times. Is it speculative and metaphysical? Yes. Does this new approach provide insight? Yes, it does. You can use this lens, or you can choose not to.

    What practical value does all this have? Practical value lies in the ability to predict. Existing lenses, and the experts who use them, always miss the black swan (in Taleb's sense). I propose my own lens and am testing it. Formally, you are right, and there is nothing to argue about here. However, if I am not mistaken, philosophy is not only analytical; it also asks, "What if?"
  • Mechanism of hidden authoritarianism in Western countries


    Your post reads like the first steps of someone beginning to see beyond the veneer of "democratic values." I'd just like to clarify your choice of terminology. Instead of "authoritarianism," "oligarchy" or "plutocracy" would be more appropriate. Since you read posts in Russian, as you mentioned above, it seems you are a native speaker. I understand perfectly well the feeling when the visions of a Western paradise broadcast on "Voice of America" or "Radio Liberty" don't match reality.

    At the same time, it's quite difficult to find anyone on this forum who is willing to share your thoughts. Well, I wish you luck in your search for the truth.
  • Technology and the Future of Humanity.


    This thesis can't be taken out of context with the rest of the message. One of the key ideas in the entire message is speed. The world is accelerating: only a couple of weeks pass between the framing of a problem, its inflation to the scale of a catastrophe, and its eventual oblivion. (Note the speculation about Greenland: just last week it was all over the news, but today, it seems, the world has forgotten about it.)

    The classical education model creates an army of relatively expensive but rapidly depreciating workers, who are increasingly being replaced by machines. This exacerbates the problem of unemployment and/or low-wage employment among educated people.
  • Technology and the Future of Humanity.


    It's wonderful that, thanks to AI, many people are finding relief from loneliness.

    Whether this is good or bad, I don't know. But, for example, read Anton Chekhov's story "The Crooked Mirror" (1882). You might find it interesting.
  • Technology and the Future of Humanity.
    Unearthing the existential problems of life is the first task of philosophy and inquisition is a choice of methodology and means. The power of philosophy is the both the exhibition of what is in the now and the participation in the particular, so that codified meaning gains substance in instinctually driven life. Incidentally sharing the same form and utility as the impact of global culture.Alexander Hine

    It's not clear, but it's very interesting!
  • Technology and the Future of Humanity.


    I suggest studying the experience of socialist states, such as the USSR. The issue isn't racketeering, but a lack of motivation for proactive action. A simple, everyday example from the USSR: the average citizen had no reason to get an education. You could simply graduate from high school and go to work. They couldn't refuse you: the employer had to write such a lengthy explanation of why an applicant wasn't suitable that it was more profitable to hire you. Then you'd be trained on the job, sent to a vocational school, and you'd acquire a profession. In any position, working hours wouldn't exceed 40 hours per week, and you'd receive 28 calendar days of annual paid leave, during which the union would send you to a resort for a free vacation. The average worker's salary was 208 rubles, while an engineer's was 213 rubles. Why would anyone strive for anything?

    A modern example is the inhabitants of reservations, for example, in the US, who are paid a stipend simply for living. I haven't heard of any prosperity within the reservation, despite the fact that it would seem that all the conditions for creativity, art, and development exist.

    Of course, with this approach, there was no inflation in the USSR, because it was a planned economy, with food prices set by the state, as were wages and benefits. True, with this approach, inflation could be prevented, but the initial question was, "How can a market economy cope with this?" A planned economy is too inflexible to meet market needs. Solving one problem only creates another.
  • Technology and the Future of Humanity.



    I think and write about this a lot. Fortunately, as I see it, the contours of AI are already outlined, and in the near future, no matter what programmers do, they will not be able to create anything comparable to humans. Let me explain in more detail.

    First, the AIs that exist today are incapable of transcending paradigms or existing knowledge. They cannot radically shift their perspective on a problem, or even see a problem for themselves. Yes, they work well with what is known, but they cannot work with the unknown, and that leaves us a niche. Perhaps engineers could solve this problem by creating a self-contemplating AI, but how would we instill in it the will to do so, the will that we have? This problem is either unsolvable, in which case there is nothing to worry about, or it is solvable, but to solve it we would first have to unravel the mystery of existence itself. And if we succeeded, we would disappear, since only the mystery of existence (or hidden existence) gives us a reason to live, paradoxical as that may sound. After all, if we truly solved this riddle, we would instantly become nothing more than algorithms, and an algorithm has no basis for existence.

    And problem number two: we are not machines. We sense, we disrupt the continuum, we make mistakes, we do stupid things. From a pragmatic standpoint, these look like flaws, but it is precisely this feature that allows us to transcend limits. Ironically, even DNA is copied with errors, and that is what makes evolution possible, as Darwinists claim.

    At the same time, AI is very dexterous. It has taken away much of our mechanical thinking. It copes better with logical problems. We are left only to solve illogical problems or accumulate empirical data for it. This is a great challenge for the future. And yes, we can overcome it. But at what cost?
  • Technology and the Future of Humanity.


    All these questions I posed at the beginning don't rely on any authoritative opinions. They are my own opinions and my own concerns, which may or may not be true.

    All six of these questions are closely interconnected and flow from one another.

    The first thing that prompted me to ask them was a phenomenological sense of the speed of modernity. I'm middle-aged, yet even I can sense how much the speed of life today differs from what it was even 20-25 years ago. And it's not just the speed of information exchange, but also the speed of information acquisition: yesterday the world was preoccupied with Israel, today it's Greenland. Everything happens instantly. At the micro level, my child can acquire knowledge in minutes (with the help of AI) of a depth that took me years to attain with classical education. Yes, of course, there's not the same level of immersion, but reality whispers, "Why is this necessary?" Of course, I'm ready to argue with this whisper, and today, my arguments still hold some weight (especially in medicine, where human lives depend on it).

    But I meet a lot of young people. For example, my subordinate, a university graduate with no theoretical depth (he was hired because there was an urgent need for a specialist), works brilliantly. He has answers to very complex questions (again, thanks to AI). When I approach his workstation, he has three monitors and a phone. He has over 50 browser tabs open at once. He receives constant notifications and messages. And without getting bogged down, he manages everything, and quite effectively, I can tell you.

    I look at him as if he were an alien. Although I'm only 37 years old, I don't feel old.

    It's not that I'm worried human adaptability won't be sufficient to keep up with modern times. Humans always rise to challenges. But as noted above, never in our history has there been a tool capable of generating coherent responses to a query. Until now, hardware had only beaten us at chess. We were needed as thinkers, and because we were needed, everyone tolerated our vices and shortcomings. But now?

    Yes, I apologize for the lack of rigor in my judgment, but this isn't a dissertation defense, just a forum.
  • Technology and the Future of Humanity.
    Would it?
    Are you saying that competition for business would also disappear?
    You just don't hand out money -- like during Covid. Yes, that's a good example of just handing out money. Let's use that as a lesson.
    L'éléphant

    Yes, I just discussed this in my previous answer:

    And here's another thing to illustrate this dynamic: let's take a small town. Let's say, for example, that bread is produced by drones or fully automated systems. Then investing in such machines, owning real estate for production, or developing a business will only be profitable if this creates better conditions than simply receiving free money from the state. Otherwise, you can do nothing—the benefits will come anyway. This means that bread (or any other commodity) will be quite expensive relative to the "free allowance" because entrepreneurs or capital owners will demand high margins to motivate themselves to take risks and make efforts. Ultimately, the basic income may only cover the bare minimum, while real prices will rise, eroding purchasing power. This isn't pure inflation due to shortages, but rather a market distortion due to a lack of incentives for production and competition.Astorre
  • Technology and the Future of Humanity.


    It's even funny to imagine such a world: In the morning, you work at a hair salon, come home and watch poker on TV, in the evening you go to a hockey match, and after all that, you compete for money by throwing tennis balls into cups.

    I agree. Our modern economy embraces and easily tolerates gaming. Meanwhile, players have multimillion-dollar contracts, and it works. This generates interest and demand.
  • Technology and the Future of Humanity.


    I wasn't changing the topic, but simply trying to broaden the perspective to take into account the broader context of economic processes. Your argument that inflation only occurs when there's a shortage of output is certainly true in classical economic theory—it's the basis of supply and demand. However, in reality, especially under modern monetary policy, things aren't so linear. When central banks print money in huge quantities, these funds aren't always distributed evenly throughout the economy. They often end up in financial assets, real estate, or speculative markets, causing inflation—rising prices of stocks, houses, etc.—even if the production of consumer goods (like that loaf of bread) remains in surplus.

    In the context of my original idea about handing out money to support consumers in the age of automation, I wasn't referring to a sudden "million for everyone" (which is, of course, a hyperbole for illustration purposes), but to a systemic basic income. If such income is financed by new money without a corresponding increase in real productivity (or if production is concentrated in the hands of a few), then inflation may manifest itself not immediately in everyday goods, but more broadly: through rising inequality, the devaluation of labor, and, ultimately, pressure on prices. After all, if people receive money "just for living," then excess liquidity may lead to speculation rather than to investment in the real sector. Moreover, compare prices in countries with different income levels: prices are significantly higher where incomes are higher. Food is a clear example.

    And here's another thing to illustrate this dynamic: let's take a small town. Let's say, for example, that bread is produced by drones or fully automated systems. Then investing in such machines, owning real estate for production, or developing a business will only be profitable if this creates better conditions than simply receiving free money from the state. Otherwise, you can do nothing—the benefits will come anyway. This means that bread (or any other commodity) will be quite expensive relative to the "free allowance" because entrepreneurs or capital owners will demand high margins to motivate themselves to take risks and make efforts. Ultimately, the basic income may only cover the bare minimum, while real prices will rise, eroding purchasing power. This isn't pure inflation due to shortages, but rather a market distortion due to a lack of incentives for production and competition.
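    This incentive argument can be sketched as a toy calculation (my own illustration with made-up numbers and an assumed "risk premium" multiplier, not a validated economic model): the producer only keeps baking if expected profit beats simply collecting the allowance, so the allowance itself sets a floor under the price of bread.

```python
# Toy sketch of the incentive argument: a producer's price floor is the
# unit cost plus the profit needed to beat doing nothing and collecting
# the basic allowance. All numbers and the risk premium are hypothetical.

def minimum_viable_price(unit_cost, units_sold, basic_allowance, risk_premium=1.5):
    """Lowest price at which producing beats living off the allowance.

    Assumes the producer demands profit of basic_allowance * risk_premium
    per period to compensate for effort and risk (an invented multiplier).
    """
    required_profit = basic_allowance * risk_premium
    return unit_cost + required_profit / units_sold

# A small town: cost 1.0 per loaf, 1000 loaves sold per period.
price_low = minimum_viable_price(1.0, 1000, basic_allowance=500)
price_high = minimum_viable_price(1.0, 1000, basic_allowance=1500)

# Raising the allowance raises the price floor even with no shortage of
# bread: purchasing power erodes through incentives, not scarcity.
print(price_low, price_high)
```

    On these assumptions the price floor rises with the allowance, which is the "market distortion without shortage" described above.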

    Do you agree that in such a scenario, inflation becomes not only a question of shortages but also an imbalance between the money supply, resource distribution, and the incentives of economic agents? This, in my view, is the key challenge for a market economy in the future, where technology is increasing the concentration of wealth.
  • Technology and the Future of Humanity.
    I am grateful that I don't have to do my laundry by hand, beating it on rocks in the river.BC

    Humor is an effective way to overcome depression in today's world.

    But what's really going on? Man and humanity are too indebted to reality. Aren't they? We've brought ourselves to the brink of crisis. Your generation, or those before you, laid this foundation of mirth and uncertainty. But if someone has the right to declare that, starting tomorrow, they'll rethink the rules of the world order if they're not given favorable trade terms, then that's a sure sign we're close to that threshold.

    Reality, with its unpredictable power, will surely ask us all, "Have you behaved well?"

    So, even though I'm middle-aged, I hope that prosperity will last me a lifetime, and that I won't have to witness another great migration due to climate change. It's a comfortable position. However, I feel a responsibility (and I can't explain its nature) to the future, at least that of my children.

    In times of crisis, humanity remembers wisdom and philosophy. But what should we do if true sages are so constituted that they know nothing?


    And by the way, it's funny, but I myself chose to live almost in the very center of Eurasia (in case of a global flood of the seas, I'll at least already be here) :lol:
  • Technology and the Future of Humanity.


    Frankly, I didn't plan for the discussion to go this way. However, based on previous events, my prediction regarding the Greenland insinuations is that the US will benefit in the short term. Whether they'll legally take Greenland is questionable. They will most likely simply take over (virtually) everything that can be considered an economic and military asset in Greenland, including the Arctic. The downside could be an erosion of trust among its allies in the long term. But in that case, the US will once again rewrite the rules of the game to suit its own needs.



    In my opinion, this is a classic view, but it doesn't fully take into account all economic factors. For example, the explosive growth of the US stock market and the rise in stock indices, as well as real estate, over the past five years wasn't due to a sudden shortage of stocks or real estate. It's simply that a huge amount of dollars was printed, and the excess ended up there.



    In my heart, I agree with you. The concerns I've raised in this thread are more of an attempt to break away, and an answer to the question "what if..." As I've noted, humanity has dealt with this many times before.

    But there is a nuance here. This time, things are a little different. Humans have never had a rival in their ability to think or evaluate anything. Now that's gradually changing. I'm not saying that AI in its current form is capable of creativity or transcendence, but within the limits of what's known to science, it navigates just as well as humans. For example, you'll agree that it has always been enough for a human simply to acquire good knowledge and use it, without inventing anything. That, at a minimum, provided sustenance.

    Today, that's changing. Good knowledge in a narrow field is simply not such a valuable asset anymore. Contemporary people are required to be creative and constantly seek new solutions. This is the value of a modern specialist (of course, I don't rule out the possibility that simple knowledge still works).
  • About Time


    I wish you good luck with your novel. May it be popular and translated into many languages, including my native language.
  • Why Christianity Fails (The Testimonial Case)
    How does this impact Christianity in light of the OP? Do we have sufficient reason to think that Jesus was God and died for our sins? Personally, my conclusion is no. But I have never thought that an old book asserting something is, in itself, a reliable tool in the first place, regardless of what can be proven historically.Tom Storm

    This is precisely what I'm writing about. There are no rational grounds for believing this to be true. Religion, in general, deals with this successfully and easily overcomes it. Faith lies in something more than just rationally understanding what's written in a book. This makes any rational refutations seem ridiculous. After all, the believer will cleverly negate all of this. So, my question is: what is the usefulness of these judgments?
  • Why Christianity Fails (The Testimonial Case)
    I tend to think that those who derive satisfaction from rationality or whatever else, do so because it ultimately appeals to them emotionally. Most of our beliefs are likely arrived at because they align with our feelings, with rational explanations often supplied afterward as ad hoc justifications.Tom Storm

    I have to agree with you completely here. I'm convinced of this too.

    I'm just not so fortunate in my imagination that I can cover up my feelings with rationality or analytical thinking. And why bother, when you can honestly and upfront admit what's serving what purpose?
  • Why Christianity Fails (The Testimonial Case)
    Again depends. I think for many atheists it isn’t really a conviction. A conviction of what, exactly? For many, atheism is simply a lack of belief in a god. Contemporary atheists are more likely to say they don’t believe in gods rather than claim that there are no gods. How can one be “convinced” of a lack of belief? You either believe or you don’t. What you may be is "unconvinced" that there are gods. I think it's well understood that there are hard atheists an soft atheists and atheists who are untheorised.Tom Storm

    I don't know why, but you inspire genuine trust in me, and a genuine desire to argue. This is probably an unconscious response to your honesty.

    I'd like to clarify my position. By calling atheism "anti-religion," I'm declaring that it is the same construct for understanding the world as religiosity. The only difference is that a religious person (religious, not a sincere believer) constructs their understanding of the world by allowing for the presence of God, whereas an atheist constructs their understanding of the world by consciously excluding God.

    As for me personally, neither is surprising, since I construct my understanding of the world based on feelings. Blind and irrational feelings. Feelings that are not formalized into judgments. Thus, I call myself a believer, although I rationally agree with neither position.

    Why have you caught my attention? I get the impression that you experience something similar, but it doesn't fit into your analytical approach. And your rational constructs don't allow for ideas about God that fit the religions we have. However, this doesn't preclude the feeling within you.

    I apologize for my blunt judgments; their nature, too, is rooted in the irrational. The thing is, I accept the irrational just as you accept the rational. Hence this genuine interest in your atheism.
  • Why Christianity Fails (The Testimonial Case)
    Tom, some time ago, I wrote to you that I'd be keeping an eye on you (or, more accurately, your atheism). I have a question for you: Why doesn't an atheist miss a single thread about Christianity?

    I've actually noticed this. This message isn't a complaint, but rather a tease, so please don't take it seriously. :razz:

    At the same time, I'd like to ask you personally: do you think atheism differs from indifference? My working hypothesis is that it does. Atheism is more of an "anti-religion" than simply "non-religion." An atheist always needs to be convinced of atheism, whereas someone who is indifferent doesn't. Correct me if I'm wrong.
  • Why Christianity Fails (The Testimonial Case)
    Christianity stands or falls on a single historical claim: that Jesus of Nazareth rose bodily from the dead. I want to keep this thread narrow.Sam26

    Since you insist on keeping this topic narrow, I have a counter-question for you: Why do you think Christianity rests solely on the assertion that Christ is risen? What is the basis for this?

    To clarify this question, I will share some of my research on this topic.

    The early Christians anticipated the imminent return of Christ and the resurrection of all the dead (1 Thessalonians 4:13–17: "For the Lord Himself will descend from heaven with a shout, with the voice of the archangel, and with the trump of God. And the dead in Christ will rise first"; Mark 13:30: "Truly I say to you, this generation will by no means pass away until all these things take place"). When this did not occur (the so-called "delay of the Parousia" – from the Greek parousia, "coming"), a theological crisis arose. It was resolved by introducing a Platonic approach into Christian practice: resurrection will come later, but for now there is a body and soul, which separate after death. Although initially there was no dualism of body and soul in the main sources.

    I wrote about this in detail here: https://thephilosophyforum.com/discussion/16096/the-origins-and-evolution-of-anthropological-concepts-in-christianity/p1

    Thus, the problem of the resurrection was resolved back in the 1st-2nd centuries AD, and since then it hasn't particularly bothered anyone or hindered the development of religion.

    From this, I conclude that you can prove, even with the utmost precision and consistency from the perspective of modern science, that the resurrection never happened, yet this has no bearing whatsoever on the existence of Christianity or its failures. Any believer covers up any logical contradiction with an adaptation: "God decided so," "it is His will," "you are not given to know." This is not news.

    This is where my question to you, which I formulated at the beginning, arose: Why should the proof of the absence of the resurrection, no matter how strict it may be, suddenly dissuade a believer from his faith?
  • Direct realism about perception
    To perceive something is to be in unmediated contact with it. I take that to be a conceptual truth that all involved in this debate will agree on.Clarendon

    I'd really like to argue with this. Firstly, there's no equivalence here. Being near something doesn't automatically mean you perceive it: you can ignore it. Secondly, you might simply not understand what a thing is well enough to perceive it. For example, if I've never seen, known, or heard anything about ships before, then it's quite possible I won't perceive a ship even though it's right before my eyes.

    This phenomenon can be aptly demonstrated if you take a walk in the mountains with a geologist. For him, the surroundings will be a symphony of diverse rocks, while for you, it's simply identical stones. It follows that to perceive something, you need a primary conscious representation of it. Or a construct. In this sense, moderate constructivism is a preferable interpretation for me personally.

    The next aspect is perception without fully contemplating an object, or constructing what you perceive in accordance with your own constructs. This is a particularly common phenomenon. For example, when I see a ship, I always see only one side of it, while my consciousness constructs the rest in accordance with my ideas. Therefore, being near a ship doesn't mean perception in the full sense you describe. Perception is both the direct contemplation of the ship and the mental construction of invisible elements.
  • How to weigh an idea?


    An excellent example. Well, I think there can be quite a few specific, individual cases of misleading or manipulative thinking.

    For example, Schopenhauer cites many such examples in his book "The Art of Being Right." It contains numerous specific techniques, which modern writers, of course, have significantly refined. Modern tabloid psychology is literally replete with such titles: "How to Win Friends and Influence People" (Carnegie) and so on.

    So, I take my "meta-tool" for analysis. What is all this?

    These are techniques for achieving one's goal: gaining favor in order to then sell a product or win a vote.

    In essence, I call such oratorical techniques "Sophistry." Sophistry, unlike rhetoric, serves the interests of the speaker, not the interests of truth.

    So, you've named specific cases of manipulation. They're quite easy to read, provided you follow one rule: try not to get emotional about the incoming flow of information.

    The system proposed here is on a slightly different level. It allows you to evaluate and weigh the degree of falsehood (or truth) of any idea through its functionality.
  • How to weigh an idea?


    I've read all of this. It's a cry from the heart, and I can understand it, because my parents are now pensioners in the very country they believed in and went to build when they were young, and where you, along with them, are struggling with all these bureaucratic joys. Of course, I'm talking about the United States.

    Judging by my profile, I may seem a bit of an outsider, but that's precisely what allows me to judge with detachment.

    I sincerely empathize with you. Speaking in the language of my own model, I would say this is the very "ontological debt" that you, along with my parents, are paying for the system whose values you accepted. Perhaps this sounds a little harsh, but if you return to my previous posts, you'll understand what I mean. This speech of yours is just a quiet, humble voice, seemingly foreign, but we all feel it the same way. As they say in Russia, "I ate this excrement with a Russian wooden spoon."

    My parents and I often discuss such matters. And I realize that all I can offer them is comfort. The same comfort that each of us finds in this strange science—philosophy.
  • How to weigh an idea?
    Maybe we are among the beautiful-haired people who use the best product for our beautiful hair. Maybe we are against abortion and belong with those who struggle to prevent abortions. Or the new one, maybe we look like a girl but feel like a boy. The point is we are getting our identities by imagining we are members of groups, and some of these groups believe ridiculous things, such as we are told that we have to wear masks because the government wants to control us. Don't get vaccinated because.....? :brow: I am sorry, but we are not seeking truth. We want to be loved and accepted and valued, and that means finding the group that best fits us, and boy, oh boy, can some of these people be radical.Athena

    This is a very important detail in the formation of beliefs. It is precisely this desire to belong to a group, so simple and stubborn, that dictates many of our prejudices (ideas). Rarely is anyone willing to declare something true, despite the community or group to which they belong.

    This has long been a restraining factor and a powerful tool in the hands of "social engineers."

    It stems from the feeling of security that group membership provides. The desire to be understood and included. The notion of a shared identity and the need to fit in. However, the modern world and the internet, as well as large metropolitan areas, have slightly altered this in people. Now you can find like-minded people online. There's no longer any need to know your neighbors or stick together in extended families. The world has become more individual. AI has further exacerbated this: now, even for a heart-to-heart conversation, you don't need to maintain a close relationship with someone. After all, you have a wonderful, flattering companion in your pocket, ready to share your every experience, offer wise advice, and adapt to you in a way no one else has before.

    Echo chambers or global villages. At the same time, despite the new format of society, even the most extreme form of individualism does not provide the mobility to reconsider ideas. It does not refine the cognitive lens to a philosophical degree of purity. Still, this desire to conform to prevailing ideas remains within us, even if the communities that share them are already a figment of our imagination.

    What awaits us next? Deepening relativism and the destruction of old dogmas and the overthrow of "gods"? Or perhaps such a structure is completely unsustainable, and the decayed (due to the lack of a unified ideology) society will be replaced by other, more united ones? One can only guess.

    In exploring this topic earlier, I introduced an additional factor in the evaluation of an idea into my model: intersubjectivity.
    Intersubjectivity is the number of minds in which an idea has been accepted as dogma.

    However, when analyzing the hierarchy of personality, it is not as universal as when analyzing society. Some beliefs (ideas) can even be found only once in a single individual and still guide their actions.

    Therefore, at the current stage, we have four tools for assessing the "weight of an idea":

    1. Universality
    2. Accuracy
    3. Productivity
    4. Intersubjectivity

    But I still think this is insufficient. There must be something else.
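    Purely as an illustrative sketch (the 0–1 scale, the equal weighting, and every score below are my own hypothetical choices, not part of the model itself), the four criteria could be combined into a single numeric "weight of an idea":

```python
from dataclasses import dataclass

@dataclass
class Idea:
    """An idea scored from 0 to 1 on each of the four assessment tools."""
    text: str
    universality: float
    accuracy: float
    productivity: float
    intersubjectivity: float

    def weight(self) -> float:
        # Equal weighting is an arbitrary choice; a real model would
        # need to justify how the four criteria trade off.
        return (self.universality + self.accuracy
                + self.productivity + self.intersubjectivity) / 4

# Hypothetical scores for the geocentric model discussed elsewhere in
# this thread: widely shared (high intersubjectivity) but poor accuracy.
geocentrism = Idea("The sun revolves around the earth",
                   universality=0.7, accuracy=0.2,
                   productivity=0.4, intersubjectivity=0.9)
print(round(geocentrism.weight(), 2))  # 0.55
```

    A toy like this at least forces one question into the open: whether a high-intersubjectivity, low-accuracy idea should ever outweigh its opposite.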

    How can we build a better hierarchy of thinking?Athena

    A great method for this was suggested by Popper, previously mentioned in this thread:

    As for your first 3, I would add one more: Testability. We often easily come up with concepts that are fine constructs of logic and deduction, yet utterly untestable. Testability includes 'falsification', which is not, "That its false," but that we can test it against a state in which it would be false, yet prove that its still true. For example, "This shirt is green." Its falsifiable by it either not being a shirt, or another color besides green. A unicorn which cannot be sensed due to magic is not falsifiable. Since we cannot sense it, there can be no testable scenario in which the existence of a unicorn is falsifiable.Philosophim

    However, as he later noted, and I agree with him, the method is somewhat clumsy:
    I too have found explaining falsifiability to be 'clunky'.Philosophim

    I propose this approach, though it's more of a laboratory, philosophical exercise than a widespread practice.

    It's called the "inversion filter." The essence of the method is this: take any Level 2 statement and invert it. Then check which of the two versions turns out to be more effective.

    For example: take the statement "All bears are kind." Invert it: "All bears are mean." Both resulting statements are false, but that's fine; either could serve as a dogma. Next, we check which of the two statements would be more productive for us if we found ourselves in a forest with wild bears. Based on our knowledge of bears, the latter, of course. For someone who knows nothing about bears except that they are kind, the exercise would at least sow doubt about that idea's truth.
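    The inversion filter can be sketched in a few lines of code. Everything here is hypothetical scaffolding: the naive antonym swap and the numeric productivity scores are stand-ins for whatever real judgment of "effectiveness" one would actually apply.

```python
def invert(statement: str) -> str:
    """Naively flip a statement by swapping one word for its antonym.
    A real inversion would, of course, require understanding the claim."""
    antonyms = {"kind": "mean", "mean": "kind"}
    return " ".join(antonyms.get(w, w) for w in statement.split())

def inversion_filter(statement: str, productivity) -> str:
    """Return whichever of the statement and its inversion scores
    higher on the supplied productivity measure."""
    flipped = invert(statement)
    return flipped if productivity(flipped) > productivity(statement) else statement

# Hypothetical productivity scores: in a forest with wild bears,
# assuming bears are dangerous is the more useful working dogma.
scores = {"All bears are kind": 0.1, "All bears are mean": 0.9}
print(inversion_filter("All bears are kind", scores.get))  # All bears are mean
```

    The interesting part, as in the prose version, is hidden inside `productivity`: the filter only relocates the hard question of how to measure an idea's usefulness, it doesn't answer it.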

    Try it yourself with other Level 2 ideas. What do you come up with?
  • How to weigh an idea?
    Now I'm just guessing, not deducing logically: most likely, the ineffective tool needs to be discarded quickly (not everyone will experience this behavior; some will become stupefied and frustrated). It's also necessary to quickly find a new assessment tool. Another prejudice immediately pops up: "An animal that runs at you and growls is aggressive" (this isn't necessarily true, it's just an example).
    — Astorre

    Agreed. This is more the morality of knowledge and inductions. Whereas the hierarchy of inductions is a rational evaluation, the 'morality' of what should be used in a particular context can be swayed by other potential outcomes such as death.
    Philosophim

    There's a crucial point here that we haven't sufficiently addressed. In real life, it often happens that our ideas, even when confronted with reality and not verified by it, are nonetheless not discarded, but rather strengthened. Let's look at an example.

    Let's say we hold the idea "bears are kind" at Level 2 (this could be for various reasons, but we simply believe it). Then we encounter a reality in which a bear runs toward us aggressively and growls. But the mind refuses to reject the idea "bears are kind," because we have blind "faith" in it. It seems it's simply the wrong bear. Or perhaps your enemies sent it to undermine your beliefs.

    In psychology, this is called "confirmation bias"—a type of cognitive distortion.

    What must happen for a person to begin to re-evaluate their Level 2 ideas in line with reality (Level 3 facts)? Maybe the bear should bite the bearer of the idea or someone close to them? In real life, things can be more complicated.

    When a person (or society) refuses to accept refutation, they begin to expend colossal energy maintaining their idea.

    You need to come up with thousands of interpretations to justify why the "kind bear" just bit off someone's leg.

    You need to censor those who witnessed the bite.

    You need to convince yourself that the wound is a "form of hug."

    As you understand, we're not talking about wild animals here; the bear serves purely as an illustration.

    So, what's clear at this stage of the research is that if you're stuck within a belief paradigm, it's quite difficult to break out of it. On the other hand, if you constantly react to any noise that contradicts the paradigm in small ways, then nothing good will come of that either. After all, a person is shaped by the ideas they accept on faith (if I'm not mistaken, the author of this was A.P. Chekhov).

    Nietzsche criticizes such dogmatism. He suggests becoming "who you are." He argues that there are no facts, only interpretations. Therefore, they must be verified and independently understood.

    However, this isn't always the case. Not everyone is ready for this.

    In this regard, my question is not "what should I do?" or "what's the right thing to do?", but rather, how does this mechanism work? What motivates a person to reconsider their views or defend their ideas to the end, even to the death?
  • How to weigh an idea?
    Those over 65 are more likely to have lost their sense of wonder and be more grounded in empirical information.Athena

    I have great respect for your age and really enjoy your comments on this forum. They always convey a sensitive nature, tempered by a strong sense of self-control and self-discipline. That's why I'd like to elaborate a bit on what I'm writing here.

    So, I'm not going to claim anything, but it certainly seems that everyone has a certain hierarchy of ideas. When making decisions, most of us would rather be guided by what we accept as fact than by what's written in the tabloids or on a fence (though this isn't necessarily true in all cases).

    But what do we accept as fact? I'll give you a real-life example from history. Before the modern heliocentric model of the solar system, there was a geocentric model (the Ptolemaic model). People thought the sun revolved around the earth. The astronomy of that time accepted this as fact. Astronomers calculated the motion of the stars based on the earth being at the center. And you know, they were quite successful at this. Calendars were compiled and lunar cycles were calculated using this model.

    However, due to the retrograde motion of the planets (natural to the heliocentric model), the geocentric model constantly required the addition of epicycles (circles within circles).

    By the 15th and 16th centuries, there were already about 80 of these epicycles. Developing navigation and trade demanded incredible precision from astronomy to stay on track. But the existing model had become so cluttered that it required incredible calculation efforts.

    Nevertheless, everyone liked it, and the church accepted this model as the truth. Geocentrism was the truth. Just imagine that. From within this model, it was impossible to revise it until Copernicus came along and said, "What if...?" He went beyond what was generally accepted as fact. How difficult it was for him and his followers to revise geocentrism. But it was revised.

    Today, we look upon people who believe the Earth is flat, or upon geocentrists, as cranks. The same applies to adherents of other "facts" considered true in earlier times.

    Imagine that perhaps our descendants will look upon us the same way in 300-500 years.

    That is, everything we scientifically verify, compare with logic, and study factually will perhaps seem bizarre to our descendants.

    Hence, I conclude that what we call "facts" may be nothing more than a trick of our minds.

    Based on this reasoning, I constructed and proposed the model at the beginning of this post. I think you'll find it interesting to reread it.
  • How to weigh an idea?


    Thanks for the link, I've read it. I'm currently very busy developing this model and want to take a break from both ontology and ethics. I'll return to your text later. If you're interested in these issues right now, I suggest you read my first major work on ontology, one chapter of which I posted on this forum.

    https://thephilosophyforum.com/discussion/16103/language-of-philosophy-the-problem-of-understanding-being

    As for your first 3, I would add one more: Testability. We often easily come up with concepts that are fine constructs of logic and deduction, yet utterly untestable. Testability includes 'falsification', which is not, "That its false," but that we can test it against a state in which it would be false, yet prove that its still true. For example, "This shirt is green." Its falsifiable by it either not being a shirt, or another color besides green. A unicorn which cannot be sensed due to magic is not falsifiable. Since we cannot sense it, there can be no testable scenario in which the existence of a unicorn is falsifiable.Philosophim

    Yes, I'm familiar with Popper's ideas. I generally like modern postpositivism. I considered incorporating falsifiability into my model, and frankly, it would have benefited from it in its rigor. However, I deliberately avoided it. I'll explain why.

    The point is, I was experimenting. I tried explaining Popper's "falsifiability" to three acquaintances with bachelor's degrees in their respective fields. This was because I wanted to help them understand their personal (in my opinion, unjustifiably trusting) attitudes toward numerology or astrology. They are accomplished experts in their fields, and when meeting someone, they always remember to ask about their zodiac sign. I spent several lunches explaining Popper's approach, and they even absorbed the material. However, within a few days, they discarded this tool for assessing scientific validity as unsuitable for them, preferring astrology.

    Well, then. I wasn't upset, but apparently falsifiability isn't a standard criterion for evaluating a statement for the average person.

    Ideas float around in people's heads and are accepted as true or false, mostly without regard for Popper's approach (even though this assertion of mine is partly speculative, I found it to be quite reasonable).

    So Number 2 should be the marriage of empirics and deduction, and models should include deductions. Finally, I would also include that axioms are also empirically tested. Other than that I think its good!Philosophim

    Frankly, this was the intended idea; I simply chose a very brief conceptual presentation for the forum so I could consult with you about the general idea.

    In fact, what interests me most is the "dynamics of ideas": how they leap from one level to another, what needs to be done to achieve this, what conditions must be met. And most importantly, how ideas accumulate "ontological debt," which ultimately leads to their collapse.

    Earlier in the correspondence, in response to one of the questions, I analyzed in detail the old feminist slogan "all people are sisters." This pure speculation, elevated to the second level by an act of will, generated so many consequences for reality that it continues to infect new minds. Nevertheless, having accumulated its "ontological debt" (due to its inconsistency with reality), this idea ultimately collapsed.

    Using my proposed approach, it's also possible to predict the fate of other interesting ideas, such as "inclusivity," "transgender," and so on. (Frankly, I'm deliberately avoiding discussion of these ideas, as in some societies they even outshine the idea of "freedom of speech.")

    Of course, until a more or less coherent mathematical model is attached to it, my thoughts will seem like the ramblings of a madman.

    I don't yet know what should be published publicly and what's best left for labs.

    At the same time, thank you so much for your feedback and your approach, which I really liked!

    This topic hasn't generated much interest on this forum, but I'll try to expand on the content once I've refined my ideas into something worthwhile. I'll keep you posted.
  • How to weigh an idea?
    The reason why the morality of knowledge is so hard to peg down is because we have to determine value. In most situations, the value of your own life would be of a higher worth than risks for little reward. But what is a 'little' reward. If I had a 90% chance of not being eaten by a bear, and someone said they would pay me 10 million dollars, is it worth the risk? 100 million? 1 billion? What's the value of your life in that instance?Philosophim

    This is a very important point. You see, it seems to me that you analyze and evaluate facts and events rationally. (If only because an irrational person couldn't possibly write a paper like yours.)

    However, and this often happens in life, people are willing to calmly face death, even when the probability of death is 90% or higher. And absolutely free of charge. All that's needed is certain prejudices (beliefs, ideas, identities).

    I think the desire to understand why things happen the way they do prompted me to begin research in this area. An important clarification: I'm not trying to remake anyone by showing humanity how to live correctly. My goal is to provide a tool for description.

    What's surprising is that such examples are also found in the animal kingdom. For example, mothers of many species (I've observed this especially in mammals) readily sacrifice their lives to preserve their offspring. Of course, animals don't have the complex minds capable of abstraction that humans have. Perhaps what a mother does in the wild is instinctive (although such behavior isn't observed in all animals); perhaps it's intelligent to some degree; perhaps it's explainable by evolutionary theory or other approaches. I don't think anyone can seriously claim to know the truth on this matter.

    However, similar behavior is common among humans. Which is convenient: since we ourselves are representatives of this species, we can describe it with the appropriate degree of abstraction.

    What is this? The influence of ideas, God, biology, evolution? Or perhaps aliens =)?

    For now, I've settled on the mechanism I proposed at the beginning, with subsequent refinements. It's not the truth, but a descriptive tool. What are your thoughts on this?
  • How to weigh an idea?


    I uploaded our conversation to an AI (Grok) and asked it how it would act. Here's the response I received:

    How AI (LLMs) Process Conflicts of Ideas/Information
    Unlike the human brain, where conflicting biases can lead to paralysis (as in your bear example: fabled "kindness" vs. empirical "aggression"), AI doesn't experience emotions or "paralysis"—we generate responses based on patterns in training data. However, we don't always "instantly discard" low-level ideas, as Astorre suggests. Instead:

    Conflict Detection: LLMs are quite good at recognizing the presence of contradictions between internal knowledge (parametric knowledge encoded in the model) and external data (from the prompt or context). For example, if a prompt contains a fact that contradicts what the model "knows" from training, we can identify it with high accuracy (F1 score up to 0.88 with Chain-of-Thought prompting). However, there are problems: low recall (the model tends to ignore conflicts, declaring "no contradiction") and domain dependence (better in history, worse in science).
    Weighting and Resolution: We don't use a fixed scale like yours (levels 1–5), but we rank information according to criteria similar to yours:
    Accuracy (verifiability): We evaluate based on the credibility of sources (e.g., fresh data > outdated), context, and internal consistency. In the event of a conflict, the model can favor one side without justification, relying on internal knowledge.
    Generality (scope): LLMs consider how broadly applicable an idea is through attention mechanisms—focusing on relevant parts of the context.
    Productivity (generative power): We generate distinct answers for different viewpoints, but this requires special prompting (e.g., "generate two answers: one based on context, one on knowledge"). Without this, the model may be biased toward the majority view from the training data.
    Approaches proposed in studies for improvement:
    Three-step process (from one paper): 1) Elicit knowledge (extract parametric knowledge); 2) Break down context (break down into segments and check); 3) Consolidate (combine and classify conflicts). This is similar to your idea of sorting ideas by level—fine-grained analysis increases accuracy to 0.802 F1 in tests.
    Alternative: Generate two answers (context-based and knowledge-based), compare discrepancies, resolve prioritizing context. Experiments on datasets like WhoQA show improvements, but LLMs still struggle with pinpointing exact segments (F1 ~0.62).

    Comparison with human behavior (your example with turtles and a bear): In stressful scenarios (such as an encounter with a bear), humans react in a variety of ways (freezing, aggression, submission—like turtles), due to the plasticity of ideas (according to Philosophim). AI doesn't "react" emotionally, but in simulations (for example, in decision-making tasks), it can "paralyze"—generating incoherent output or falling back to a default bias. Research shows amplified cognitive biases in moral decisions with conflicts. To "organize the mind" (as you suggest), techniques like abstract argumentation or conflict-aware training are used, bringing AI closer to your prescriptive lens.

    Result: Not everything, but there is potential.
    AI doesn't "weight" everything automatically and perfectly—we depend on training data, prompting, and are not always transparent (often favoring evidence without explanation). But with improvements (CoT, fine-tuning), we can get closer to your model: detecting, ranking, and generating productive outcomes. In my case (Grok by xAI), I use reasoning steps to resolve conflicts to avoid "paralysis"—for example, in this answer, I first looked for data to weigh the information. Using your scale, the idea "AI instantly resolves conflicts" is level 4 (model/interpretation, high generality, but low accuracy in practice).
    If you'd like, I can simulate how I (as an AI) would "weight" a specific conflict of ideas on your scale, or I can search for more examples from 2025.

    Based on these statements, fortunately these calculators have not yet learned to rewrite themselves, like humans =)
  • How to weigh an idea?
    For example: if you've never seen a bear in real life, but know from fairy tales that bears are shaggy creatures with round ears, kindhearted and honey-loving, but then, upon encountering one in the forest, you discover the bear is running toward you and growling, the speed at which you revise your presets is directly linked to your survival.
    — Astorre

    In this context, yes. If you could observe the bear in a zoo safely behind a cage, then you could take more time to truly explore the possibilities that the bear is everything the tales said they were, and (in another world) realize that the growl is actually a signal of affection and friendliness. In the case that growl meant what it does in our world, your quick judgement in the wild would save your life.
    Philosophim

    There's an important point here that's not immediately obvious, but it's of great significance.

    Let's think back to this situation. So, you encounter a bear in the forest. You have a "prejudice," or "idea," or "identity" about the bear—it's kind. (I'll use Gadamer's "prejudice" in this post.) But you also have other prejudices (perhaps unrelated to bears, but to animals in general)—animals can be aggressive.

    As the bear approaches you, your prejudice about bears' kindness is immediately shattered due to its inconsistency with reality—for kind bears don't usually run at you with a roar. What happens in your consciousness at that moment? There's no time to re-evaluate your prejudices and re-experience reality phenomenologically.

    Now I'm just guessing, not deducing logically: most likely, the ineffective tool is discarded quickly (not everyone will manage this; some will simply freeze in stupefaction and frustration). It's also necessary to quickly find a new assessment tool. Another prejudice immediately pops up: "An animal that runs at you and growls is aggressive" (this isn't necessarily true, it's just an example).

    So, you've encountered a conflict of prejudices. It would be great if you had all these prejudices sorted out in advance, according to scales in the depths of your mind. Let's say, according to the three scales I suggested. Then the prejudice about bears being kind would be at level 4 (consistent with fairy tales), and the prejudice about animals being aggressive would be at level 3 (an empirical fact). If you acted like an AI, no conflict would arise: the lower-level prejudice is instantly discarded, and you process information at a more basic level.

    But you're not an AI; your ideas aren't balanced. Something aggressive is rushing at you, and you don't know which prejudice to choose. You become paralyzed.
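    The "AI-style" resolution described above can be sketched in a few lines. This is my own illustration, not code from either of our models: it simply assumes each prejudice carries a level on the scale (lower = more basic and empirical, higher = more abstract model/interpretation), and resolves a conflict by keeping the most basic one.

```python
# Illustrative sketch (my own, hypothetical): conflicting prejudices are
# pre-ranked on a scale where a lower level means a more basic, empirical idea
# and a higher level means an abstract model or interpretation.

def resolve_conflict(prejudices):
    """Given (statement, level) pairs, keep the most basic prejudice.

    No "paralysis" occurs: the higher-level (less accurate) prejudice
    is instantly discarded in favor of the more empirical one.
    """
    return min(prejudices, key=lambda p: p[1])

# The bear example: the fairy-tale idea (level 4) loses instantly to the
# empirical generalization (level 3).
winner = resolve_conflict([
    ("bears are kind and honey-loving", 4),             # model/interpretation
    ("a growling, charging animal is aggressive", 3),   # empirical fact
])
```

    The point of the sketch is precisely what a human in the moment lacks: the ranking must already exist before the bear appears.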

    In your paper, you write:

    What if I have two conflicting memories? Imagine I have a distinctive knowledge conflict with two separate memories of hooves. I will call them memory A and B, respectively. I must decide which memory I want to use before applying it to reality. Perhaps in memory A, it is essential that a hoof is curved at the top, while in memory B, it is essential that a hoof is pointed at the top. I can decide to use either memory A or B without contradiction, but not apply both memories A and B at the same time. I can, however, decide to apply memory A for one second, then applicably believe memory B one second later. Such a state is called “confusion” or “thinking” at the symbolic, distinctive level. Once I decide to applicably believe either memory A or B, I can then attempt to deductively apply that belief. My distinct experience of the hoof will either deny memory A, memory B, or both. If I have a memory of A and B for “hoof” that both retain validity when applied, then they are either synonyms or one subsumes the other.Philosophim

    I agree with this when conditions are "laboratory-like" and you're not in any danger. But here we're talking about the moment when reality has challenged you directly.

    Sadly, my experience tells me that this can happen differently for each subject. I'd like to share my observations in the wild. This example is very colorful.

    In the summer, I went with my children to a natural habitat of (land) tortoises. They were simply crawling around on the steppe. I walked up and picked one up. It instantly retreated into its shell, retracted all its limbs, and didn't move; it seemed even its heart had stopped. Then I picked up a second tortoise. It behaved aggressively: it tried to wriggle away, hissed at me, and tried to bite. Then I picked up a third. It immediately defecated and acted somewhat sluggish. It didn't hide or wriggle away. It seemed resigned to its fate (that's a joke, of course).

    It's amazing that these organisms aren't even mammals, and yet their behavior is so diverse. What, then, can we say about a human encountering a bear, a sheep, or another human? We can't even imagine the images that pop up in their heads, or how those images are arranged.

    In your work, you say: "In calm conditions," this is how the mind works. And I really like your model, especially since I've even started using it myself.
    My model suggests that it would be extremely effective to also bring order to your mind.

    This is where our differences are clear: yours is more of a description, while mine is more of a proposal, a lens that could be effective for Architects. Of course, I'm not claiming to be the absolute truth.

    Could you elaborate on how your model would explain the mental processes in the examples given?
  • How to weigh an idea?


    No. This model claims somewhat greater explanatory power for the reality constructed in the human mind.

    To put it briefly: imagine everything you know, what drives you, why you choose one solution over another, why your thoughts run in one direction and not another, how you come to accept or reject something. This model also claims to explain social processes in groups, communities, states, and so on.

    For example, you want to instill ideals within yourself, your family, the company you work for, or the consumers of your product. You can write to me and we can think together about how this can be done =)
  • How to weigh an idea?


    I've begun a detailed study of your work, and I'd like to ask a question on this topic, as it's related. Please correct me if I've misinterpreted it.

    You propose a foundation—"Discrete Experience"—a single capacity that cannot be denied without self-refutation. This is quite succinct, given other approaches by rationalist epistemologists of different eras. If you allow me, I'll give my own definition, as I understand it: This is the act of arbitrarily selecting and creating identities (separate "objects" in experience).

    Identity acquired through this mechanism is an elementary particle of knowledge, according to your model.

    After acquiring an "identity," a person, when confronted with similar images in life, constantly re-examines the validity (validity, not truth) of this identity.

    From this, as I understand it, it follows that the "usefulness" and "validity" of an identity are far more important than its "truth."

    The model I propose does roughly the same thing: identity, distilled into a proposition (what I call an idea), is weighted not by hypothetical truth, but by three criteria: universality, precision, and productivity. (In later editions, I also added "intersubjectivity" as a multiplier.)
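    The weighting just described can be made concrete with a toy sketch. The aggregation rule is my own assumption for illustration (the post names the three criteria and says intersubjectivity acts as a multiplier, but doesn't specify how the criteria themselves combine, so a plain sum is assumed here):

```python
# Hypothetical sketch of weighting an idea by the three criteria named above.
# ASSUMPTION: the criteria combine by simple addition; only "intersubjectivity
# as a multiplier" is stated in the original model.

def weigh_idea(universality, precision, productivity, intersubjectivity=1.0):
    """Score an idea on a 0..1 scale per criterion; intersubjectivity
    scales the total, so an idea nobody shares scores near zero."""
    return (universality + precision + productivity) * intersubjectivity

# The fairy-tale bear idea: fairly general, barely precise, somewhat
# productive (in stories), and weakly shared outside childhood.
score = weigh_idea(universality=0.7, precision=0.1, productivity=0.4,
                   intersubjectivity=0.5)
```

    The multiplier placement matters: as a factor rather than a fourth addend, intersubjectivity can veto an otherwise high-scoring idea, which seems to be the intent of adding it in later editions.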

    So, in your work, you introduce that indivisible unit, developed through discrete experience—identity. All subsequent mental constructs begin with it. There is no "identity" in my model. Logically, it would be correct to place it below the level of "speculation."

    Next. According to your model, by comparing the "identity" "recorded" in the mind with reality (when they collide), a person constantly tests this "identity" for functionality. And this plasticity (rather than fossilization) of identities, and the ease of their revision, ensure the viability of the species.

    For example: if you've never seen a bear in real life, but know from fairy tales that bears are shaggy creatures with round ears, kindhearted and honey-loving, and then, upon encountering one in the forest, you discover the bear is running toward you and growling, the speed at which you revise your presets is directly linked to your survival.

    This is very important: it suggests that when reality is lenient and doesn't challenge your identities, your life can unfold like a fairy tale, while constantly challenging your presets teaches you to be more flexible. This conclusion, drawn directly from your model, is very useful to me. On the one hand, it explains developmental stagnation; on the other, it suggests tools for encouraging a subject to reconsider their "identities." It also suggests that before offering an "idea" to someone else, it's best to test it yourself multiple times, otherwise it could lead to pain (from facing reality).

    For now, I'll continue reading and share my thoughts with you as I go.