Maybe we are among the beautiful-haired people who use the best product for our beautiful hair. Maybe we are against abortion and belong with those who struggle to prevent abortions. Or the new one, maybe we look like a girl but feel like a boy. The point is we are getting our identities by imagining we are members of groups, and some of these groups believe ridiculous things, such as we are told that we have to wear masks because the government wants to control us. Don't get vaccinated because.....? :brow: I am sorry, but we are not seeking truth. We want to be loved and accepted and valued, and that means finding the group that best fits us, and boy, oh boy, can some of these people be radical. — Athena
This is a very important detail in the formation of beliefs. It is precisely this desire to belong to a group, so simple and stubborn, that dictates many of our prejudices (ideas). Rarely is anyone willing to declare something true in defiance of the community or group to which they belong.
This has long been a restraining factor and a powerful tool in the hands of "social engineers."
It stems from the feeling of security that group membership provides. The desire to be understood and included. The notion of a shared identity and the need to fit in. However, the modern world and the internet, as well as large metropolitan areas, have somewhat altered this. Now you can find like-minded people online; there is no longer any need to know your neighbors or stick together in extended families. The world has become more individualistic. AI has exacerbated this further: now, even for a heart-to-heart conversation, you don't need to maintain a close relationship with anyone. After all, you have a wonderful, flattering companion in your pocket, ready to share your every experience, offer wise advice, and adapt to you as no one else ever has.
Echo chambers or global villages. At the same time, despite the new shape of society, even the most extreme individualism does not give us the mental mobility to reconsider our ideas. It does not refine the cognitive lens to a philosophical degree of purity. The desire to conform to prevailing ideas remains within us, even when the communities that share them are already a figment of our imagination.
What awaits us next? Deepening relativism, the destruction of old dogmas, and the overthrow of "gods"? Or is such a structure completely unsustainable, so that a society decayed by the lack of a unified ideology will be replaced by other, more united ones? One can only guess.
In exploring this topic earlier, I introduced an additional factor in the evaluation of an idea into my model: intersubjectivity.
Intersubjectivity is the number of minds in which an idea has been accepted as dogma.
However, when analyzing the hierarchy of personality, it is not as universal as when analyzing society. Some beliefs (ideas) can even be found only once in a single individual and still guide their actions.
Therefore, at the current stage, we have four tools for assessing the "weight of an idea":
1. Universality
2. Accuracy
3. Productivity
4. Intersubjectivity
But I still think this is insufficient. There must be something else.
How can we build a better hierarchy of thinking? — Athena
A great method for this was suggested by Popper, previously mentioned in this thread:
As for your first 3, I would add one more: Testability. We often easily come up with concepts that are fine constructs of logic and deduction, yet utterly untestable. Testability includes 'falsification', which is not, "That its false," but that we can test it against a state in which it would be false, yet prove that its still true. For example, "This shirt is green." Its falsifiable by it either not being a shirt, or another color besides green. A unicorn which cannot be sensed due to magic is not falsifiable. Since we cannot sense it, there can be no testable scenario in which the existence of a unicorn is falsifiable. — Philosophim
However, as he later noted, and I agree with him, the method is somewhat clumsy:
I too have found explaining falsifiability to be 'clunky'. — Philosophim
I propose this approach, although it's more of a laboratory, philosophical exercise than a widespread practice.
It's called the "inversion filter." The essence of the method is this: take any Level 2 statement and flip it into its opposite. Then check whether the resulting statement is more effective than the original.
For example, take the statement "All bears are kind." We flip it: "All bears are mean." Both statements are false, but that's fine, since either could serve as a dogma. Next, we check which of the two would generate more productivity for us if we found ourselves in a forest with wild bears. Given what we know about bears, the latter, of course. And for someone who knows nothing about bears except that they are kind, the inversion would at least sow doubt about that belief's truth.
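If it helps to see the method laid out mechanically, here is a toy sketch in Python. Everything in it is my own invention for illustration: the function name, and especially the "productivity" scores, which in reality would come from judgment and experience, not from a lookup table.

```python
# Toy model of the "inversion filter": pair a statement with its inversion,
# score each by how productive acting on it would be in a given scenario,
# and keep whichever scores higher. The scores below are invented
# placeholders, not measurements of anything.

def inversion_filter(statement, inversion, productivity):
    """Return whichever of the two statements scores higher
    under the (hypothetical) productivity function."""
    return max([statement, inversion], key=productivity)

# Hypothetical scores for the forest-with-wild-bears scenario:
# acting on "All bears are mean" (keeping your distance) is assumed
# to serve you far better than acting on "All bears are kind".
scores = {
    "All bears are kind": 0.1,  # approaching a wild bear: bad outcome
    "All bears are mean": 0.9,  # caution: much better expected outcome
}

winner = inversion_filter(
    "All bears are kind",
    "All bears are mean",
    lambda s: scores[s],
)
print(winner)  # -> All bears are mean
```

The point of the sketch is only that the filter compares a statement against its own inversion by consequences, not by truth: both candidate dogmas can be false and the comparison still tells you something.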
Try it yourself with other Level 2 ideas. What do you come up with?