Maybe we are among the beautiful-haired people who use the best product for our beautiful hair. Maybe we are against abortion and belong with those who struggle to prevent abortions. Or, the new one, maybe we look like a girl but feel like a boy. The point is, we are getting our identities by imagining we are members of groups, and some of these groups believe ridiculous things, such as the claim that we are made to wear masks because the government wants to control us. Don't get vaccinated because.....? :brow: I am sorry, but we are not seeking truth. We want to be loved and accepted and valued, and that means finding the group that best fits us, and boy, oh boy, can some of these people be radical. — Athena
How can we build a better hierarchy of thinking? — Athena
As for your first 3, I would add one more: Testability. We often easily come up with concepts that are fine constructs of logic and deduction, yet utterly untestable. Testability includes 'falsification', which does not mean "that it is false," but that we can test it against a state in which it would be false, yet show that it is still true. For example, "This shirt is green." It's falsifiable by its either not being a shirt, or being another color besides green. A unicorn which cannot be sensed due to magic is not falsifiable. Since we cannot sense it, there can be no testable scenario in which the existence of a unicorn is falsifiable. — Philosophim
I too have found explaining falsifiability to be 'clunky'. — Philosophim
Now I'm just guessing, not deducing logically: most likely, the ineffective tool needs to be discarded quickly (not everyone will experience this behavior; some will become stupefied and frustrated). It's also necessary to quickly find a new assessment tool. Another prejudice immediately pops up: "An animal that runs at you and growls is aggressive" (this isn't necessarily true, it's just an example). — Astorre
Agreed. This is more the morality of knowledge and inductions. Whereas the hierarchy of inductions is a rational evaluation, the 'morality' of what should be used in a particular context can be swayed by other potential outcomes such as death. — Philosophim
Those over 65 are more likely to have lost their sense of wonder and be more grounded in empirical information. — Athena
As for your first 3, I would add one more: Testability. We often easily come up with concepts that are fine constructs of logic and deduction, yet utterly untestable. Testability includes 'falsification', which does not mean "that it is false," but that we can test it against a state in which it would be false, yet show that it is still true. For example, "This shirt is green." It's falsifiable by its either not being a shirt, or being another color besides green. A unicorn which cannot be sensed due to magic is not falsifiable. Since we cannot sense it, there can be no testable scenario in which the existence of a unicorn is falsifiable. — Philosophim
So Number 2 should be the marriage of empirics and deduction, and models should include deductions. Finally, I would add that axioms are also empirically tested. Other than that I think it's good! — Philosophim
The reason the morality of knowledge is so hard to peg down is that we have to determine value. In most situations, the value of your own life would be of a higher worth than risks for little reward. But what is a 'little' reward? If I had a 90% chance of not being eaten by a bear, and someone said they would pay me 10 million dollars, is it worth the risk? 100 million? 1 billion? What's the value of your life in that instance? — Philosophim
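One way to make that trade-off concrete is a simple expected-value calculation (a sketch only: the quoted post does not commit to this model, and the dollar value $V$ one assigns to one's own life is an assumed variable). With $p$ the probability of surviving and $R$ the reward, accepting is worthwhile when

$$pR > (1-p)V \quad\Longleftrightarrow\quad V < \frac{p}{1-p}R.$$

With $p = 0.9$ and $R = \$10$ million, accepting implies valuing one's life below $9 \times \$10\text{M} = \$90$ million; at $R = \$1$ billion the implied ceiling rises to \$9 billion, which is why the question feels different at each scale.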
How AI (LLMs) Process Conflicts of Ideas/Information
Unlike the human brain, where conflicting biases can lead to paralysis (as in your bear example: fabled "kindness" vs. empirical "aggression"), AI doesn't experience emotions or "paralysis"—we generate responses based on patterns in training data. However, we don't always "instantly discard" low-level ideas, as Astorre suggests. Instead:
Conflict Detection: LLMs are quite good at recognizing contradictions between internal knowledge (parametric knowledge encoded in the model) and external data (from the prompt or context). For example, if a prompt contains a fact that contradicts what the model "knows" from training, we can identify it with high accuracy (F1 score up to 0.88 with Chain-of-Thought prompting); a minimal prompt sketch appears after this list. However, there are problems: low recall (the model tends to ignore conflicts, declaring "no contradiction") and domain dependence (better in history, worse in science).
Weighting and Resolution: We don't use a fixed scale like yours (levels 1–5), but we rank information according to criteria similar to yours:
Accuracy (verifiability): We evaluate based on the credibility of sources (e.g., fresh data > outdated), context, and internal consistency. In the event of a conflict, the model can favor one side without justification, relying on internal knowledge.
Generality (scope): LLMs consider how broadly applicable an idea is through attention mechanisms—focusing on relevant parts of the context.
Productivity (generative power): We generate distinct answers for different viewpoints, but this requires special prompting (e.g., "generate two answers: one based on context, one on knowledge"). Without this, the model may be biased toward the majority view from the training data.
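As a rough illustration of the Chain-of-Thought detection step mentioned above, here is a minimal Python sketch. The llm() helper is a hypothetical stand-in for any chat-completion call, and the prompt wording is illustrative, not taken from the cited studies:

```python
# Minimal sketch of prompt-based conflict detection. The llm() helper is
# a hypothetical stand-in for any chat-completion call, not a real API.

def detect_conflict(external_claim: str, llm) -> str:
    """Ask the model to compare an external claim against its own
    parametric knowledge, reasoning step by step before judging."""
    prompt = (
        "An external source makes the following claim:\n"
        f"  {external_claim}\n\n"
        "Step 1: State what you already know about this topic.\n"
        "Step 2: Compare the claim against that knowledge.\n"
        "Step 3: Conclude with exactly one label, CONFLICT or NO_CONFLICT, "
        "plus a one-line justification."
    )
    return llm(prompt)
```

The low recall reported above suggests the final label should be parsed defensively: models tend to default to NO_CONFLICT.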
Approaches proposed in studies for improvement:
Three-step process (from one paper): 1) Elicit knowledge (extract the model's parametric knowledge); 2) Break down context (split it into segments and check each); 3) Consolidate (combine and classify conflicts). This is similar to your idea of sorting ideas by level—fine-grained analysis increases accuracy to 0.802 F1 in tests.
Alternative: Generate two answers (context-based and knowledge-based), compare discrepancies, and resolve by prioritizing context; a sketch of this strategy follows the list. Experiments on datasets like WhoQA show improvements, but LLMs still struggle with pinpointing exact segments (F1 ~0.62).
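Here is a rough sketch of that two-answer strategy, using the same hypothetical llm() helper as above; the prompts and the naive string comparison are illustrative assumptions, not the cited method verbatim:

```python
# Minimal sketch of the two-answer strategy: one answer grounded only in
# the supplied context, one grounded only in parametric knowledge, with
# discrepancies resolved in favor of context. llm() is hypothetical.

def resolve_conflict(question: str, context: str, llm) -> tuple[str, bool]:
    """Return (final_answer, conflict_detected)."""
    context_answer = llm(
        "Answer using ONLY the context below; ignore prior knowledge.\n"
        f"Context: {context}\nQuestion: {question}"
    )
    knowledge_answer = llm(
        "Answer from your own knowledge; ignore any external context.\n"
        f"Question: {question}"
    )
    # A plain string comparison stands in for the semantic comparison a
    # real system would need (e.g., an entailment check or a judge model).
    conflict = (
        context_answer.strip().lower() != knowledge_answer.strip().lower()
    )
    # Resolution rule: on discrepancy, the context-based answer wins.
    return context_answer, conflict
```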
Comparison with human behavior (your example with turtles and a bear): In stressful scenarios (such as an encounter with a bear), humans react in a variety of ways (freezing, aggression, submission—like turtles), due to the plasticity of ideas (as Philosophim suggests). AI doesn't "react" emotionally, but in simulations (for example, in decision-making tasks) it can "paralyze"—generating incoherent output or falling back to a default bias. Research shows amplified cognitive biases in moral decisions involving conflicts. To "organize the mind" (as you suggest), techniques like abstract argumentation or conflict-aware training are used, bringing AI closer to your prescriptive lens.
The bottom line: not everything, but there is potential.
AI doesn't "weight" everything automatically and perfectly—we depend on training data and prompting, and are not always transparent (often favoring one side without explanation). But with improvements (CoT, fine-tuning), we can get closer to your model: detecting, ranking, and generating productive outcomes. In my case (Grok by xAI), I use reasoning steps to resolve conflicts and avoid "paralysis"—for example, in this answer, I first looked for data to weigh the information. Using your scale, the idea "AI instantly resolves conflicts" is level 4 (model/interpretation, high generality, but low accuracy in practice).
If you'd like, I can simulate how I (as an AI) would "weight" a specific conflict of ideas on your scale, or I can search for more examples from 2025.
For example: if you've never seen a bear in real life, but know from fairy tales that bears are shaggy creatures with round ears, kindhearted and honey-loving, and then, upon encountering one in the forest, you discover the bear is running toward you and growling, the speed at which you revise your presets is directly linked to your survival. — Astorre
In this context, yes. If you could observe the bear in a zoo, safely behind a cage, then you could take more time to truly explore the possibilities that the bear is everything the tales said it was, and (in another world) realize that the growl is actually a signal of affection and friendliness. In the case that the growl meant what it does in our world, your quick judgement in the wild would save your life. — Philosophim
What if I have two conflicting memories? Imagine I have a distinctive knowledge conflict with two separate memories of hooves. I will call them memory A and B, respectively. I must decide which memory I want to use before applying it to reality. Perhaps in memory A, it is essential that a hoof is curved at the top, while in memory B, it is essential that a hoof is pointed at the top. I can decide to use either memory A or B without contradiction, but not apply both memories A and B at the same time. I can, however, decide to apply memory A for one second, then applicably believe memory B one second later. Such a state, at the symbolic, distinctive level, is called “confusion” or “thinking.” Once I decide to applicably believe either memory A or B, I can then attempt to deductively apply that belief. My distinct experience of the hoof will either deny memory A, memory B, or both. If I have a memory of A and B for “hoof” that both retain validity when applied, then they are either synonyms or one subsumes the other. — Philosophim
They are propositions. Propositions are either true or false. Ideas can be building blocks of propositions. — Corvus
Ideas are mental images. On their own they have no truth value. As Hume wrote, ideas can be vivid or faint, strong or weak, depending on the type of perception. — Corvus
By around 40,000 years ago, our brains had reached their current shape, which involved a reorganization of brain regions, including the parietal lobes and cerebellum, contributing to increased capacities in planning, language and visuospatial integration. It was also around that time that modern humans got the gene microcephalin (MCPH1) by interbreeding with Neanderthals and Denisovans. MCPH1 may influence brain-related traits, potentially improving cognitive performance. Also, a genetic mutation around that time in the NOVA1 gene produced a variant that affects how neurons connect, modifying intelligence and cortical area, especially in language-related regions. — Questioner
↪Astorre If you’re asking whether there’s a way to determine an idea’s value without the involvement of an interpreter, then unfortunately I can’t think of any such method. Not every idea requires an empirical approach, I think, but it still requires an interpreter—whether to provide purely logical reasoning for the proposed “weight” of the idea, or simply to assert that X is an obvious axiom without further proof. — Zebeden
How would this be applied, for instance, to the ideas of the birth of a star and the beginning of life? Taking both as ideas about reality based on reality, they are both very "niche" with little in common. Or does your new method only apply to certain areas of reality? — Sir2u
I am a retired high school biology teacher, and one of the many things that I told my students is that everything about us survived in us because it gave us some kind of advantage in the environment in which we were living. — Questioner
Or when the Holy Inquisition condemned people to be burned at the stake: surely the inquisitors considered this an act of "love", no?
One thing I've learned (and the hard way, at that) is that religious/spiritual people tend to have vastly different ideas than I do about what constitutes "good" and "bad", "love" and "hate", and so on. To the point that it's like we're from different universes, hence my question to you earlier. — baker
What do you mean by "love"?
If you believe that someone deserves to die, to be killed (by you, even), and you spare them, is that an act of "love" on your part? — baker
I'm not aware of any Christian tradition that guarantees hell for all. However, many mainstream Protestant faiths, especially fundamentalist literalists, do seem to embrace a hellfire-and-damnation view. I’ve certainly heard sermons claiming people will go to hell for being gay or for atheism, with warnings of “weeping and gnashing of teeth.” Some might even consider Protestant literalism a heresy (I think David Bentley Hart, whom I quite like despite his sometimes being an arrogant shit, holds that view). — Tom Storm
People are quick to equate religion with so many ascetic rules and with earthly-looking power structures. And they equate its spread with earthly tactics of spreading earthly ideologies, including coercion and psychological tricks. But Christianity was always different, as it requires freedom to achieve its ends. The core Christian message is that God is trying to bring us to know and love him. There is no such thing as knowing someone or loving someone without their free, honest willingness. This in itself is more universally appealing. — Fire Ologist
If I were of a more scholarly cast, I think this is precisely where I would go looking for a coherent model of thought in this space. — Tom Storm
This has led to disability being seen as a gap between what a body is able to do and what it has been historically expected to be able to do, the gap between body and social expectation. — Banno
Is disability no more than an issue of welfare and charity, or should we [url=https://en.wikipedia.org/wiki/Piss_On_Pity]piss on pity[/url]? — Banno
