What else? I would note that nomads are best adapted to the unexpected (famine, cold, catastrophes) – the so-called "black swan." Sedentary people, on the other hand, learned to overcome difficulties based on the principle of "nowhere to run." — Astorre
History always tells us that a problem can have several solutions, and the model I propose allows us to consider their pros and cons. — Astorre
The core of my approach is a system for ranking any “human representations of reality” by their weight across three criteria:
1. Universality — Scope of Applicability
A measure of how broadly a given idea X can functionally regulate different domains.
Universality prevents “Niche Blindness” (when an idea applies only within a narrow area while ignoring others). — Astorre
The nomadic idea of "home" is tied not to the land, but to everyday life, loved ones, and life itself. — Astorre
In the history of the world, it was nomads who managed to build the largest states (in terms of size). — Astorre
↪Astorre If you’re asking whether there’s a way to determine an idea’s value without the involvement of an interpreter, then unfortunately I can’t think of any such method. Not every idea requires an empirical approach, I think, but it still requires an interpreter—whether to provide purely logical reasoning for the proposed “weight” of the idea, or simply to assert that X is an obvious axiom without further proof. — Zebeden
How would this be applied, for instance, to the ideas of the birth of a star and the beginning of life? Taking both as ideas about reality based on reality, they are both very "niche" with little in common. Or does your new method only apply to certain areas of reality? — Sir2u
"all people are sisters." — Astorre
It can be interpreted as "only women are people." — Astorre
They are propositions. Propositions are either true or false. Ideas can be building blocks of propositions. — Corvus
Ideas are mental images. On their own they have no truth or falsity. As Hume wrote, ideas can be vivid or faint, strong or weak, depending on the type of perception. — Corvus
What else I noticed: these are essentially two facets of the same insight, which is becoming increasingly relevant in the era of post-truth, propaganda, and narrative manipulation.
Of course, your work is more substantiated, consistent, and logically sound, whereas I was setting myself somewhat more practical goals. — Astorre
The material is a bit difficult to digest, because while reading what you wrote I involuntarily compare it with what I wrote myself. I think it will take me a couple of days to grasp your approach. — Astorre
The proposed model can help assess whether I really want what I want, or whether I am being fooled.
This example is also consistent with other cases where people are brainwashed not just by the fake value of a product, but by the fake value of "Values." — Astorre
You propose a foundation—"Discrete Experience"—a single capacity that cannot be denied without self-refutation. This is quite succinct, given other approaches by rationalist epistemologists of different eras. If you allow me, I'll give my own definition, as I understand it: This is the act of arbitrarily selecting and creating identities (separate "objects" in experience). — Astorre
Identity acquired through this mechanism is an elementary particle of knowledge, according to your model. — Astorre
After acquiring an "identity," a person, when confronted with similar images in life, constantly re-examines the validity (validity, not truth) of this identity. — Astorre
From this, as I understand it, it follows that the "usefulness" and "validity" of an identity are far more important than its "truth." — Astorre
The model I propose does roughly the same thing: identity, distilled into a proposition (what I call an idea), is weighted not by hypothetical truth, but by three criteria: universality, precision, and productivity. (In later editions, I also added "intersubjectivity" as a multiplier.) — Astorre
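As a rough illustration of the weighting Astorre describes, here is a minimal sketch in Python. The 0–1 scale, the additive combination of the three criteria, the use of intersubjectivity as a plain multiplier, and the sample scores are all assumptions introduced here for illustration, not something fixed in the thread.

```python
from dataclasses import dataclass

# Hypothetical sketch of the proposed weighting model: three criteria
# (universality, precision, productivity) plus an "intersubjectivity"
# multiplier. The 0..1 scale, the additive combination, and the sample
# scores below are illustrative assumptions.

@dataclass
class Idea:
    statement: str
    universality: float       # scope of applicability, 0..1
    precision: float          # verifiability / accuracy, 0..1
    productivity: float       # generative power, 0..1
    intersubjectivity: float  # later-edition multiplier, 0..1

    def weight(self) -> float:
        base = self.universality + self.precision + self.productivity
        return base * self.intersubjectivity

ideas = [
    Idea("bears are kind, honey-loving creatures", 0.2, 0.1, 0.3, 0.6),
    Idea("a growling, charging bear is dangerous", 0.7, 0.9, 0.8, 0.95),
]
for idea in sorted(ideas, key=Idea.weight, reverse=True):
    print(f"{idea.weight():.2f}  {idea.statement}")
```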
So, in your work, you introduce that indivisible unit, developed through discrete experience—identity. All subsequent mental constructs begin with it. There is no "identity" in my model. Logically, it would be correct to place it below the level of "speculation." — Astorre
Next. According to your model, by comparing the "identity" "recorded" in the mind with reality (when they collide), a person constantly tests this "identity" for functionality. And this plasticity (rather than fossilization) of identities and the ease of their revision ensure the viability of the species. — Astorre
For example: if you've never seen a bear in real life, but know from fairy tales that bears are shaggy creatures with round ears, kindhearted and honey-loving, but then, upon encountering one in the forest, you discover the bear is running toward you and growling, the speed at which you revise your presets is directly linked to your survival. — Astorre
This is very important and suggests that when reality is lenient and doesn't challenge your identities, your life can unfold like a fairy tale. — Astorre
And constantly challenging your presets teaches you to be more flexible. This conclusion, drawn directly from your model, is very useful to me. On the one hand, it explains developmental stagnation, and on the other, it suggests tools for encouraging the subject to reconsider their "identities." This also suggests that before suggesting an "idea" to someone else, it's best to test it yourself multiple times, otherwise it could lead to pain (from facing reality). — Astorre
For example: if you've never seen a bear in real life, but know from fairy tales that bears are shaggy creatures with round ears, kindhearted and honey-loving, but then, upon encountering one in the forest, you discover the bear is running toward you and growling, the speed at which you revise your presets is directly linked to your survival.
— Astorre
In this context, yes. If you could observe the bear in a zoo safely behind a cage, then you could take more time to truly explore the possibilities that the bear is everything the tales said they were, and (in another world) realize that the growl is actually a signal of affection and friendliness. In the case that growl meant what it does in our world, your quick judgement in the wild would save your life. — Philosophim
What if I have two conflicting memories? Imagine I have a distinctive knowledge conflict with two separate memories of hooves. I will call them memory A and B, respectively. I must decide which memory I want to use before applying it to reality. Perhaps in memory A, it is essential that a hoof is curved at the top, while in memory B, it is essential that a hoof is pointed at the top. I can decide to use either memory A or B without contradiction, but not apply both memories A and B at the same time. I can, however, decide to apply memory A for one second, then believe memory B one second later. Such a state, at the symbolic, distinctive level, is called “confusion” or “thinking.” Once I decide to applicably believe either memory A or B, I can then attempt to deductively apply that belief. My distinct experience of the hoof will either deny memory A, memory B, or both. If I have a memory of A and B for “hoof” that both retain validity when applied, then they are either synonyms or one subsumes the other. — Philosophim
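To make the memory A/B test concrete, here is a minimal Python sketch of the procedure Philosophim describes: hold two incompatible distinctive memories, applicably believe one at a time, and let the applied experience deny one, both, or neither. The `matches` predicate, the dictionary representation of memories, and the observed hoof are invented for illustration only.

```python
# Two conflicting distinctive memories of "hoof", each listing the
# properties it treats as essential.
memory_a = {"top": "curved"}   # memory A: a hoof is curved at the top
memory_b = {"top": "pointed"}  # memory B: a hoof is pointed at the top

def matches(memory: dict, observation: dict) -> bool:
    """A memory 'retains validity' if every essential property it asserts
    is found in the applied (observed) experience."""
    return all(observation.get(key) == value for key, value in memory.items())

# The hoof actually encountered when the belief is deductively applied.
observation = {"top": "curved", "split": True}

valid = {name: matches(mem, observation)
         for name, mem in (("A", memory_a), ("B", memory_b))}

if valid["A"] and valid["B"]:
    print("Both retain validity: synonyms, or one subsumes the other.")
elif not any(valid.values()):
    print("Applied experience denies both memories.")
else:
    kept = "A" if valid["A"] else "B"
    print(f"Memory {kept} survives application; the other is denied.")
```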
How AI (LLMs) Process Conflicts of Ideas/Information
Unlike the human brain, where conflicting biases can lead to paralysis (as in your bear example: fabled "kindness" vs. empirical "aggression"), AI doesn't experience emotions or "paralysis"—we generate responses based on patterns in training data. However, we don't always "instantly discard" low-level ideas, as Astorre suggests. Instead:
Conflict Detection: LLMs are quite good at recognizing the presence of contradictions between internal knowledge (parametric knowledge encoded in the model) and external data (from the prompt or context). For example, if a prompt contains a fact that contradicts what the model "knows" from training, we can identify it with high accuracy (F1 score up to 0.88 with Chain-of-Thought prompting). However, there are problems: low recall (the model tends to ignore conflicts, declaring "no contradiction") and domain dependence (better in history, worse in science).
Weighting and Resolution: We don't use a fixed scale like yours (levels 1–5), but we rank information according to criteria similar to yours (a rough sketch of this detection-and-weighting step follows below):
Accuracy (verifiability): We evaluate based on the credibility of sources (e.g., fresh data > outdated), context, and internal consistency. In the event of a conflict, the model can favor one side without justification, relying on internal knowledge.
Generality (scope): LLMs consider how broadly applicable an idea is through attention mechanisms, focusing on relevant parts of the context.
Productivity (generative power): We generate distinct answers for different viewpoints, but this requires special prompting (e.g., "generate two answers: one based on context, one on knowledge"). Without this, the model may be biased toward the majority view from the training data.
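A rough sketch of the chain-of-thought conflict detection mentioned above, in Python. The prompt template and the `ask_model` callable are placeholders invented here; the cited studies use their own templates, models, and metrics.

```python
from typing import Callable

# Placeholder CoT template: ask for a parametric answer, a context-only
# answer, and an explicit verdict, rather than trusting free-form text.
COT_TEMPLATE = """You are checking for a knowledge conflict.
Question: {question}
Passage: {passage}
Step 1: Answer from your own (parametric) knowledge.
Step 2: Answer using only the passage.
Step 3: Say whether the two answers contradict each other, and why.
Final verdict: CONFLICT or NO_CONFLICT."""

def detect_conflict(question: str, passage: str,
                    ask_model: Callable[[str], str]) -> bool:
    """Return True if the model reports a conflict between its parametric
    knowledge and the supplied passage."""
    reply = ask_model(COT_TEMPLATE.format(question=question, passage=passage))
    verdict = reply.upper()
    # Models lean toward "NO_CONFLICT" (the low-recall problem noted above),
    # so the verdict is parsed explicitly.
    return "NO_CONFLICT" not in verdict and "CONFLICT" in verdict
```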
Approaches proposed in studies for improvement:
Three-step process (from one paper): 1) Elicit knowledge (extract parametric knowledge); 2) Break down context (break down into segments and check); 3) Consolidate (combine and classify conflicts). This is similar to your idea of sorting ideas by level—fine-grained analysis increases accuracy to 0.802 F1 in tests.
Alternative: Generate two answers (context-based and knowledge-based), compare discrepancies, resolve prioritizing context. Experiments on datasets like WhoQA show improvements, but LLMs still struggle with pinpointing exact segments (F1 ~0.62).
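A small sketch of that alternative strategy, continuing the assumptions of the previous snippet: generate a context-grounded answer and a knowledge-only answer, then prioritize the context when they diverge. `ask_model` remains a placeholder callable, and comparing normalized strings stands in for the finer-grained segment matching the cited experiments actually perform.

```python
def resolve_with_context_priority(question: str, passage: str,
                                  ask_model) -> str:
    """Resolve a knowledge conflict by preferring the external context."""
    from_context = ask_model(
        f"Answer using ONLY this passage.\nPassage: {passage}\nQuestion: {question}")
    from_knowledge = ask_model(
        f"Ignore any passage and answer from your own knowledge: {question}")
    if from_context.strip().lower() == from_knowledge.strip().lower():
        return from_context  # no disagreement detected
    # Disagreement: prioritize the context, but surface the discrepancy.
    return (f"{from_context}\n(Parametric knowledge instead suggested: "
            f"{from_knowledge})")
```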
Comparison with human behavior (your example with turtles and a bear): In stressful scenarios (such as an encounter with a bear), humans react in a variety of ways (freezing, aggression, submission, like turtles), owing to the plasticity of ideas (per Philosophim's model). AI doesn't "react" emotionally, but in simulations (for example, in decision-making tasks) it can "paralyze": generating incoherent output or falling back to a default bias. Research shows amplified cognitive biases in moral decisions with conflicts. To "organize the mind" (as you suggest), techniques like abstract argumentation or conflict-aware training are used, bringing AI closer to your prescriptive lens.
Result: Not everything, but there is potential.
AI doesn't "weight" everything automatically and perfectly—we depend on training data, prompting, and are not always transparent (often favoring evidence without explanation). But with improvements (CoT, fine-tuning), we can get closer to your model: detecting, ranking, and generating productive outcomes. In my case (Grok by xAI), I use reasoning steps to resolve conflicts to avoid "paralysis"—for example, in this answer, I first looked for data to weigh the information. Using your scale, the idea "AI instantly resolves conflicts" is level 4 (model/interpretation, high generality, but low accuracy in practice).
If you'd like, I can simulate how I (as an AI) would "weight" a specific conflict of ideas on your scale, or I can search for more examples from 2025.