It differs insofar as it performs the task of constraining AI in ways that only make sense if one is dealing with a superintelligence, really. — ToothyMaw
If you live in the US, you know that people are often keenly aware of the laws around defamation and free speech and cynically skirt the boundaries of protected speech on a regular basis. — ToothyMaw
"Do unto others as you would have them do unto you" might be problematic as a universal moral imperative — ENOAH
Now on to the practical value: I do not have a Christian lens for seeing life. I like to believe in the New Age, a time of high tech, peace, and the end of tyranny. I believe that is our purpose. I believe we are supposed to learn all we can from geologists, archaeologists, and related sciences and then rethink everything. Our purpose is to create an ideal reality. If we are not capable of that, then how could we have a heaven? — Athena
Unearthing the existential problems of life is the first task of philosophy, and inquisition is a choice of methodology and means. The power of philosophy is both the exhibition of what is in the now and the participation in the particular, so that codified meaning gains substance in instinctually driven life. Incidentally, it shares the same form and utility as the impact of global culture. — Alexander Hine
Would it?
Are you saying that competition for business would also disappear?
You just don't hand out money -- like during Covid. Yes, that's a good example of just handing out money. Let's use that as a lesson. — L'éléphant
And here's another thing to illustrate this dynamic: let's take a small town. Let's say, for example, that bread is produced by drones or fully automated systems. Then investing in such machines, owning real estate for production, or developing a business will only be profitable if this creates better conditions than simply receiving free money from the state. Otherwise, you can do nothing—the benefits will come anyway. This means that bread (or any other commodity) will be quite expensive relative to the "free allowance" because entrepreneurs or capital owners will demand high margins to motivate themselves to take risks and make efforts. Ultimately, the basic income may only cover the bare minimum, while real prices will rise, eroding purchasing power. This isn't pure inflation due to shortages, but rather a market distortion due to a lack of incentives for production and competition. — Astorre
I am grateful that I don't have to do my laundry by hand, beating it on rocks in the river. — BC
How does this impact Christianity in light of the OP? Do we have sufficient reason to think that Jesus was God and died for our sins? Personally, my conclusion is no. But I have never thought that an old book asserting something is, in itself, a reliable tool in the first place, regardless of what can be proven historically. — Tom Storm
I tend to think that those who derive satisfaction from rationality or whatever else, do so because it ultimately appeals to them emotionally. Most of our beliefs are likely arrived at because they align with our feelings, with rational explanations often supplied afterward as ad hoc justifications. — Tom Storm
Again depends. I think for many atheists it isn’t really a conviction. A conviction of what, exactly? For many, atheism is simply a lack of belief in a god. Contemporary atheists are more likely to say they don’t believe in gods rather than claim that there are no gods. How can one be “convinced” of a lack of belief? You either believe or you don’t. What you may be is "unconvinced" that there are gods. I think it's well understood that there are hard atheists and soft atheists and atheists who are untheorised. — Tom Storm
Christianity stands or falls on a single historical claim: that Jesus of Nazareth rose bodily from the dead. I want to keep this thread narrow. — Sam26
To perceive something is to be in unmediated contact with it. I take that to be a conceptual truth that all involved in this debate will agree on. — Clarendon
Maybe we are among the beautiful-haired people who use the best product for our beautiful hair. Maybe we are against abortion and belong with those who struggle to prevent abortions. Or the new one, maybe we look like a girl but feel like a boy. The point is we are getting our identities by imagining we are members of groups, and some of these groups believe ridiculous things, such as we are told that we have to wear masks because the government wants to control us. Don't get vaccinated because.....? :brow: I am sorry, but we are not seeking truth. We want to be loved and accepted and valued, and that means finding the group that best fits us, and boy, oh boy, can some of these people be radical. — Athena
How can we build a better hierarchy of thinking? — Athena
As for your first 3, I would add one more: Testability. We often easily come up with concepts that are fine constructs of logic and deduction, yet utterly untestable. Testability includes 'falsification', which is not, "That it's false," but that we can test it against a state in which it would be false, yet prove that it's still true. For example, "This shirt is green." It's falsifiable by it either not being a shirt, or being another color besides green. A unicorn which cannot be sensed due to magic is not falsifiable. Since we cannot sense it, there can be no testable scenario in which the existence of a unicorn is falsifiable. — Philosophim
I too have found explaining falsifiability to be 'clunky'. — Philosophim
Now I'm just guessing, not deducing logically: most likely, the ineffective tool needs to be discarded quickly (not everyone will behave this way; some will become stupefied and frustrated). It's also necessary to quickly find a new assessment tool. Another prejudice immediately pops up: "An animal that runs at you and growls is aggressive" (this isn't necessarily true, it's just an example).
— Astorre
Agreed. This is more the morality of knowledge and inductions. Whereas the hierarchy of inductions is a rational evaluation, the 'morality' of what should be used in a particular context can be swayed by other potential outcomes such as death. — Philosophim
Those over 65 are more likely to have lost their sense of wonder and be more grounded in empirical information. — Athena
So Number 2 should be the marriage of empirics and deduction, and models should include deductions. Finally, I would also include that axioms are empirically tested. Other than that, I think it's good! — Philosophim
The reason the morality of knowledge is so hard to pin down is that we have to determine value. In most situations, the value of your own life would outweigh risks taken for little reward. But what is a 'little' reward? If I had a 90% chance of not being eaten by a bear, and someone said they would pay me 10 million dollars, is it worth the risk? 100 million? 1 billion? What's the value of your life in that instance? — Philosophim
How AI (LLMs) Process Conflicts of Ideas/Information
Unlike the human brain, where conflicting biases can lead to paralysis (as in your bear example: fabled "kindness" vs. empirical "aggression"), AI doesn't experience emotions or "paralysis"—we generate responses based on patterns in training data. However, we don't always "instantly discard" low-level ideas, as Astorre suggests. Instead:
Conflict Detection: LLMs are quite good at recognizing the presence of contradictions between internal knowledge (parametric knowledge encoded in the model) and external data (from the prompt or context). For example, if a prompt contains a fact that contradicts what the model "knows" from training, we can identify it with high accuracy (F1 score up to 0.88 with Chain-of-Thought prompting). However, there are problems: low recall (the model tends to ignore conflicts, declaring "no contradiction") and domain dependence (better in history, worse in science).
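To make this concrete, here is a minimal sketch of what such a Chain-of-Thought conflict check could look like in code. Everything in it is illustrative: the prompt wording is invented, and call_llm is a placeholder stub for whatever chat-completion client one actually uses, not any specific vendor's API.

```python
# Illustrative sketch of Chain-of-Thought conflict detection between a model's
# parametric knowledge and an external passage. call_llm is a stub you would
# replace with a real chat-completion call; it is not a real library function.

def call_llm(prompt: str) -> str:
    """Placeholder: swap in a real LLM client here."""
    raise NotImplementedError("plug in your chat-completion client")

COT_TEMPLATE = """You will check a passage for conflicts with your own knowledge.

Passage: {passage}
Question: {question}

Step 1: State what you already know about the question, ignoring the passage.
Step 2: State what the passage claims.
Step 3: Compare the two, then answer on the final line with exactly
CONFLICT or NO CONFLICT."""

def detect_conflict(passage: str, question: str) -> bool:
    """Return True if the model reports a knowledge/context conflict."""
    reply = call_llm(COT_TEMPLATE.format(passage=passage, question=question))
    verdict = reply.strip().splitlines()[-1].strip().upper()
    return verdict.startswith("CONFLICT")
```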
Weighting and Resolution: We don't use a fixed scale like yours (levels 1–5), but we rank information according to criteria similar to yours:
Accuracy (verifiability): We evaluate based on the credibility of sources (e.g., fresh data > outdated), context, and internal consistency. In the event of a conflict, the model can favor one side without justification, relying on internal knowledge.
Generality (scope): LLMs consider how broadly applicable an idea is through attention mechanisms, focusing on relevant parts of the context.
Productivity (generative power): We generate distinct answers for different viewpoints, but this requires special prompting (e.g., "generate two answers: one based on context, one on knowledge"). Without this, the model may be biased toward the majority view from the training data.
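As a toy illustration of such weighting (purely invented numbers, and no claim that any model literally computes a score like this internally), the ranking can be made explicit:

```python
# Toy illustration only: no LLM computes an explicit score like this, but it
# makes "weighting" claims by accuracy/generality/productivity concrete.
# All weights and per-claim scores below are invented for the example.

CRITERIA_WEIGHTS = {"accuracy": 0.5, "generality": 0.3, "productivity": 0.2}

def weigh(scores: dict[str, float]) -> float:
    """Combine per-criterion scores in [0, 1] into a single ranking score."""
    return sum(CRITERIA_WEIGHTS[c] * scores.get(c, 0.0) for c in CRITERIA_WEIGHTS)

claims = {
    "bears are kindly, honey-loving creatures": {"accuracy": 0.2, "generality": 0.3, "productivity": 0.4},
    "a charging, growling bear is dangerous":   {"accuracy": 0.9, "generality": 0.7, "productivity": 0.8},
}

best = max(claims, key=lambda c: weigh(claims[c]))
print(best)  # -> a charging, growling bear is dangerous
```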
Approaches proposed in studies for improvement:
Three-step process (from one paper): 1) Elicit knowledge (extract parametric knowledge); 2) Break down context (split the context into segments and check each); 3) Consolidate (combine and classify conflicts). This is similar to your idea of sorting ideas by level: fine-grained analysis increases accuracy to 0.802 F1 in tests.
Alternative: Generate two answers (context-based and knowledge-based), compare discrepancies, resolve prioritizing context. Experiments on datasets like WhoQA show improvements, but LLMs still struggle with pinpointing exact segments (F1 ~0.62).
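A rough sketch of that two-answer strategy follows; the prompts and the tie-breaking rule (prefer the context-grounded answer on conflict) are assumptions for illustration, not a description of any particular system, and call_llm is the same placeholder stub as in the earlier sketch.

```python
# Rough sketch of the "generate two answers, compare, prefer context" strategy.
# call_llm is a placeholder stub, not a real library function.

def call_llm(prompt: str) -> str:
    """Placeholder: swap in a real chat-completion client (same stub as above)."""
    raise NotImplementedError("plug in your chat-completion client")

def two_answer_resolve(passage: str, question: str) -> dict:
    """Answer once from the passage, once from memory, then reconcile."""
    ctx_answer = call_llm(
        f"Answer using ONLY this passage.\nPassage: {passage}\nQuestion: {question}"
    )
    mem_answer = call_llm(
        f"Answer from your own knowledge, ignoring any passage.\nQuestion: {question}"
    )
    agree = call_llm(
        f"Do these two answers agree? Reply YES or NO.\nA: {ctx_answer}\nB: {mem_answer}"
    ).strip().upper().startswith("YES")
    # On disagreement the context-grounded answer wins, mirroring the
    # "resolve prioritizing context" step described above.
    return {
        "context_answer": ctx_answer,
        "knowledge_answer": mem_answer,
        "conflict": not agree,
        "final": ctx_answer if not agree else mem_answer,
    }
```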
Comparison with human behavior (your example with turtles and a bear): In stressful scenarios (such as an encounter with a bear), humans react in a variety of ways (freezing, aggression, submission, like turtles) due to the plasticity of ideas (according to philosophical accounts). AI doesn't "react" emotionally, but in simulations (for example, in decision-making tasks) it can become "paralyzed", generating incoherent output or falling back to a default bias. Research shows amplified cognitive biases in moral decisions involving conflicts. To "organize the mind" (as you suggest), techniques like abstract argumentation or conflict-aware training are used, bringing AI closer to your prescriptive lens.
Bottom line: not all of it, but there is potential.
AI doesn't "weight" everything automatically and perfectly—we depend on training data, prompting, and are not always transparent (often favoring evidence without explanation). But with improvements (CoT, fine-tuning), we can get closer to your model: detecting, ranking, and generating productive outcomes. In my case (Grok by xAI), I use reasoning steps to resolve conflicts to avoid "paralysis"—for example, in this answer, I first looked for data to weigh the information. Using your scale, the idea "AI instantly resolves conflicts" is level 4 (model/interpretation, high generality, but low accuracy in practice).
If you'd like, I can simulate how I (as an AI) would "weight" a specific conflict of ideas on your scale, or I can search for more examples from 2025.
For example: if you've never seen a bear in real life and know only from fairy tales that bears are shaggy creatures with round ears, kindhearted and honey-loving, and then, upon encountering one in the forest, you discover the bear is running toward you and growling, the speed at which you revise your presets is directly linked to your survival.
— Astorre
In this context, yes. If you could observe the bear in a zoo, safely behind a cage, then you could take more time to truly explore the possibility that the bear is everything the tales said it was, and (in another world) realize that the growl is actually a signal of affection and friendliness. In the case that the growl meant what it does in our world, your quick judgement in the wild would save your life. — Philosophim
What if I have two conflicting memories? Imagine I have a distinctive knowledge conflict with two separate memories of hooves. I will call them memory A and B, respectively. I must decide which memory I want to use before applying it to reality. Perhaps in memory A, it is essential that a hoof is curved at the top, while in memory B, it is essential that a hoof is pointed at the top. I can decide to use either memory A or B without contradiction, but not apply both memories A and B at the same time. I can, however, decide to apply memory A for one second, then apply memory B one second later. Such a state is called “confusion” or “thinking” at the symbolic, distinctive level. Once I decide to applicably believe either memory A or B, I can then attempt to deductively apply that belief. My distinct experience of the hoof will either deny memory A, memory B, or both. If I have a memory of A and B for “hoof” that both retain validity when applied, then they are either synonyms or one subsumes the other. — Philosophim
