• apokrisis
    7.3k
    Do you not apprehend the necessity of a "being" which applies these constraints?Metaphysician Undercover

    If I visited another planet and found all these ruins and artefacts, I would feel they could only be explained as machinery constructed by a race of intelligent beings. That would be a logical inference.

    But if I visited another planet and found only mountains and rivers, plate tectonics and dissipative flows, then I would conclude something else. An absence of intelligent creators. Only the presence of self-organising, entropy-driven physical structure.

    This demonstrates very clearly that you do not understand final cause, nor do you understand freewill.Metaphysician Undercover

    I simply don’t accept your own view on them. That’s different.

    However, I think that Peirce had very little to say about either of these, and you are just projecting your misunderstanding of final cause and free will onto Peirce's metaphysics.Metaphysician Undercover

    He emphasised the role of habit instead. Constraints on action that explain both human psychology, hence “freewill”, and cosmology if the lawful regularity of nature is best understood as a habit that develops.

    So it is usually said he was very Aristotelean on finality. But he also wanted to show that any “creating mind”, was part of the world it was making, not sitting on a throne outside it.

    But the fact of the matter is that the existence of artificial things is much more accurately described by the philosophy of final cause and freewill, and naturalism can only attempt to make itself consistent with final cause by misrepresenting final cause.Metaphysician Undercover

    So we agree there for quite different reasons. :grin:

    Furthermore, I never described any "collection of instants", nor did Newton rely on any such conception.Metaphysician Undercover

    OK, I accept Newton’s arguments were more complex. He had the usual wrestle over whether reality was at base continuous or discrete. Were his infinitesimals/fluxions always still a duration, or did they achieve the limit and become points on a line?

    But his insistence on time as an external absolute was how he could also insist that all the Universe shared the same instant. Simultaneity.

    And note that the argument I’m making seeks to resolve the continuous-discrete debate via the logic of vagueness. Neither is seen as basic. Instead both are opposing limits on possibility. And this is the relativistic view. Continuity and discreteness are never completely separated in nature. But a relative degree of separation is what can develop. You can arrive at a classical state that looks Newtonian. Time as (almost) a continuous duration while also being (almost) infinitely divisible into its instants.

    That is the issue which modern physics faces, it does not respect the substantial difference between past and future.Metaphysician Undercover

    That is where incorporating a thermodynamic arrow of time into physics makes a difference. It breaks that symmetry which comes from treating time as a number line-like dimension - a series of points that you could equally read backwards or forwards.

    Once time is understood in terms of a thermal slope, an entropic finality, then the past becomes different from the future.

    What has happened - the past - now constrains what is possible as the future. Once a ball rolls halfway down the slope, that is half of what it could do - or even had to do, given its finality. Its further potential for action is limited by what is already done.
  • apokrisis
    7.3k
    A graph of the reciprocal function might help give a better visual representation of my argument about the start of time.

    function-reciprocal.svg

    So note that the reciprocal function describes a hyperbola. And we can understand this as representing the complementary quantum axes that define uncertainty (indeterminism, vagueness). Let's call the x axis momentum, the y axis location. In the formalism, the two values are reciprocal. Greater certainty about one direction increases the uncertainty about the other. The two aspects of reality are tied by this reciprocal balancing act.

    Now think of this hyperbola as representing the Universe in time - its evolution from a Planck scale beginning where its location and momentum values are "the same size". In an exact balance at their "smallest scale". The point on the graph where y = 1; x = 1.

    Note that this is a value of unit 1. That is where things crisply start. It is not 0 - the origin point.

    Now if you follow the evolution of the hyperbola along its two arms, you can see that in the infinite future the division between momentum and location becomes effectively complete. The curves are asymptotic, so they eventually seem to kiss the x and y axes. They seem to become the x and y axes after infinite time.
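
    A minimal numerical sketch of this reciprocal yoking might make it concrete (Python, my own illustration - the names are mine, not anyone's formalism): every point on y = 1/x keeps the product xy pinned at 1, and marching out along one arm drives the other coordinate asymptotically toward its axis without ever reaching it.

```python
# The hyperbola y = 1/x: certainty gained in one direction is exactly
# paid for by uncertainty in the other, since x*y stays locked at 1.
def reciprocal_pair(x: float) -> tuple[float, float]:
    """Return the (x, y) point on the unit hyperbola y = 1/x."""
    return x, 1.0 / x

# The balanced "unit 1" start point: both values the same size.
x0, y0 = reciprocal_pair(1.0)
print(x0, y0)  # both 1.0 - no difference yet

# Follow one arm outward: y shrinks toward the x axis but never
# reaches 0, while the product x*y remains 1 throughout.
for x in [10.0, 1e3, 1e6, 1e12]:
    _, y = reciprocal_pair(x)
    print(f"x = {x:g}, y = {y:g}, x*y = {x * y:g}")
```

    The point of the sketch: there is no x for which y actually becomes 0, which is the "asymptotes never quite touch the axes" claim above in numerical form.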

    And then the catch. If you are an observer seeing this world way down the line, where you believe the x and y axes describe the situation, then retrospectively you will project the x and y axes back to the point where they meet at the origin.

    Hey presto, you just invented the problem of how something came from nothing, how there must be a first moment, first cause, because everything has to have started counting its way up from that common origin point marked on the graph.

    A backwards projection of two orthogonal lines fails to read that it is really tracing a single reciprocally connected curve and is thus bamboozled into seeing a point beyond as where things have to get going from. It becomes the perennial problem for the metaphysics of creation.

    But if you instead take the alternative view - the reciprocal view that is as old as Anaximander - then the beginning is the beginning of a counterfactual definiteness. And that takes two to tango. Both the action and its context - as the primal, unit 1, fluctuation - are there together as the "smallest possible" start point.

    Where y = 1; x = 1 is the spot where there is both no difference, and yet infinitesimally a difference, in a distinction between location and momentum, or spacetime extent and energy density content. It is the cusp of being. And a full division of being - a complete breaking of the symmetry - is what follows.

    Looking back from the infinite future, the starting point might now look like y = 0; x = 0. An impossible place to begin things. But there you go. It is just that you can't see the curve that is the real metaphysical story.

    That kind of absolute space and time - the one where the x and y axes are believed to represent the actual Cartesian reality in which the Universe is embedded - is just a projection of an assumption. An illusion - even if a usefully simple model if you want to do Euclidean geometry or Newtonian mechanics.

    The Cosmos itself isn't embedded in any such grid. Instead it is the curve that - by the end of its development - has fully realised its potential for being asymptotically orthogonal. So close to expressing a state of Cartesian gridness, Euclidean flatness, Newtonian absoluteness, that the difference doesn't make a damn.

    It gets classically divided at the end. But it starts as a perfect quantum yo-yo balance that is already in play from the point of view of that (mistaken) classical view of two axes which must meet at the big fat zero of an origin where there is just nothing.
  • Metaphysician Undercover
    13.2k
    But If I visited another planet and found only mountains and rivers, plate tectonics and dissipative flows, then I would conclude something else. An absence of intelligent creators. Only the presence of self organising entropy-driven physical structure.apokrisis

    Who would be applying constraints on this planet then? It would not be appropriate to refer to the "application of constraints" unless there is something which is applying constraints. You have a habit of talking in this way, as if there is something, some being, applying constraints, or acting in some other intentional way, but when questioned about that you tend to just assume that constraints are applying themselves. Then you proceed into nonsense about self-organizing systems, as if inanimate matter could organize itself to produce its own existence from nothing.

    He emphasised the role of habit instead. Constraints on action that explain both human psychology, hence “freewill”, and cosmology if the lawful regularity of nature is best understood as a habit that develops.apokrisis

    A habit is the propensity of potential to be actualized in a particular way. What is fundamental to "potential" is that no particular actualization is necessary from any specific state of potential. If a specific state of potential tends to actualize in a particular way (habit), there must be a reason for this. The reason cannot be "constraints on action", because the nature of potential is such that no particular actualization is necessary, and constraints would necessitate a particular action negating the nature of "potential", as having no particular actualization necessary. Therefore we must dismiss "constraints on action" as an explanation for habit, and allow that each instance of actualizing a potential must be freely decided, like a freewill action, to maintain the essence of "contingent" as not-necessary.

    This is the difference between a habitual act of a living being, and the necessary act of an inanimate object. The habitual act must be "decided" upon, at each instance of occurrence, or else we cannot truthfully say that there is the potential to do otherwise. So "potential" is excluded from the habitual act if the habitual act is caused by constraints, because the constraints would necessitate the action, and there would be no possibility of anything other than that action. If there is something, such as a being, which applies the constraints, to direct the activity, allowing that the potential might be actualized in some other way if the constraints were not applied, then the not-necessary nature of potential is maintained by the choices of that intentional being applying the constraints. But this implies that some intentional being, acting with final cause is applying the constraints to suit its purpose.

    And note that the argument I’m making seeks to resolve the continuous-discrete debate via the logic of vagueness.apokrisis

    As I've explained a number of times now in this thread, the logic of vagueness does not solve any problems. It simply represents them as unsolvable, so that we might leave them and not concern ourselves with them, thinking it impossible to resolve them, instead of inquiring toward the truth of the matter.
  • apokrisis
    7.3k
    It would not be appropriate to refer the "application of constraints" unless there is something which is applying constraints.Metaphysician Undercover

    You are the one referring to the "application". And the obvious answer from my point of view is that the constraints are self-applied. The regularity of habits develops out of nature's own set of possibilities.

    Then you proceed into nonsense about self-organizing systems, as if inanimate matter could organize itself to produce its own existence from nothing.Metaphysician Undercover

    Nonsense? Or science?

    Cosmology shows how everything is self-organising back to the Planck scale. I provided you with the hyperbolic curve as a model of how there need be no "nothing" before this self-organising was already going.

    A habit is the propensity of potential to be actualized in a particular way. What is fundamental to "potential" is that no particular actualization is necessary from any specific state of potential.Metaphysician Undercover

    That is why we are talking about habits developing. At first, everything would try to happen willy-nilly. Then later, things would self-organise into an efficient flow.

    If someone shouts fire in the cinema and everyone rushes for the same door, lots of bodies trying to do the same thing at once have the effect of cancelling each other out. There is a chaotic jam and nobody gets anywhere.

    But if the crowd organise into a flow, then everyone can get out in the fastest way possible.

    Rules emerge like this. Just think about how traffic laws emerged to avoid everyone driving like a panicked crowd. Efficient flows always beat inefficient chaos. It is nature's finality. The least action principle.
  • Dan Cage
    12
    Why does ANYTHING HAVE to be resolved? Perhaps “the truth of the matter” is that so-called life, whether by design or accident, is about dealing with the irreconcilable regardless of clarity OR vagueness. Why is it necessary to EITHER embrace or dismiss? It is an individual’s choice. There exists middle ground where one could be open to all possibilities, not just the binary ones.
  • apokrisis
    7.3k
    There exists middle ground where one could be open to all possibilities, not just the binary ones.Dan Cage

    All possibilities are binaries if they are to be clear and not vague. To take a direction, you have to be moving away from whatever is its counterfactual.

    Possibilities come in matched pairs. Or to the degree that they don't, then - as a possibility - they are vague.
  • Dan Cage
    12
    All possibilities are binaries if they are to be clear and not vague. To take a direction, you have to be moving away from whatever is its counterfactual.apokrisis

    True if limited to strictly human thinking. But absolute clarity is unavailable to humans whose emotions, agendas and biases distort clarity. Taking ALL directions seems the way to go. If not...

    Possibilities come in matched pairs. Or to the degree that they don't, then - as a possibility - they are vague.apokrisis

    ...the “matched pairs” way of thinking self-imposes the limit of two choices, BOTH of which MUST include a degree of vagueness since true clarity is elusive at best. Other perspectives persist.
  • apokrisis
    7.3k
    True if limited to strictly human thinkingDan Cage

    Does nature offer counter-examples? What are they?
  • Dan Cage
    12
    Does nature offer counter-examples? What are they?apokrisis

    Nature??? To paraphrase Obi-Wan, “Your perceptions can deceive you. Don’t trust them.”

    What’s the cornerstone of philosophy? Question everything! Linear thinkers succumb to the notion that “nature” is absolute. I submit it is not beyond question, nor is anything, especially human “definitions”. In order to question what we perceive we must first question our own so-called nature.
  • apokrisis
    7.3k
    What’s the cornerstone of philosophy? Question everything!Dan Cage

    Uh huh.

    Or around these parts, question everything and believe nothing. :smile:

    In order to question what we perceive we must first question our own so-called nature.Dan Cage

    That is certainly Epistemology 101.
  • Dan Cage
    12
    ...around these parts, question everything and believe nothing.apokrisis

    Hmmm... I question everything and believe nothing, so it sounds like I fit in. But I do not recognize the Epistemology label so I have not knowingly subscribed to it. I represent no specific ideology, philosophy, religion or science, at least not willingly. To me (and to all humans, I should hope), they are all fallible. Why the dismissiveness?
  • apokrisis
    7.3k
    But I do not recognize the Epistemology label so I have not knowingly subscribed to it.Dan Cage

    That's just what they called the introductory epistemology class back when I was little. Hume, Berkeley, Descartes, Kant. The usual crew.
  • Dan Cage
    12
    That's just what they called the introductory epistemology class back when I was little. Hume, Berkeley, Descartes, Kant. The usual crew.apokrisis


    Interesting and informative, thank you.

    I have encountered occasional quotes from a few philosopher-types over the course of my 64 years... unintentionally. Plato, Dante, Sartre, to name a few. But have never attended a philosophy “class”. And I have agreed with some of those quotes, disagreed with others, not that my opinion matters.

    My discipline is self-taught, self-imposed, though not without influence, of course. If it happens to match, at least in part, an existing line of thought, it is coincidence.

    I seek a forum of what I hope is “original” thought that goes beyond what has been thought to date. What I’ve encountered here and on Arktos, so far, has been debates over which “established” philosophy is closest to being, ahem... “correct”. “Right” and “wrong” are strictly human concepts and are, therefore, incomplete at best and invalid at worst.

    Must I don the cape of my favorite philosophical crusader in order to be “worthy” of this forum? I know that’s not up to you, but some guidance may be helpful. Non-belief is vastly different from disbelief. I am open to anything, but I find very few human thought-inventions compelling. Is there a label for that?
  • apokrisis
    7.3k
    Must I don the cape of my favorite philosophical crusader in order to be “worthy” of this forum?Dan Cage

    Things are pretty relaxed on that score - at least as the price of entry. But philosophy is a dialectical contest. So if you say something easy to bash, then expect that gleeful bashing. That is the price of staying. :up:

    I am open to anything, but I find very few human thought-inventions compelling. Is there a label for that?Dan Cage

    Depends on how many philosophical positions you have actually encountered, and whether you made a sufficiently compelling case against them really.

    Skeptic would be a good thing to be labelled. It would mean you have mastered the basics of critical thought.
  • Dan Cage
    12
    Skeptic would be a good thing to be labelled. It would mean you have mastered the basics of critical thought.apokrisis


    Excellent advice!

    But I’m not out to “bash” or be “bashed”, nor to “win” an argument in the process. Learning and growth are infinite. I’m content simply to expand. If gain-saying is the winning “formula” here, it would be a waste to participate. I will continue to monitor.

    Thanks much for the enlightening dialectic!
  • Metaphysician Undercover
    13.2k
    You are the one referring to the "application". And the obvious answer from my point of view is that the constraints are self-applied.apokrisis

    That use of "application" was a quote from your post. Nevertheless, I've explained how "self-applied constraints" is illogical, involving contradiction. If the constraints are fixed constraints (what the laws of physics are generally believed to describe) then there can be no potential to behave in any other way, and the constraints are not applied, they are just there. If the constraints are capable of applying themselves, then there must be freedom of application inherent within the constraints themselves. This would mean that there is an element of freedom inherent within the constraint, and this is contradictory.

    Nonsense? Or science?

    Cosmolology shows how everything is self-organising back to the Planck scale. I provided you with the hyperbolic curve as a model of how there need be "nothing" before this self-organising was already going.
    apokrisis

    Such self-organization is not science; it's you attempting to produce a metaphysics which will account for what science gives us in a naturalistic way. And you refuse to accept the contradictions inherent within your naturalistic metaphysics as indication that you ought to move along toward a more acceptable metaphysics. Instead you'd rather appeal to an ontology of vagueness which allows you to leave the contradictions where they lie.

    That is why we are talking about habits developing. At first, everything would try to happen willy-nilly. Then later, things would self organise into an efficient flow.apokrisis

    I see you reject the principle of sufficient reason as well as the principle of non-contradiction. Do you see why I am fully justified in referring to your metaphysics as nonsense?

    Possibilities come in matched pairs.apokrisis

    This is a key point you seem to be missing about possibility, or potential. Potential is completely incompatible with the bivalent system, and therefore needs to be represented in a completely different way. Peirce clearly pointed this out. Possibility is something general. If it is reduced to a particular possibility such that we can represent its binary opposite, we are not representing the possibility properly, because possibility always relates to numerous things, not one thing. If actualizing possibility X means not actualizing possibility Y, this does not mean that X is the opposite of Y. But this is also why the idea of infinite potential, or possibility, is nonsensical. It leaves nothing actual to make the choice as to which possibility will be actualized. A possibility does not have the capacity to actualize itself.
  • apokrisis
    7.3k
    If the constraints are fixed constraints (what the laws of physics are generally believed to describe)...Metaphysician Undercover

    The current approach in cosmology and particle physics would be to see any global regularity in terms of emergent constraints. That is why symmetry and symmetry breaking are at the heart of modern physics. They describe the form of nature in terms of the complementary emergent limits on free actions. A probabilistic view where change is change until change can no longer make a difference. At that point, the system is "stable" and its equilibrium balance can be encoded as "a universal law".

    Potential is completely incompatible with with the bivalent system, and therefore needs to be represented in a completely different wayMetaphysician Undercover

    Yes. That is the distinction I have made all along. Potential would be simply a vagueness. The PNC fails to apply. And possibility is the next step along. A possibility is a concrete option. The PNC applies in that to go in one direction is not to go in its "other" direction.

    A possibility is an actuality in that regard. A generalised notion of potentiality in fact. There is now a world, an embedding context or backdrop, where every act is matched by a "reaction". To push is to encounter resistance. To move is to depart.

    It is all made actual and concrete by the fact that every possibility is bivalent. A direction is asymmetric as it breaks - and hence reveals - an underlying symmetry.

    Vagueness is where there just isn't any such general backdrop to local events or acts. If you are in a canoe in a thick fog on a still lake, do you move or are you still? The PNC can't apply unless there is some context to show that a change is happening, or even not happening.

    But when the fog lifts, we have reference points. We are either moving or not moving, as the clearly bivalent and complementary options now. We have a choice between the two opposed possibilities. The PNC becomes a legitimate rule.

    Possibility is something general. If it is reduced to a particular possibility such that we can represent its binary opposite, we are not representing the possibility properly, because possibility always relates to numerous things, not one thing.Metaphysician Undercover

    Yep. You dispute the distinction between vague potential and crisp possibility and then repeat the basic argument.

    As Peirce says, the trajectory is from Firstness to Thirdness, from vagueness to generality. Actuality as a set of concrete local possibilities emerges via the contextual regularisation of a vagueness, an unformed potential, by generalised habits. A prevailing state of global constraints.

    The generality of a backdrop is needed as the symmetric reference frame that orientates local possibilities as the bivalent symmetry-breakings, or asymmetries. That is what I have said all along.

    And hence you need the further category of vagueness to stand behind this evolutionary development. The generality of a backdrop or symmetry state has to arise out of "something" too.

    If actualizing possibility X means not actualizing possibility Y, this does not mean that X is the opposite of Y.Metaphysician Undercover

    It is not X and Y that speaks to bivalence. It is X and not-X.

    A possibility does not have the capacity to actualize itself.Metaphysician Undercover

    Sure. In your mechanical model of reality.

    The Peircean model says vagueness is only regulated. So there is always chance or spontaneity to affect things. Regulation is asymptotic. It can approach the limit but never actually completely reach it. So infinitesimal chance always remains in the system to tip the balance.

    That is why quantum mechanics can work. Or any other form of spontaneous symmetry breaking in physics.

    You only need a system to be symmetrically poised between its two directions - the choice over a concrete action. Something is always going to tip the balance. Nature just fluctuates at a fine-grain level and chance will give the poised system its nudge that then actualises the possibility.

    It is the old paradox of a ball balanced on the peak of a rounded dome or pencil balanced on its sharp tip.

    Newtonian physics says a perfectly balanced ball or pencil could never topple. Nature says that - quantum mechanically - the world is just never that still. There will always be a slightest vibration. And the slightest vibration is all that is needed for the ball or pencil to spontaneously break its symmetry and so actualise a possibility.
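
    That instability is easy to sketch numerically. The Python toy below (my own construction, not anyone's model in this thread) linearises the balanced pencil as theta'' = theta: a perturbation of exactly zero stays zero forever, while any nonzero jiggle, however tiny, grows exponentially and topples the pencil.

```python
def topple_time(theta0: float, dt: float = 0.001, t_max: float = 40.0):
    """Integrate the linearised inverted pendulum theta'' = theta.

    Returns the time at which |theta| exceeds 1 radian ("toppled"),
    or None if the pencil is still balanced at t_max.
    """
    theta, omega, t = theta0, 0.0, 0.0
    while t < t_max:
        theta += dt * omega   # update angle from angular velocity
        omega += dt * theta   # the unstable feedback: theta drives omega
        t += dt
        if abs(theta) > 1.0:
            return t
    return None

print(topple_time(0.0))    # perfectly balanced: never topples
print(topple_time(1e-12))  # the slightest vibration: topples anyway
```

    The exponential growth means the topple time depends only logarithmically on the size of the nudge, which is why "the slightest vibration is all that is needed".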
  • jgill
    3.9k
    In the world of pure mathematics, a vector field in the complex plane describing a function F(z) having an "indifferent" fixed point a=F(a) might show the enormous differences of displacement as z=a is "tipped" a tiny bit to one side or the other. On one side it might immediately be iterated back toward a, while on the other side it might be sent far out in the plane. It often happens, however, even in the latter case the iterated point will eventually be brought back to the neighborhood of a. So this would be a kind of symmetry breaking in the complex plane. This happens with what is called a parabolic linear fractional transformation, as an example.
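
    jgill's parabolic case can be played with concretely. Since the K-parameterised map he has in mind isn't reproduced above, here is a stand-in sketch in Python using the textbook parabolic Mobius map f(z) = z/(1+z), whose indifferent fixed point is 0: starting just on one side of the fixed point the orbit creeps straight home, while just on the other side it is first flung far out into the plane before eventually being drawn back toward the fixed point.

```python
def orbit(z: complex, n: int) -> list[complex]:
    """Iterate the parabolic Mobius map f(z) = z/(1+z), n times."""
    out = [z]
    for _ in range(n):
        z = z / (1 + z)
        out.append(z)
    return out

# One side of the indifferent fixed point 0: a tame crawl toward 0.
tame = orbit(0.1, 50)

# The other side: the orbit is first sent far out in the plane...
wild = orbit(-0.1 + 0.001j, 50)
far = max(abs(z) for z in wild)
print(far)  # a large excursion before the return

# ...yet both orbits end up back in a small neighbourhood of 0.
print(abs(tame[-1]), abs(wild[-1]))
```

    The closed form f^n(z) = z/(1 + nz) shows why: with z just left of 0 the denominator passes near zero around n = 10, producing the far excursion, after which the orbit is reeled back in.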
  • apokrisis
    7.3k
    In the world of pure mathematics, a vector field in the complex plane describing a function F(z) having an "indifferent" fixed point a=F(a) might show the enormous differences of displacement as z=a is "tipped" a tiny bit to one side or the other.jgill

    Yes. But what if this non-linear sensitivity is being regulated by a parameter that is a reciprocal relation such as y=1/x? And so yx = 1?

    A tiny tip one way is yoked to a tiny tip that compensates. Unit 1 has been fixed as the identity element, the common departure point. The indifference lies in not giving it any particular value to denote some quantified scale. It is now always just a generalised quality - the way an identity element behaves as a symmetry awaiting its breaking.

    There are other examples of starting values that emerge as the balances of divergences.

    The value of pi - understood as the ratio of a circumference to a diameter - can vary according to the geometry of the surface. Pi = 2 for the closed or positively curved surface of a sphere. Pi heads for infinity in the opposite case of the negatively curved hyperbolic plane.

    It is only in the special case - the Euclidean plane, where lines can remain parallel to infinity, never converging or diverging - that the ratio is the familiar fixed constant of 3.14159...
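
    The sphere claim can be checked with one line of trigonometry. On a unit sphere, a circle of geodesic radius r has circumference 2*pi*sin(r) and geodesic diameter 2r, so the circumference-to-diameter ratio is pi*sin(r)/r: it tends to the familiar 3.14159... as the circle shrinks toward flatness, and falls to exactly 2 for a great circle (r = pi/2). A quick Python check (my own illustration):

```python
import math

def sphere_pi(r: float) -> float:
    """Circumference / geodesic diameter for a circle of geodesic
    radius r drawn on a unit sphere: (2*pi*sin(r)) / (2*r)."""
    return math.pi * math.sin(r) / r

print(sphere_pi(1e-6))         # tiny circle: effectively Euclidean pi
print(sphere_pi(math.pi / 2))  # great circle: the ratio drops to 2
```

    So "pi" is not fixed by logic but by geometry; the Euclidean value is the small-circle (flat) limit.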

    Euler’s number or e is perhaps a clearer case in being the constant that emerges from the "self-referential" reciprocal built into a pure model of continuous compounding growth.

    That is, f(x) = e^x graphed as a curve which intersects the y axis at y = 1 when x = 0.

    The system is set to the most general initial value - 1 - before any growth has had the time to be added. And then the slope it generates by x = 1 is e - or 2.7182818... The unit 1 picture spits out a pi-like constant - a universal scale factor - for the dynamics of self-compounding growth.

    function-exponential-slopes.svg
    https://www.mathsisfun.com/numbers/e-eulers-number.html
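
    The two facts in that story - that continuous compounding of unit growth converges on e, and that the slope of e^x at x = 1 is again e - are easy to confirm numerically. A small Python check (illustration only):

```python
import math

# Compound unit growth n times per period: (1 + 1/n)^n -> e as n grows.
for n in [1, 10, 1_000, 1_000_000]:
    print(n, (1 + 1 / n) ** n)

# The self-referential part: the curve's slope reproduces the curve.
# A central-difference estimate of d/dx e^x at x = 1 gives back e.
h = 1e-6
slope_at_1 = (math.exp(1 + h) - math.exp(1 - h)) / (2 * h)
print(slope_at_1, math.e)
```

    That self-reproducing slope is the "self-referential reciprocal" being pointed at: e^x is the one growth curve whose rate of change at every point equals its current value.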

    With the Planck scale, the physics wants to run it backwards to recover the "unit 1" reciprocal equation that is the Universe's own universal scale factor. That is the thought motivating this particular game here.

    Okun's cube says it must take all three fundamental Planck constants in a relationship to recover that unity. If all three constants - h, G and c - can fit into one theory, then that is the theory of everything.

    General relativity unifies two of them - G and c. Quantum field theory unifies another pair - h and c. So unifying all three is about a combined theory of quantum gravity.

    At which point everything collapses into confusion as it is a completely self-referential exercise. There is nothing "outside" as the yardstick of measurement. It is all reduced to some internal interplay.

    Well, this is why efforts like Loop Quantum Gravity have tried to extract realistic solutions as emergent features from the kind of self-organising reciprocal thinking I describe.

    If you frame the quest as getting back to where time and space are coordinates set to zero, then energy density has to be infinite. Neither extreme is a sensible answer to the question.

    But if instead the general answer - from a dimensional analysis - is that everything starts from 1, that gives you a fundamental grain to grab hold of. You have a yo-yo balance to swing on. You can extract a log/log powerlaw slope that is the dynamics of an expanding~cooling Cosmos. The energy density thins as the spacetime spreads. The rate of both is yoked together, as scaled by the speed of light - the third side to this "unit 1" Planck story.

    So how small and hot was the Universe at the Big Bang? The answer is 1. Or rather, so hot and massive that it was as small and curled up as possible in terms of its scale factor. And vice versa. It was so hot and massive it was striving as hard as possible to blow itself apart in every direction. Its spatiotemporal curvature was just as much hyperbolic or negative as it was spherical or positive.

    By the Heat Death, the end of time, the scale factor is still "1" but now in an inverse fashion. Everywhere is so cold and empty that the gravitational curvature - the stress tensor of GR - is at its weakest possible value. Almost zero, or 1/G, of what it had been. And the same for h as a measure of the quantum uncertainty or negative hyperbolic curvature wanting to blow things apart. Effectively it has fallen to 1/h or nearly no curvature in that direction either.

    So the Universe stays "flat" and follows its unit 1 scale factor trajectory as a spreading~cooling bath of radiation. But that conceals the trauma that is the Big Bang as a state of unresolved tension - the maximum difference in terms of being the "largest" energy density packed into the "smallest" spacetime. And the Heat Death as the evolution towards the calmest expression of that driving tension - its dissipation into its own reciprocal state of being the smallest energy density packed into the largest spacetime.

    I'm sure I'm only writing this out for my own amusement. But I just find it a fascinating story.

    A different kind of "maths" results from setting your origin to 1,1 rather than 0,0. It constrains any path being traced to something nicely tamed by its own self-referential set-up.
  • jgill
    3.9k
    Yes. But what if this non-linear sensitivity is being regulated by a parameter that is a reciprocal relation such as y=1/x? And so yx = 1? A tiny tip one way is yoked to a tiny tip that compensates. Unit 1 has been fixed as the identity element, the common departure point. The indifference lies in now giving it any particular value to denote some quantified scale. It is now always just a generalised quality - the way an identity element behaves as a symmetry awaiting its breaking.apokrisis

    Sorry, we must be talking past one another again. I have no idea what you are saying. Here is a parabolic LFT having a neutral or indefinite or indifferent fixed point in C. Depending upon the value of K one gets the behavior I described before.

  • apokrisis
    7.3k
    So what was the point you hoped to make? How does it relate to the physics of a Big Bang universe? Break it down for me so that I might understand. Give us an example in a physical context.
  • Metaphysician Undercover
    13.2k
    The current approach in cosmology and particle physics would be to see any global regularity in terms of emergent constraints. That is why symmetry and symmetry breaking are at the heart of modern physics. They describe the form of nature in terms of the complementary emergent limits on free actions. A probabilistic view where change is change until change can no longer make a difference. At that point, the system is "stable" and its equilibrium balance can be encoded as "a universal law".apokrisis

    This does not account for the problem that I mentioned, which is the issue of saying that the constraints apply themselves in this type of emergence. For the constraints to be applying themselves, the thing being constrained - indeterminacy, freedom, or whatever you want to call it - must be an inherent part of the constraints, thus allowing the constraints the freedom of application. Combining the two in this way leaves no possibility of separating them in analysis for the purpose of understanding, and you are left with a vague union of constraints and the thing constrained, rendering them both fundamentally vague and unintelligible.

    Yes. That is the distinction I have made all along. Potential would be simply a vagueness. The PNC fails to apply.apokrisis

    You are not applying Peirce's distinction between the internal and external application of the LNC, as described by Lane in the article you referred to. There is a difference between saying 'x is red and x is not red', and saying 'it is true, and it is not true, that x is red'. The former is a proper violation of the LNC; the latter indicates an improper definition, or faulty representation, of 'red'.

    So if the terms of bivalent logic fail to apply in the proposed predication, then we have an improper proposal for predication, a faulty representation of the relationship between the subject and the property to be predicated, such that the LEM is actually what is violated, as 'neither/nor'. But the LNC is not actually violated in this case; that it is violated is an illusion created by an improper proposition. That's the point which Aristotle made with the concept of "potential", insisting that the LNC still applies, as he employed this principle against the sophists who based arguments on improper propositions for the sake of proving absurdities.

    It is all made actual and concrete by the fact that every possibility is bivalent. A direction is asymmetric as it breaks - and hence reveals - an underlying symmetry.apokrisis

    This is the false representation, or description. Potential itself, as ontologically existing potential, indeterminacy in the universe, is not what is bivalent. It is the epistemic possibility of predication, represented as particular possibilities, or as you say above, "every possibility", which is bivalent. The ontologically existing potential remains outside the LEM, and cannot be predicated because the proper terms to describe it have not been developed. Nor can the ontological potential, which we describe in general terms, be expressed as particular possibilities. Therefore you have demonstrated a category mistake here.

    The category mistake you are making is that you are taking the ontological potential, described as "underlying symmetry" which inherently violates the LEM due to our inability to describe it, and you are representing it as epistemic possibilities which are bivalent. Then you insist that it violates the LNC. But you have not created the necessary bridge across this gap between categories, so you claim the real thing, the ontological potential, violates the LNC, when in reality it violates the LEM. Therefore, you are really just expressing the desire to violate the LNC to allow the improperly described "potential" into your bivalent system without providing the necessary terms of description which are required to truthfully bring it into the bivalent system coherently.

    Vagueness is where there just isn't any such general backdrop to local events or acts. If you are in a canoe in a thick fog on a still lake, do you move or are you still? The PNC can't apply unless there is some context to show that a change is happening, and even not happening.apokrisis

    If you analyze your own example here, you'll see that you cannot apply the PNC in this situation because of a deficient description of the situation, due to the fog. The deficient description creates the illusion that the PNC cannot be applied to the situation. However, that's just an illusion, and all we need to do is provide the adequate description (see through the fog) and then the PNC can be applied. So in reality, a claim such as "the PNC can't apply" is never warranted, because any time it appears that this is the case, we need to make the effort to find the appropriate description so that we can apply it.

    You dispute the distinction between vague potential and crisp possibility and then repeat the basic argument.apokrisis

    Huh? The "vague potential" we are talking about is ontological indeterminacy, real potential in the world. A "crisp possibility" is a described situation, an epistemic principle. In no way do I repeat your category mistake by repeating your argument.

    The Peircean model says vagueness is only regulated.apokrisis

    We are not talking about "the Peircean model" here. We are talking about the apokrisist model, which utilizes an idiosyncratic interpretation of Peirce, along with a huge category mistake (perhaps initiated by Peirce).
  • jgill
    3.9k
    ↪jgill
    So what was the point you hoped to make? How does it relate to the physics of a Big Bang universe? Break it down for me so that I might understand. Give us an example in a physical context.
    apokrisis

    No relation. Your description of a "tipping point" in physics caused me to see certain neutral fixed points in complex dynamics from that perspective. Moving a tiny distance away in one direction gives a value that iterates back to the fixed point, whereas moving a tiny distance away in another direction gives a value that quickly iterates far away from the fixed point, although it may eventually return. Of course, a repelling fixed point would send any point close by further away.
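    Since the graphics can't be uploaded, the behaviour being described can at least be sketched numerically. As a stand-in example (my choice of map, not jgill's actual program), take the parabolic Möbius map f(z) = z/(1+z), whose fixed point at z = 0 is neutral, with multiplier f'(0) = 1:

```python
# Iterate the parabolic Mobius map f(z) = z/(1+z). Its fixed point at
# z = 0 is neutral (f'(0) = 1), and composition gives the closed form
# f^n(z) = z/(1 + n*z), so the two directions behave very differently.

def f(z):
    return z / (1 + z)

def orbit(z, n):
    out = [z]
    for _ in range(n):
        z = f(z)
        out.append(z)
    return out

# A tiny step in the attracting direction creeps straight back to 0...
toward = orbit(0.1, 40)
# ...while the same-sized step in the repelling direction is flung far
# from the fixed point before eventually drifting back again.
away = orbit(-0.105, 40)

print(abs(toward[-1]))             # small: settling onto the fixed point
print(max(abs(w) for w in away))   # the large excursion along the way
print(abs(away[-1]))               # already back near the fixed point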

    I've written over 175 mathematics programs, most focused on graphics illustrating mathematical concepts, but not being a subscriber on this forum means I can't upload the graphic imagery. A simple vector field in this case would show immediately what I have described. But I realize this is not the topic of the thread, so I apologize for deviating :yikes:

    Are you a physicist? You seem very knowledgeable.
  • apokrisis
    7.3k
    I realize this is not the topic of the thread, so I apologize for deviatingjgill

    That’s OK. I was just confused trying to figure the relevance.
  • apokrisis
    7.3k
    The "vague potential" we are talking about is ontological indeterminacy,Metaphysician Undercover

    So is the vagueness of a quantum potential ontological or epistemic? Do you believe nature is counterfactual all the way down despite the evidence?
  • Metaphysician Undercover
    13.2k
    So is the vagueness of a quantum potential ontological or epistemic?apokrisis

    I think it's very clearly epistemic, as the uncertainty of the Fourier transform is, to me, clearly an epistemic vagueness.
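    For what it's worth, the Fourier trade-off itself is easy to exhibit numerically. This is a generic time-bandwidth sketch of my own (using numpy), not a claim about either side's metaphysics: squeeze a Gaussian pulse in time and its spectrum spreads, with the product of the two rms widths sitting at the lower bound 1/(4π) that Gaussians saturate.

```python
import numpy as np

# Time-bandwidth sketch: the rms width of a Gaussian pulse times the rms
# width of its power spectrum stays at the uncertainty bound 1/(4*pi),
# however much the pulse is squeezed or stretched.

def widths(sigma, n=4096, T=200.0):
    t = np.linspace(-T / 2, T / 2, n, endpoint=False)
    x = np.exp(-t**2 / (2 * sigma**2))      # Gaussian pulse of width sigma
    X = np.abs(np.fft.fft(x))               # its spectrum (magnitude)
    freqs = np.fft.fftfreq(n, d=T / n)      # frequency grid for the DFT

    def rms(u, weight):
        p = weight / weight.sum()
        return np.sqrt(np.sum(p * u**2))

    return rms(t, x**2), rms(freqs, X**2)   # time and frequency spreads

for sigma in (0.5, 1.0, 2.0):
    dt, df = widths(sigma)
    print(round(dt * df, 4))    # ~0.0796 = 1/(4*pi) each time
```

    Narrow in time means wide in frequency and vice versa; the open question between the two of you is whether that reciprocal spread is a fact about descriptions or about the world.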

    Do you believe nature is counterfactual all the way down despite the evidence?apokrisis

    I think I've sufficiently explained this already. What you claim as "evidence" of ontological vagueness is simply a failure in human description, i.e. inadequate description. If my eyes are not good, and I cannot distinguish whether an object is, or is not, red, due to apparent vagueness, I might be inclined to say that it is both, or neither, if I am unwilling to accept the fact that my eyes are deficient, and admit this. Likewise, if the mathematical and physical principles by which a physicist understands quantum potential make this thing called "quantum potential" appear to be vague, the physicist might not be willing to accept the fact that the apparent vagueness is due to deficiency in the principles.

    So the physicists can't properly describe this aspect of reality because it appears vague to them. And you, instead of turning to other principles like theological principles, which I've argued provide a better description of the temporal aspect of reality than those principles adopted by physicists, refuse to even look this way. Instead you adhere to your biased scientistic metaphysics, assuming that if the physicists cannot describe it, it cannot be described, therefore the vagueness must be real, ontological.
  • apokrisis
    7.3k
    I think it's very clearly epistemic...Metaphysician Undercover

    But hidden variables have been experimentally ruled out. If it is epistemic, you are left with a truly pathological metaphysics like MWI as your only refuge.

    I'm sticking to the science here. The PNC fails to apply to the internals of the wavefunction. The PNC is an emergent feature of the classical scale where the wavefunction collapse has actualised some concrete possibility and so any remaining indeterminacy certainly is epistemic.

    the physicist might not be willing to accept the fact that the apparent vagueness is due to deficiency in the principles.Metaphysician Undercover

    Physicists in fact tried their hardest to avoid ontic vagueness. They invented the MWI as one way not to have to admit defeat.

    In the end, the "deficiency" is in the metaphysical reductionism that frames the problem - the framework both you and the MWIers share by insisting ontic vagueness is impossible from a classical viewpoint where everything has counterfactual definiteness from the get-go.
  • Metaphysician Undercover
    13.2k
    But hidden variables have been experimentally ruled out. If it is epistemic, you are left with a truly pathological metaphysics like MWI as your only refuge.apokrisis

    What I described is not hidden variables, it's faulty principles. That is epistemic, and it does not lead to MWI, far from it.

    Physicists in fact tried their hardest to avoid ontic vagueness.apokrisis

    I don't see any physicists addressing the deficiency in their conception of time, which I described in this thread, and adopting a conception which is consistent with our experience of time. Our experience of time indicates that there is a substantial difference between future and past, and therefore no necessary continuity of substance at the present. If physicists had respect for this, they would seek the cause of continuity instead of taking it for granted, as conservation laws. The problem, as we discussed, is that physics is pragmatic, purpose driven toward the goal of prediction. Understanding the real nature of the universe is not the goal of modern physics, so the principles employed by physicists are not designed for this purpose. They are designed for prediction, not for understanding what makes prediction possible.
  • jgill
    3.9k
    Our experience of time indicates that there is a substantial difference between future and past, and therefore no necessary continuity of substance at the present.Metaphysician Undercover

    It seems to me your "therefore" does not logically follow. The "substantial difference" requires a temporal distancing. I suppose you discard elementary calculus, with its notion of time continuity, and its many physical applications. Maybe not. Following arguments in philosophy is sometimes like trying to separate the filaments of cotton candy.