• Agustino
    11.2k
    As the first fluctuation, it would have as yet no context. History follows the act.apokrisis
    Okay, no disagreement there.

    It seems the mistake you keep making is to forget I am arguing for the actualisation of a dichotomy - the birth of matter and form in a first substantial event. You just keep talking about the material half of the equation.apokrisis
    It's difficult to make sense of what you're trying to say here because you're using words differently from Aristotle, it seems to me. Matter is inert; it is form which is act, and actualises. So form is imposed on the inert matter (which is potential), and this form would be the fluctuation. But note that form must be independent of and prior to matter.

    You get infinite outcomes if your model offers no lower bound cut-off to limit material contributions.apokrisis
    Right, so then the mathematical concept of space as infinitely divisible isn't how real space actually is. It's important to see this.

    Our measurements coarse grain over fractal reality. We are happy to approximate in this fashion. And then even reality itself coarse grains. The possibility of contributions must be definitely truncated at some scale - like the Planck scale - to avoid an ultraviolet catastrophe.apokrisis
    Yeah, so reality eliminates all those infinities that are inherent in our mathematical models. Our initial predictions that blackbodies would emit infinite amounts of UV were based on the mistake in our mathematical model of assuming an infinite continuity going all the way down, while the truth is that things are cut off at some point; they become discrete.
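
    For concreteness, the classical Rayleigh-Jeans law and Planck's quantised law (standard textbook results, not anything specific to this thread) differ exactly in that cut-off:

        B_{RJ}(\nu, T) = \frac{2\nu^2 kT}{c^2} \to \infty \ \text{as} \ \nu \to \infty, \qquad B_{Planck}(\nu, T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/kT} - 1} \to 0 \ \text{as} \ \nu \to \infty

    The discreteness of the quanta h\nu is what tames the ultraviolet divergence.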

    Vagueness is required at the base of things to prevent the disaster of infinite actualisation.apokrisis
    I don't follow this.

    Also how much do you understand fractals? Note how they arise from a seed dichotomy, a symmetry breaking or primal fluctuation. That is what the recursive equation with its log/log growth structure represents.apokrisis
    Well that's a metaphorical way to put it regarding the "primal fluctuation". They do arise from a symmetry breaking, or rather the process of constructing a fractal involves a symmetry breaking. Regarding the recursive eq, are you talking about fractal dimensionality? As in log(number copies)/log(scale factor)?
  • Metaphysician Undercover
    12.3k
    But another example of the vagueness/PNC~generality/LEM dichotomy which is basic to his logic is the triangle. A triangle is a general concept that forms a continuum limit - a global constraint - that then can't be exhausted by its particular instances. An infinite variety of particular triangles can be embraced by the general notion of a triangle.

    So the LEM does not apply to this generality as a triangle can, in genus~species fashion, be equilateral, isosceles, or scalene. Of course the triangle must be a three-sided polygon, but that is talking of a still higher level generality of which it now partakes as a definite particular.
    apokrisis

    OK, so the concept of a triangle is "a plane figure with three sides and angles". Let me see if I can figure out how the LEM does not apply here. You say that any particular triangle must be one of a number of different types of triangles. Where does the LEM not apply? It doesn't make sense to say that the concept of triangle in general must be a particular type of triangle, just like it doesn't make sense to say that the concept of animal in general must be a particular type of animal. So it's just a case of categorical separation. It doesn't make sense to attribute a species to the genus; that's a category error, not a failure of the LEM. If we insist that the concept of colour must be either red or not red, and find that this is impossible, it is not that the LEM does not apply; it is a simple category mistake.

    Then vagueness is defined dichotomously to the general. Where generality allows you to say any particular triangle can be either scalene or isosceles, vagueness speaks to the indefinite case where there is as yet no triangle specified and so there is no fact of the matter as to whether it is scalene or isosceles. It is not a contradiction to say the potential triangle is both.apokrisis

    This I can't make any sense of. We have the concept of triangle. Your claim seems to be that if there is no particular triangle, then this particular triangle, the potential triangle, may be both scalene and isosceles. I'm sorry to have to disappoint you, but yes, this is contradictory. Don't you see that you first say that there is no particular triangle, and then say that this particular triangle, the potential triangle, which has already been denied, is both? Don't you see that to affirm something and deny it, both, is a contradiction? To say that there is the potential for a triangle which is either isosceles or scalene is not the same thing as saying that there is a potential triangle which is both. The latter affirms that there is a particular triangle, the potential triangle, which is both isosceles and scalene. And that's contradictory nonsense.
  • apokrisis
    6.8k
    It's difficult to make sense of what you're trying to say here because you're using words differently from Aristotle, it seems to me.Agustino

    Well yes. I must have spent quite a few pages in this thread making it plain that my claim is that prime matter would be an active and not inert principle. Then the prime mover would not be an active principle in the normal sense of efficient cause, just "active" in the sense of an emergent limiting constraint on free material action.

    There are quite a few differences I would have with a scholastic understanding of Aristotelean metaphysics. That was rather the point.

    Matter is inert; it is form which is act, and actualises. So form is imposed on the inert matter (which is potential), and this form would be the fluctuation. But note that form must be independent of and prior to matter.Agustino

    Now you are repeating what I have disputed. And I have provided the rationale for my position. So instead of just citing scholastic Aristotelianism to me, as if that could make a difference, just move on and consider my actual arguments.

    Right, so then the mathematical concept of space as infinitely divisible isn't how real space actually is. It's important to see this.Agustino

    It is also important to see that Peirce's mathematical conceptions are based on the duality of generality and vagueness. So you can both have a general continuum limit and also find that it has potential infinity in terms of its divisibility. In fact, you've got to have both.

    And funnily enough, real space is like that. Just look at how we have to have the duality of general relativity and quantum mechanics to account for it fully. One describes the global continuity of the constraints, the other, the local infinite potential, the inherent uncertainty that just keeps giving.

    Yeah, so reality eliminates all those infinities that are inherent in our mathematical models. Our initial predictions that blackbodies would emit infinite amounts of UV were based on the mistake in our mathematical model of assuming an infinite continuity going all the way down, while the truth is that things are cut off at some point; they become discrete.Agustino

    It's amusing that you talk about this as some mathematical mistake.

    You are trying to paint yourself as the commonsense engineer who is never going to be fooled by these crazy theoretical types with their dreadful unrealistic mathematical models. And yet an engineer has a metaphysics. He believes in a world of clockwork Newtonian forces. That is the right maths. And on the whole it works because the universe - at the scale at which the engineer operates - is pretty much just "classical". There is no ontic vagueness to speak of.

    Of course, the beam will buckle unpredictably. An engineer has to know the practical limits of his classically-inspired mathematical tools. The engineer will say in theory, every micro-cause contributing to the failure of the beam could be modelled by sufficiently complex "non-linear" equations. The issue of coarse graining - the fact that eventually the engineer will insert himself into the modelling as the observer to decide when to just average over the events in each region of space - is brushed off as a necessary heuristic and not an epistemic embarrassment.

    Even proof that the model can't be computed in polynomial time won't dent the confidence of "a real engineer". Good enough is close enough. Which is why real world engineering projects fail so regularly.

    So forget your engineer's classically-inspired commonsense understanding of maths here. Peirce was after something much deeper, much more metaphysically sophisticated.

    Regarding the recursive eq, are you talking about fractal dimensionality? As in log(number copies)/log(scale factor)?Agustino

    Yes.
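
    For instance, the Koch curve replaces each line segment with N = 4 copies, each scaled down by a factor s = 3, so its dimension works out as a pure number between that of a line and a plane:

        D = \frac{\log N}{\log s} = \frac{\log 4}{\log 3} \approx 1.26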
  • apokrisis
    6.8k
    You say that any particular triangle must be one of a number of different types of triangles. Where does the LEM not apply?Metaphysician Undercover

    Of course the LEM applies to any particular triangle. It doesn't apply to the notion of the general triangle.

    It doesn't make sense to say that the concept of triangle in general must be a particular type of triangle,Metaphysician Undercover

    Exactly.

    It doesn't make sense to attribute a species to the genus; that's a category error, not a failure of the LEM.Metaphysician Undercover

    Yep. The LEM fails to apply. It doesn't even make sense to think it could. It is definitional of generality that it doesn't.

    Your claim seems to be that if there is no particular triangle, then this particular triangle, the potential triangle, may be both scalene and isosceles.Metaphysician Undercover

    Before a particular triangle has been drawn, it may be scalene or isosceles. That is the potential. And so while still just a potential, it is not contradictory to say this potential triangle is as much one as the other. That is, what it actually will be is right at this moment vague - as defined by the PNC not being applicable and any proposition that pretends otherwise being a logical failure.
  • Metaphysician Undercover
    12.3k
    Of course the LEM applies to any particular triangle. It doesn't apply to the notion of the general triangle.apokrisis

    The laws of logic are rules of predication, how we attribute predicates to a subject. If your subject is the general notion of a triangle, the rules apply. The subject is identified as the triangle, by the law of identity, and the other two rules of predication apply.

    The LEM fails to apply. It doesn't even make sense to think it could. It is definitional of generality that it doesn't.apokrisis

    It is only when you define "generality" in the odd way which you do, as a potential particular, that the laws of logic might fail to apply. But this definition is a category mistake because there is a well respected categorical separation between the general and the particular, and you seem to define the general as a type of particular.

    You justify this denial of the categorical separation by claiming that the distinction between general and particular is relative only. The triangle is general in relation to isosceles but particular in relation to geometrical figure. But all you have here is relations of generalities. You have not identified a particular. The PNC and LEM rely on the law of identity, the identification of a subject. Until you move to identify a particular, it is a foregone conclusion that the laws of logic do not apply.

    So you can talk about your generalities all you want, and how the laws of logic are inapplicable to your talk about generalities, but this is just an epistemic failure on your part. It is a failure to identify a particular in order to move forward using the laws of logic. These claims you make about generalities have no ontological bearing, because there is no evidence that such unidentifiable generalities exist anywhere but in the indecisive human mind. And when you move to identify "generality" as a particular thing with ontological status, like apeiron, vagueness, or pure potential, we can apply the principles of logic, despite the fact that you do not want us to, and will not listen to our conclusions.

    Before a particular triangle has been drawn, it may be scalene or isosceles.apokrisis

    Don't you see this as nonsense? There is no triangle. It hasn't been drawn, it isn't even conceived of in the mind of a person who might draw it. Yet you say that it may be scalene or isosceles. That's nonsense, there is nothing there. If you instruct a person to draw a triangle, we might say that the person has these options, but that is not to say that there is a potential triangle which is both scalene and isosceles.

    And so while still just a potential, it is not contradictory to say this potential triangle is as much one as the other. That is, what it actually will be is right at this moment vague - as defined by the PNC not being applicable and any proposition that pretends otherwise being a logical failure.apokrisis

    That's nothing but irrational nonsense. You have identified the potential for a triangle, a person might draw a triangle. From here, you want to identify the triangle which might be drawn, and say that since it's not drawn yet, it's both isosceles and scalene. But that's nonsense, because the person might draw a square or a circle, or nothing at all.

    The triangle is identified as potential, and this means that its existence is contingent. If its existence is contingent, then it may or may not be. If there is reason to believe that the existence of the triangle will be necessitated (the person was instructed to draw a triangle), we can consider the possibilities. It may be isosceles, it may be scalene, etc. But to say that there is an identified triangle, the "potential triangle", and it "is as much one as the other", is pure nonsense, because what is actually the case is that the probability for one triangle is equal to the probability for the other triangle.

    Therefore there is not an identified "potential triangle" which consists of all the contrary possibilities. There is the possibility for many different triangles, contrary triangles. Each potential triangle has features which are consistent with the laws of logic. So we can adequately describe the situation without the irrational nonsense which insists that the laws of logic do not apply. The situation of "potential triangle" is treated as the possibility for many different triangles, as well as other things, each consistent with the laws of logic. It is not treated as one triangle which has features which are inconsistent with the laws of logic. The latter is irrational nonsense, and there is no need for the claim that laws of logic do not apply.
  • Agustino
    11.2k
    Sorry Apo, been very busy, will try to get to your comments here in the next few hours.
  • apokrisis
    6.8k
    The laws of logic are rules of predication, how we attribute predicates to a subject. If your subject is the general notion of a triangle, the rules apply. The subject is identified as the triangle, by the law of identity, and the other two rules of predication apply.Metaphysician Undercover

    You are avoiding the point. Peirce is dealing with how the laws could even develop. You are talking about the laws as they would apply when the world has crisply developed, when everything is mostly a collection of objects, a settled state of affairs, a set of atomistic facts.

    So sure, generals can have universality predicated of them. They can be said to cover all instances of some class. They can themselves be regarded as particular subjects. That is what makes sense once a world has developed and generals come to be crisply fixed within the context of some evolved state of affairs.

    The PNC and LEM rely on the law of identity, the identification of a subject. Until you move to identify a particular, it is a foregone conclusion that the laws of logic do not apply.Metaphysician Undercover

    Correct. Except now I'm talking about how crisp particularity itself could develop. It is hylomorphic substantial being. And it develops out of what it is not - vagueness and generality. Peirce's version of prime matter and prime mover.

    So the laws of thought don't apply until they start to do. That is what a developmental ontology is claiming. Peirce described the Cosmos as the universal growth of reasonableness. The lawfulness the laws encode is the product of evolution and self organisation.

    There is no point you just telling me you don't see the laws as a product of development. I already know that you just presume their natural existence. You have never inquired how the laws might come to be as the result of a larger ur-logical process.

    So why not set aside your prejudices and actually consider an alternative metaphysics for once? Make a proper effort to understand Peirce rather than simply assert that existence exists and that's the end of it.
  • Metaphysician Undercover
    12.3k
    You are avoiding the point. Peirce is dealing with how the laws could even develop. You are talking about the laws as they would apply when the world has crisply developed, when everything is mostly a collection of objects, a settled state of affairs, a set of atomistic facts.

    So sure, generals can have universality predicated of them. They can be said to cover all instances of some class. They can themselves be regarded as particular subjects. That is what makes sense once a world has developed and generals come to be crisply fixed within the context of some evolved state of affairs.
    apokrisis

    The laws of logic were produced, and developed by human beings. They are human statements of how to proceed in logical process. Surely they have only existed in an evolved state of affairs. The claim that there was a time when the universe didn't consist of a collection of objects would need to be justified. The logical principles and evidence which would be used to justify this claim would provide information as to whether this universe without particulars would consist of anything like what we call generalities. As discussed, the logical principles demonstrate that this is an irrational claim. If you have evidence, you should describe it, rather than repeating over and over again assertions about symmetry-breakings.

    So the laws of thought don't apply until they start to do. That is what a developmental ontology is claiming. Peirce described the Cosmos as the universal growth of reasonableness. The lawfulness the laws encode is the product of evolution and self organisation.apokrisis

    The laws of logic are only applied by human beings, who started to apply them a few thousand years ago, along with the development of language. If some human beings believe that there was a time when the universe existed, but its existence cannot be understood by the laws of logic, then the principles for this claim need to be demonstrated, because it appears to be an irrational claim when examined in relation to accepted ontological principles.

    There is no point you just telling me you don't see the laws as a product of development. I already know that you just presume their natural existence. You have never inquired how the laws might come to be as the result of a larger ur-logical process.

    So why not set aside your prejudices and actually consider an alternative metaphysics for once? Make a proper effort to understand Peirce rather than simply assert that existence exists and that's the end of it.
    apokrisis

    I see the laws of logic for what they are, principles set down by human beings for the purpose of carrying out logical proceedings. There is no question of whether they are naturally occurring, that would be a very odd thought, because they are clearly artificial.

    As for making an effort to properly understand Peirce, you've referred me to some of his papers in the past, and I've determined some mistakes, one of which we are going over in this thread. So you should actually set aside some of your Peircean bias, to consider these problems in a reasonable way.
  • apokrisis
    6.8k
    The laws of logic were produced, and developed by human beings.Metaphysician Undercover

    Sure, we framed them to explain the world as we have found it. The deeper question is why the existence of that intelligible world? If the laws were merely social constructs, they would hardly hold a foundational place in our methods of reasoning.

    The claim that there was a time when the universe didn't consist of a collection of objects would need to be justified.Metaphysician Undercover

    Fer fuck's sakes. If existence isn't eternal, it must have developed or been created. Being created doesn't work as that leads to infinite regress in terms of claims about first causes. So development is the metaphysical option worth exploring - rather than being pig-headed about it, as is your wont.

    And then cosmology gives good support to that metaphysical reasoning. Look back to the Big Bang and you don't see much evidence for the existence of a collection of objects.
  • Agustino
    11.2k
    As a dichotomous growth process, they directly model this issue of convergence towards a limit that I stressed in earlier posts.

    Think about the implications of that for a theory of cosmic origination. It argues that a world that can arise from a symmetry breaking - a going in both its dichotomous directions freely - does in fact have its own natural asymptotic cut off point. The Planck scale Big Bang is not a problem but a prediction. Run a constantly diverging process back in time to recover its initial conditions and you must see it converging at a point at the beginning of time.

    This has in fact been argued as a theorem in relation to Linde's fractal spawning multiverse hypothesis. So if inflation happens to be true and our universe is only one of a potential infinity, the maths still says the history of the multiverse must converge at some point at the beginning of time. It is a truly general metaphysical result.
    apokrisis
    Yes, that is indeed an interesting implication of any growth process that depends on symmetry breakings - it must ultimately reduce itself to a beginning point.

    Another way to illustrate this is how we derive the constant of growth itself - e. Run growth backwards and it must converge on some unit 1 process that started doing the growing. Thus what begins things has no actual size. It is always just 1 - the bare potential of whatever fluctuation got things started. So a definite growth constant emerges without needing any starting point more definite than a fleeting one-ness.apokrisis
    I don't follow how "what begins" has no actual size. Fractals always have some size. Even the simplest ones, like the Koch curve, start from a line segment of some definite size. But the absence of a definite perimeter, combined with things like having no tangent at any point, makes such fractals strange mathematical objects, which may approximate some real objects, but not in this lack of definitiveness.

    Now you are repeating what I have disputed. And I have provided the rationale for my position. So instead of just citing scholastic aristoteleanism to me, as if that could make a difference, just move on and consider my actual arguments.apokrisis
    Okay. But I don't see anything in your position that could elude what Aristotle has determined. You have redefined the terms, but this redefinition does not save you from the requirement that there is a prior act to all potency (using these terms to mean what Aristotle meant by them). As I said, it makes sense when you say that everything reduces to a primal fluctuation. I can understand that. But I cannot understand the movement from primal fluctuation to ontic vagueness - that sounds contradictory to me.

    It is also important to see that Peirce's mathematical conceptions are based on the duality of generality and vagueness. So you can both have a general continuum limit and also find that it has potential infinity in terms of its divisibility. In fact, you've got to have both.apokrisis
    Yeah, much of actual math is done this way. The difficulties of infinite divisibility and the like are avoided through limit calculus in practice while doing math. This is fine so long as you are aware that you're just doing math. Limit calculus enables you to perform an infinity of operations and arrive at a definite answer - in those cases where the limits are convergent. Not all are, though, and the cases where there exist problems in physics - such as the Big Bang singularity - are precisely those where the limits are divergent. Again, such issues illustrate discrepancies between mathematical models and reality.
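
    A textbook pair of sums illustrates the split (standard examples, nothing specific to this thread):

        \sum_{n=0}^{\infty} \frac{1}{2^n} = 2 \quad \text{(convergent: a definite answer)} \qquad \sum_{n=1}^{\infty} \frac{1}{n} = \infty \quad \text{(divergent: no finite answer)}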

    And funnily enough, real space is like that. Just look at how we have to have the duality of general relativity and quantum mechanics to account for it fully. One describes the global continuity of the constraints, the other, the local infinite potential, the inherent uncertainty that just keeps giving.apokrisis
    Well, we're not sure, we have to wait for quantum gravity to be more fully developed to see what's what. If there is a quantum theory of gravity, then GR will be reduced to it, as would be natural, in my opinion. It's absurd to have a macro theory that cannot be shown to emerge from the micro level.

    And yet an engineer has a metaphysics. He believes in a world of clockwork Newtonian forces. That is the right maths. And on the whole it works because the universe - at the scale at which the engineer operates - is pretty much just "classical". There is no ontic vagueness to speak of.apokrisis
    No, I'm not sure that he believes in the "clockwork Newtonian universe". Depends on what you are engineering. Standard structures will be engineered according to the Newtonian clockwork view of the universe, because it's a close enough approximation - especially when you put factors of safety on top of it.

    But there are many non-standard structures - suspension bridges, skyscrapers in earthquake-prone regions, shell structures and the like - which are definitely not engineered according to clockwork Newtonian views. These structures are very difficult to analyse and they can be very sensitive to imperfections. They display dynamic, non-linear behaviour under loads, which is more difficult to analyse because positive feedback loops between an acting load and the response of the structure can be generated, which can lead to collapse - the Tacoma Narrows bridge, the Fokker monoplanes, and the like.

    These structures are generally analysed by computers under different scenarios, with different possible failure mechanisms taken into consideration. Evolutionary algorithms may also be used to determine the right values for certain parameters. Determining the failure mechanisms that should be tested is largely about intuition though ;) .

    Of course, the beam will buckle unpredictably. An engineer has to know the practical limits of his classically-inspired mathematical tools. The engineer will say in theory, every micro-cause contributing to the failure of the beam could be modelled by sufficiently complex "non-linear" equations. The issue of coarse graining - the fact that eventually the engineer will insert himself into the modelling as the observer to decide when to just average over the events in each region of space - is brushed off as a necessary heuristic and not an epistemic embarrassment.apokrisis
    Yes, the phenomenon of buckling is more complicated than our lower bound calculations suggest. Non-linear effects do start to play a role, and there are other mechanisms too - in reinforced concrete beams for example, a phenomenon known as arching can develop making the behaviour of the beam plastic and permitting it to withstand more load than predicted.

    Which is why real world engineering projects fail so regularlyapokrisis
    Actually, real world engineering projects most often are overdesigned. We just hear about the failures more often than not, but the many successes are forgotten. When you use lower bound approaches combined with factors of safety of 1.5 for structures, and up to 3-4 sometimes for foundations, you are bound to overdesign to a certain extent. Basically, whatever answer you calculate you will multiply by the factor of safety to really make sure it's safe - and you are pretty much forced to do so by legislation in many countries, just because failure can lead to death. So better safe than sorry - better to be humble and expect that you don't know than to have false pretences to knowledge.

    Real world structures which do collapse or fail likely do so because they involve an upper bound method of calculation, and the lowest failure mechanism wasn't thought about or taken into account. For example, the World Trade Center towers were actually designed to withstand a plane impact. But the actual failure mechanism was never taken into account. They didn't think that if the plane strikes at the right height of the building, the fire can progressively cause steel floors to collapse, and once one floor collapses, the effective height of the steel columns doubles, which means that the buckling load becomes 1/4 of what it was before (not even taking into account the effect of temperature rise on the columns). So if even more floors collapse, then the demolition-looking collapse of the World Trade Center is inevitable as the main columns buckle, and the top part of the building comes crashing down on the bottom part. Even with factors of safety, this mechanism would lead to collapse. So nobody thought about it. And the structure failed.
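
    For reference, the quartering follows from the standard Euler formula for the critical buckling load of a column, where E is the material stiffness, I the second moment of area, and KL the effective length:

        P_{cr} = \frac{\pi^2 E I}{(K L)^2}

    Double the effective length and the critical load drops by a factor of four.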

    There's a lot we still have to learn. Whenever we do a controlled demolition of a bridge, we often load it to see at what load it actually fails compared to what we predict. They often fail at higher loads - there's a lot left to understand about structures.
  • apokrisis
    6.8k
    Fractals always have some size. Even the simplest ones, like the Koch curve, start from a line segment of some definite size.Agustino

    Sure. To model, we need to start at some initial scale. My point was that e, or Euler's number, shows how we can just start with "unit 1" as the place to start things.

    It may seem like you always have to start your simulation with some definite value. But actually the maths itself abstracts away this apparent particularity by saying whatever value you start at, that is 1. The analysis is dimensionless rather than dimensioned. Even if we have to "stick in a number" to feed the recursive equation.
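
    One way to see the normalisation (a standard bit of algebra, not anything specific to Peirce): exponential growth factors its starting value out entirely,

        x(t) = x_0 e^{rt} \quad \Rightarrow \quad \frac{x(t)}{x_0} = e^{rt}

    so the ratio is a pure number, and whatever x_0 happened to be simply plays the role of "unit 1".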

    You have redefined the terms, but this redefinition does not save you from the requirement that there is a prior act to all potency (using these terms to mean what Aristotle meant by them).Agustino

    Nope. This is the big misunderstanding.

    Sure, irregularity being constrained is what produces the now definite possibilities or degrees of freedom. Once a history has got going, vague "anythingness" is no longer possible. Anything that happens is by definition limited and so is characterised counterfactually. Spontaneity or change is always now in some general direction.

    So there is potential in the sense of material powers or material properties - the things that shaped matter is liable to do (defined counterfactually in terms of what it is likewise not going to be doing).

    But Aristotle tried to make sense of the bare potential of prime matter. As we know, that didn't work out so well.

    Peirce fixes that by a logic of vagueness. Now both formal and material cause are what arise in mutual fashion from bare potential. They are its potencies. Before the birth of concrete possibility - the kind of historically in-formed potential that you have in mind - there was the pure potential which was a pre-dichotomised vagueness.

    Prime mover and prime matter are together what would be latent in prime potential. Hence this being a triadic and developmental metaphysics - what Aristotle was shooting for but didn't properly bring off.

    It's absurd to have a macro theory that cannot be shown to emerge from the micro level.Agustino

    You keep coming back to a need to believe in a concrete beginning. It is the presumption that you have not yet questioned in the way Peirce says you need to question.

    Until you can escape that, you are doomed to repeat the same conclusions. But it's your life. As you say, engineering might be good enough for you. Metaphysics and the current frontiers of scientific theory may just not seem very important.

    Yes, the phenomenon of buckling is more complicated than our lower bound calculations suggest.Agustino

    But you still do believe there is a concrete bottom level to these non-linear situations right? It's still absurd to suggest the emergent macro theory doesn't rest on a bed of definite micro level particulars?

    I mean, drill down, and eventually you will find that you are no longer just coarse-graining the model. You are describing the actual grain on which everything rests?

    People say that the storm in Brazil was caused by the flap of a butterfly wing in Maryland. And you accept it was that flap. The disturbance couldn't have been anything smaller, like the way the butterfly stroked its antenna or faintly shifted a leg?

    I mean deterministic chaos theory doesn't have to rely on anything like the shadowing lemma to underpin its justification of coarse graining "all the way down"?

    In other words, the maths of non-linearity works, to the degree it works, by coping with the reality that there is no actual concrete micro-level on which to rest. And that argues against the picture of physical reality you are trying to uphold.
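
    A minimal sketch of the sensitivity being appealed to here, using the logistic map in its chaotic regime (the map, the starting values and the step counts are illustrative choices, not anything from the engineering cases above):

        # Two trajectories of the logistic map x' = r*x*(1-x) at r = 4,
        # starting a billionth apart.
        def logistic(x, r=4.0):
            return r * x * (1.0 - x)

        a, b = 0.400000000, 0.400000001
        for n in range(1, 51):
            a, b = logistic(a), logistic(b)
            if n % 10 == 0:
                print(f"step {n}: |a - b| = {abs(a - b):.6f}")

        # The gap grows from 1e-9 to order 1 within a few dozen steps: any
        # coarse graining of the initial state is eventually amplified to
        # the macro scale, so no perturbation is "too small to matter".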

    The beam buckles because of a "fluctuation". Another way of saying "for no discernible reason at all". Anything and everything could have been what tipped the balance. So the PNC fails to apply and we should just accept that your micro-level just describes the vagueness of unformed action.

    Actually, real world engineering projects most often are overdesigned.Agustino

    I wonder why. (Well, I've already said why - creating a "safe" distance from fundamental uncertainty by employing informal or heuristic coarse-graining.)

    Real world structures which do collapse or fail likely do so because they involve an upper bound method of calculation, and the lowest failure mechanism wasn't thought about or taken into account.Agustino

    Thanks for the examples, but I know more than a little bit about engineering principles. And you are only confirming my arguments about the reality that engineers must coarse-grain over the best way they can.
  • Agustino
    11.2k
    Sure. To model, we need to start at some initial scale. My point was that e, or Euler's number, shows how we can just start with "unit 1" as the place to start things.

    It may seem like you always have to start your simulation with some definite value. But actually the maths itself abstracts away this apparent particularity by saying whatever value you start at, that is 1. The analysis is dimensionless rather than dimensioned. Even if we have to "stick in a number" to feed the recursive equation.
    apokrisis
    Okay, but I fail to see how this changes anything :s - I mean sure, you can use whatever number system you want, so effectively you always start with "unit 1" if that's what you want. But how does this change the fact that there is a definitive size to this beginning, regardless of the number/measuring system you choose to use, and hence what you use as the standard for 1 unit?

    Sure, irregularity being constrained is what produces the now definite possibilities or degrees of freedom. Once a history has got going, vague "anythingness" is no longer possible.apokrisis
    Why should I think that this vague "anythingness" was ever possible?

    But Aristotle tried to make sense of the bare potential of prime matter. As we know, that didn't work out so well.apokrisis
    Why do you say it didn't work out? Aristotle showed that prime matter cannot exist in-itself. But prime matter does exist in the sense of the underlying potentiality for anything already actual to be other than it is - in other words, the radical potentiality for a chair to change into an elephant, as an example.

    Now both formal and material cause are what arise in mutual fashion from bare potential. They are its potencies.apokrisis
    So is "bare potential" actual?

    Prime mover and prime matter are together what would be latent in prime potential.apokrisis
    :s - if Prime Mover is the potentiality of something else, then it is not Prime Mover anymore. Prime Mover would be whatever lies behind and is pure act.

    You keep coming back to a need to believe in a concrete beginning. It is the presumption that you have not yet questioned in the way Peirce says you need to question.apokrisis
    Yes, because other beginnings are logically contradictory and impossible, just like square circles are impossible.

    But you still do believe there is a concrete bottom level to these non-linear situations right?apokrisis
    Yes, I do believe there is a non-contradictory underlying reality.

    The beam buckles because of a "fluctuation". Another way of saying "for no discernible reason at all". Anything and everything could have been what tipped the balance.apokrisis
    No, absolutely not. The phenomenon of buckling in these non-linear ways is most commonly seen in shell structures. What happens is that there are imperfections in the structure (not perfectly round, etc.). And these tiny imperfections reduce the failure load significantly. They can be ignored for most structures, but things like shell structures are imperfection sensitive. So there is an actual cause for why they buckle - just that we cannot pin-point it. It's epistemologically, but not ontologically vague. This is exactly how we were taught it at university, and how it makes sense. If a professor said that the structure is ontologically vague, and that's why there is no discernible reason for buckling, we wouldn't have understood much of anything, because it doesn't make much sense :s - it's illogical. How can you have an illogical metaphysics?

    Oh and by the way, the above is all tested. We can engineer structures to have imperfections at certain locations, and then test them - and guess what, we see that they fail where the imperfections are. So clearly it's nothing to do with some vagueness, fluctuations and the like...

    (Well, I've already said why - creating a "safe" distance from fundamental uncertainty by employing informal or heuristic coarse-graining.)apokrisis
    Right, or rather creating a safe distance from the area that we cannot know very well, since our models and theories do not permit us to. That's also a possibility, one that seems to be more logically coherent.
  • apokrisis
    6.8k
    Okay, but I fail to see how this changes anythingAgustino

    So you don't understand dimensionless quantities. Cool. https://simple.m.wikipedia.org/wiki/Dimensionless_quantity

    So there is an actual cause for why they buckle - just that we cannot pin-point it. It's epistemologically, but not ontologically vague.Agustino

    Whoosh. Ideas just go over your head.

    Imperfections are just another name for material accidents or uncontrolled fluctuations. The argument is about why the modelling of reality might be coarse graining all the way down. The reason is that imperfection or fluctuation can only be constrained, not eliminated. Hence this being the ontological conclusion that follows from epistemic observation.

    Our models that presume a world that is concrete at base don't work. In real life, we have to have safety margins. Even then, fluctuations of any scale are possible in true non-linear systems with powerlaw statistics. So we can draw our ontological conclusions from a real world failure.

    in other words, the radical potentiality for a chair to change into an elephant, as an example.Agustino

    Now you are really just making shit up.
  • Agustino
    11.2k
    So you don't understand dimensionless quantities. Cool. https://simple.m.wikipedia.org/wiki/Dimensionless_quantityapokrisis
    No, I do understand dimensionless quantities quite well, thank you. What I don't understand is your silly metaphorical fancy of treating a dimensionless quantity as a "1 unit", which is then somehow also a "bare potential" :s .

    Imperfections are just another name for material accidents or uncontrolled fluctuations.apokrisis
    Material accidents are not "uncontrolled fluctuations" :s

    The reason is that imperfection or fluctuation can only be constrained, not eliminated.apokrisis
    That would be a methodological limitation of our manufacturing techniques, it would definitely not be an ontological limitation of reality itself...

    Hence this being the ontological conclusion that follows from epistemic observation.apokrisis
    Right, so you are willing to accept ontological contradictions. Why aren't you going to accept square circles then, and other contradictions? Maybe at the level of those fluctuations squares and circles aren't all that different anymore - there's some vague square circles :s

    Now you are really just making shit up.apokrisis
    :-} The point there was simply that any object has the potential to become another - the elephant is made of atoms, as is the chair, now supposing there are sufficient atoms in one as in the other, all it would take would be a rearrangement of them - in other words, a new form. That's what Aristotle meant by prime matter - but prime matter only applies to already actual objects - it doesn't exist in-itself, abstracted away from such objects.
  • apokrisis
    6.8k
    No, I do understand dimensionless quantities quite well,Agustino

    Clearly you just don't.

    a dimensionless quantity (or more precisely, a quantity with the dimensions of 1) is a quantity without any physical units and thus a pure number.

    You are convincing me it is essentially pointless discussing this with you as you are either just being pig-headed or you lack the necessary understanding of how maths works.

    Material accidents are not "uncontrolled fluctuations" :sAgustino

    Just stop a minute and notice how you mostly wind up making simple negative assertions in the fashion of an obstinate child. No it ain't, no it ain't. Then throw in an emoticon as if your personal feelings are what concludes any argument.

    I find replying to you quite a chore. You try to close down discussions while pretending to be continuing them with lengthy responses. It is like hoping for a tennis match with someone who just wants to spend forever knocking up.

    That would be a methodological limitation of our manufacturing techniques, it would definitely not be an ontological limitation of reality itself...Agustino

    So you claim in unsupported fashion, ignoring the supported argument I just made in the other direction.

    Right, so you are willing to accept ontological contradictions. Why aren't you going to accept square circles then, and other contradictions? Maybe at the level of those fluctuations squares and circles aren't all that different anymore - there's some vague square circlesAgustino

    And your problem is?

    Vagueness is as much circular as it is square. The PNC does not apply. Just like it says on the box.

    The point there was simply that any object has the potential to become another - the elephant is made of atoms, as is the chair, now supposing there are sufficient atoms in one as in the other, all it would take would be a rearrangement of them - in other words, a new form.Agustino

    Ah right. Atoms. :s

    Oh look, I just disproved your argument with an emoticon.

    Get back to me when you have figured out that atoms are a coarse grain notion according to modern physics.
  • Metaphysician Undercover
    12.3k
    Sure, we framed them to explain the world as we have found it. The deeper question is why the existence of that intelligible world?apokrisis

    Correct, we find the existence of an intelligible world, and the laws of logic are developed to help us understand that world. We have no reason to believe that the world ever was anything other than intelligible, because experience demonstrates to us that it is intelligible.

    If the laws were merely social constructs, they would hardly hold a foundational place in our methods of reasoning.apokrisis

    The laws are expressed in words, therefore they are human constructs. I don't see how they could be anything other than that. They are foundational in the sense that they are created to support conceptual structures, just like the foundations of buildings are created to support structures. I don't see what you are trying to claim. The laws of logic might in some way represent some real, independent aspects of the universe, or be a reflection of the reality of the universe, but these laws are still artificial, human constructs which reflect whatever that reality is.

    That the laws of logic work, is evidence that there is such a reality. But to proceed from this, to the assumption that there was a time when there was not such a reality, is what I see as irrational. The passing of time itself is an intelligible order, so to claim that there was a time when there was no intelligible order, is irrational.

    Fer fuck's sakes. If existence isn't eternal, it must have developed or been created. Being created doesn't work as that leads to infinite regress in terms of claims about first causes. So development is the metaphysical option worth exploring - rather than being pig-headed about it, as is your wont.apokrisis

    You have reduced the existence of the universe to three options, eternal, created, or developed. I'm sure an imaginative mind could come up with more options, but I'll look at these three.

    The infinite regress you refer to is only the result of assuming efficient cause, and this infinite regress is no different from "eternal". The first cause, the intention of a creator, which is commonly referred to as "final cause", does not produce an infinite regress. It is introduced as an alternative to infinite regress. The act of the free will is carried out for a purpose, and that purpose is the final reason; there is no infinite regress, so long as you respect the finality of purpose. So your claim that creation leads to infinite regress is false.

    The "development" of a universe with intelligible order, emerging from no order, does not make any sense without invoking a developer. So this option leads to the need to assume a creator as well. Your mistake is that you attempt to remove the developer from the development, so you end up with irrational nonsense.

    And then cosmology gives good support to that metaphysical reasoning. Look back to the Big Bang and you don't see much evidence for the existence of a collection of objects.apokrisis

    The Big Bang theory only demonstrates that current, conventional theories in physics are unable to understand the existence of the universe prior to a certain time. The Big Bang theory is just the manifestation of inadequate physical theories. It says very little about the actual universe except that the universe is something which our theories are incapable of understanding. To attribute the "vagueness" which results from the inadequacies of one's theories, to the thing which the theories are being applied to, in an attempt to understand, is a category mistake.
  • Agustino
    11.2k
    Okay, whatever. You're not actually interested in having your views questioned and thinking through them honestly.

    And your problem is?

    Vagueness is as much circular as it is square. The PNC does not apply. Just like it says on the box.
    apokrisis
    Yeah, I find that contradictory, and contradictions are by definition impossible. If you will allow contradictions in your system of thought, there's no way to make heads from tails anymore - that's completely irrational.
  • Metaphysician Undercover
    12.3k
    You're not actually interested in having your views questioned and thinking through them honestly.Agustino

    That's the conclusion I came to a few days back. Apokrisis is very convinced that the position expressed is the correct one. So no matter how many times the illogic and irrationality of that position are pointed out, apokrisis just continues to assert: this is the way things are because I adhere to Peirce's principles. There is a complete disrespect for all the problems which are pointed out. It is an act of self-imposed ignorance.
  • apokrisis
    6.8k
    You're not actually interested in having your views questioned and thinking through them honestly.Agustino

    I'm waiting for you to get the ball over the net. I see a lot of swishing and grunting but not much result.

    To remind you of the essence of where the argument had got to, your own point about engineering is that it can't trust perfect world maths. Even statistical methods are risky as they still coarse grain over the metaphysical reality.

    A linear model of average behaviour is going to be fundamentally inaccurate if the average behaviour is in fact non-linear or powerlaw. At least a Gaussian distribution does have a mean. A powerlaw (or fractal) distribution has exceptions over all scales.

    This gets quite critical where engineering has to engage with real-life dissipative systems like plate tectonics. Earthquake building codes and land-use planning really have to do some smart thinking about the true nature of natural hazard risks.

    So how engineering papers over the cracks in mathematical modelling is important here. That is a heuristic tale in itself. Eventually even statistical methods become so coarse grained they no longer offer a concrete mean. The central limit theorem no longer applies.
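
    A minimal sketch of that contrast (the distributions and sample sizes are illustrative choices): sample a thin-tailed Gaussian against a heavy-tailed Pareto and watch the running means.

        import random

        random.seed(1)
        alpha = 1.5  # Pareto tail index < 2: infinite variance
        gauss_sum = pareto_sum = 0.0

        for i in range(1, 100_001):
            gauss_sum += random.gauss(0.0, 1.0)        # thin-tailed: CLT applies
            pareto_sum += random.paretovariate(alpha)  # fat-tailed powerlaw
            if i in (1_000, 10_000, 100_000):
                print(f"n={i}: gaussian mean={gauss_sum/i:+.3f}, "
                      f"pareto mean={pareto_sum/i:.3f}")

        # The Gaussian running mean settles near 0 at the usual 1/sqrt(n)
        # rate; the Pareto running mean keeps being yanked around by single
        # large events at every sample size - exceptions over all scales.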

    But I was focused on the foundations of the modelling - the starting presumption that there is some definite micro-level of causality. My argument is that it is coarse graining all the way down. What we find as an epistemic necessity is also an ontic necessity.

    Now you can argue that the mathematical presumption of micro-level counterfactual definiteness - atomism - is in fact the correct ontology here. Great. But it is mysterious how you don't pick up the contradiction between you saying that the presumptions of maths can't be trusted epistemically, and yet those very same presumptions must be ontologically true.

    Your position metaphysically couldn't be more arse about face - the technical description of naive realism.

    So really, until you understand just how deeply confused you are about your own argument, it is hard to have much of a discussion.
  • apokrisis
    6.8k
    At least you stay focused on the matter at hand. You aren't just seeking to divert the discussion to safe irrelevancies.

    We don't have to agree. And where would be the fun if we did?
  • apokrisis
    6.8k
    The first cause, the intention of a creator, which is commonly referred to as "final cause", does not produce an infinite regress.Metaphysician Undercover

    Creation by a creator is efficient cause masquerading as something else. It doesn't offer a causal explanation because if creations demand a creator, who creates the creator? The infinite regress is just elevated to a divine plane of being.

    And it doesn't even explain how the wishes of a supernatural being can get expressed as material events. Sure, somehow there must be a "miraculous" connection if the story is going to work. But there just isn't that explanation of how it does work.

    So anthropomorphic creators fail both to explain their own existence and how they achieve anything material.

    Yes, I know that this then gives rise to thickets of theological boilerplate to cover over the essential lack of any explanation. But I'm saying let's cut the bullshit.

    But to proceed from this, to the assumption that there was a time when there was not such a reality, is what I see as irrational.Metaphysician Undercover

    So causal stories of development and evolution are irrational. Claims of brute existence are rational. Gotcha.

    The "development" of a universe with intelligible order, emerging from no order, does not make any sense without invoking a developer.Metaphysician Undercover

    You mean constraints? Those things which could emerge due to development?

    Why revert to talking in terms of efficient cause - the developer - when the missing bit of the puzzle is the source of the finality? You already agreed that efficient causes only result in infinite regress.

    The Big Bang theory only demonstrates that current, conventional theories in physics are unable to understand the existence of the universe prior to a certain time.Metaphysician Undercover

    You mean where physics has got to is knowing that a Newtonian notion of time has to be inadequate. That is the new big project. Learning to understand time as a thermal process.

    The search for a final theory of quantum gravity is the search for how time and space could emerge as constraints to regulate quantum fluctuations and produce a Universe that is asymptotically classical.

    So exactly what I've been arguing. And what Peirce foresaw in his metaphysics.

    When our smartest modern metaphysician and the full weight of our highly successful physics community agree on something in terms of ontology, that seems a good reason to take it seriously, don't you think?

    (I realise you will reply, nope, it's irrational, while Agustino eggs you on from the sidelines with some frantic emoticon eye-rolling.)
  • Metaphysician Undercover
    12.3k
    Creation by a creator is efficient cause masquerading as something else.apokrisis

    Have you never heard of the concept of free will? This is a cause which is not an efficient cause.

    Sure, somehow there must be a "miraculous" connection if the story is going to work.apokrisis

    You can call free will miraculous if you want, I prefer to call it final cause.

    When our smartest modern metaphysician and the full weight of our highly successful physics community agree on something in terms of ontology, that seems a good reason to take it seriously, don't you think?apokrisis

    Where do you find this "smartest modern metaphysician"? If you mean Peirce, I can only take that as a joke.
  • apokrisis
    6.8k
    Have you never heard of the concept of free will? This is a cause which is not an efficient cause.Metaphysician Undercover

    I have explored that cultural fiction in great detail.

    You can call free will miraculous if you want, I prefer to call it final cause.Metaphysician Undercover

    No. I call it a cultural fiction.

    Where do you find this "smartest modern metaphysician"? If you mean Peirce, I can only take that as a joke.Metaphysician Undercover

    You have to pretend to be laughing. Otherwise you might have to reconsider your views.

    See how freewill works? It is mostly the power to say no even when by rights you should be saying yes. It is the way people justify their irrationality.
  • Metaphysician Undercover
    12.3k

    When you call free will a cultural fiction, I know you have very little metaphysical education. And that explains why you would say that Peirce is the smartest modern metaphysician, you really don't know what metaphysics is.

    See how freewill works? It is mostly the power to say no even when by rights you should be saying yes. It is the way people justify their irrationality.apokrisis

    There must be a reason for the existence of irrationality. If the concept of free will explains why there is such irrationality, then that's good evidence that free will is more than just a fiction.
  • T Clark
    13k
    Peter Hoffmann's Life's Ratchet is another good new read if you want to understand how informational mechanism can milk the tremendous free energy available at the molecular scale. Life goes from surprising to inevitable once you realise how strongly it is entropically favoured.apokrisis

    In the case of the spatio-temporal regulation of protein folding for instance, while the exact mechanisms are still being worked out, the dynamics have to do, ultimately, with physical forces acting on the amino acids - forces like energy and chemical differentials/gradients, hydrophobic and electrostatic forces, binding and bending energies, as well as ambient conditions like pH, temperature and ion concentration. As Peter Hoffmann puts it, "a large part of the necessary information to form a protein is not contained in DNA, but rather in the physical laws governing charges, thermodynamics, and mechanics. And finally, randomness is needed to allow the amino acid chain to search the space of possible shapes and to find its optimal shape." - The 'space of possible shapes' that Hoffmann refers to is the so-called 'energy landscape' that a protein explores while folding into its final shape, where it settles into an energy-optimal state after making its way through a few different possible configurations (different configurations 'cost' different amounts of energy, and cells regulate things so that the desired protein form settles into the 'right' energy state). (Quote from Hoffmann's Life's Ratchet).StreetlightX

    Note - This response is to a couple of threads that ended months ago. I just wanted to follow up.

    I finally finished "Life's Ratchet." It changed the way I think about living and non-living matter. You know - everything. It makes the transition from non-living to living seem, if not inevitable, at least unsurprising. This has only happened to me once or twice in my life - I learn something I had never conceived of before and immediately think "Of course that's how it works. It couldn't work any other way."

    Ideas that were new to me:
    • The molecular storm - I'm used to thinking of molecules bouncing around in a box like billiard balls. Instead, it's like 100 hurricanes and tornadoes blasting at the same time. And chemical reactions, life, have to take place in a complex series of transitions while it's blowing without being blasted apart.
    • Self-assembly - Of course DNA doesn't fully specify life. It gets things started by creating proteins and the rest happens all by itself in accordance with normal physical and chemical processes - protein folding, chemical reactions, enzymes.
    • Evolution - Hoffmann gives a plausible reason why it took 2.5 billion years for multi-cellular life to evolve after the first single-cell organisms appeared - during all that time, evolution was taking place inside the cells to develop the incredibly complex ecology of chemical and physical processes required to make multi-cellular life possible.
    • Molecular machines - I had assumed that term was just a metaphor for some particularly complex chemical reactions. No, it's not!!! These are actual, physical machines with springs, tanks, and pumps that run on tracks and work according to the same principles normal macro-scale machines do. We're used to talking about weird behavior at atomic scales. These are amazing, but completely not weird.
    • Energy transitions - I've always wondered how food gets transformed into action in a cell. One major mechanism is adenosine triphosphate releasing energy by kicking out a phosphate group. That energy is used to power molecular machines.

    Great book. Thanks for the reference.