Fractals always have some size. Even the simplest ones, like the Koch curve, start from some definite size of a simple line segment. — Agustino
Sure. To model, we need to start at some initial scale. My point was that log base e, Euler's number, shows how we can simply take "unit 1" as the place to start things.
It may seem like you always have to start your simulation with some definite value. But the maths itself abstracts away this apparent particularity by saying that whatever value you start at counts as 1. The analysis is dimensionless rather than dimensioned, even if we have to "stick in a number" to feed the recursive equation.
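To make that concrete, here is a minimal sketch (my own illustration in Python, not anything from the thread) of how the Koch curve's similarity dimension comes out the same whatever definite starting length you feed in - the starting unit cancels out of the log ratio:

```python
import math

def koch_dimension(initial_length, n):
    """Estimate the Koch curve's similarity dimension after n recursive steps.
    Each step replaces every segment with 4 segments 1/3 as long."""
    count = 4 ** n                        # number of segments after n steps
    size = initial_length / 3 ** n        # length of each segment
    # log(number of pieces) / log(magnification) -- the starting unit drops out
    return math.log(count) / math.log(initial_length / size)

# Whatever definite size we "stick in", the result is the same:
print(koch_dimension(1.0, 8))     # ≈ 1.2619, i.e. log 4 / log 3
print(koch_dimension(37.5, 8))    # ≈ 1.2619 again
```

The ratio reduces to log 4 / log 3 regardless of `initial_length`, which is the dimensionless point: the analysis only ever sees the scaling relation, never the particular unit.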
You have redefined the terms, but this redefinition does not save you from the requirement that there is a prior act to all potency (using these terms to mean what Aristotle meant by them). — Agustino
Nope. This is the big misunderstanding.
Sure, irregularity being constrained is what produces the now definite possibilities or degrees of freedom. Once a history has got going, vague "anythingness" is no longer possible. Anything that happens is by definition limited and so is characterised counterfactually. Spontaneity or change is always now in some general direction.
So there is potential in the sense of material powers or material properties - the things that shaped matter is liable to do (defined counterfactually in terms of what it is likewise not going to be doing).
But Aristotle tried to make sense of the bare potential of prime matter. As we know, that didn't work out so well.
Peirce fixes that by a logic of vagueness. Now both formal and material cause are what arise in mutual fashion from bare potential. They are its potencies. Before the birth of concrete possibility - the kind of historically in-formed potential that you have in mind - there was the pure potential which was a pre-dichotomised vagueness.
Prime mover and prime matter are together what would be latent in prime potential. Hence this being a triadic and developmental metaphysics - what Aristotle was shooting for but didn't properly bring off.
It's absurd to have a macro theory that cannot be shown to emerge from the micro level. — Agustino
You keep coming back to a need to believe in a concrete beginning. It is the presumption that you have not yet questioned in the way Peirce says you need to question.
Until you can escape that, you are doomed to repeat the same conclusions. But it's your life. As you say, engineering might be good enough for you. Metaphysics and the current frontiers of scientific theory may just not seem very important.
Yes, the phenomenon of buckling is more complicated than our lower bound calculations suggest. — Agustino
But you still do believe there is a concrete bottom level to these non-linear situations right? It's still absurd to suggest the emergent macro theory doesn't rest on a bed of definite micro level particulars?
I mean, drill down, and eventually you will find that you are no longer just coarse-graining the model. You are describing the actual grain on which everything rests?
People say that the storm in Brazil was caused by the flap of a butterfly wing in Maryland. And you accept it was that flap. The disturbance couldn't have been anything smaller, like the way the butterfly stroked its antenna or faintly shifted a leg?
I mean, deterministic chaos theory doesn't have to rely on anything like the shadowing lemma to underpin its justification of coarse-graining "all the way down"?
In other words, the maths of non-linearity works, to the degree it works, by coping with the reality that there is no actual concrete micro-level on which to rest. And that argues against the picture of physical reality you are trying to uphold.
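The sensitivity claim can be sketched with the standard logistic map (my own choice of toy example, not anything from the thread): a perturbation down near the limit of floating-point resolution - far "smaller" than any wing flap - is enough to send two deterministic trajectories their separate ways within a few dozen steps:

```python
def logistic(x, r=4.0):
    """One step of the logistic map x -> r*x*(1-x), a standard
    example of deterministic chaos at r = 4."""
    return r * x * (1 - x)

x, y = 0.4, 0.4 + 1e-15    # two starts differing at the 15th decimal place
for step in range(80):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.5:
        print(f"trajectories fully decorrelated by step {step}")
        break
```

Since the gap roughly doubles each step (the Lyapunov exponent at r = 4 is ln 2), the delay before a disturbance surfaces at the macro scale grows only logarithmically as the disturbance shrinks - there is no floor below which differences stay safely hidden.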
The beam buckles because of a "fluctuation" - another way of saying "for no discernible reason at all". Anything and everything could have been what tipped the balance. So the PNC, the principle of non-contradiction, fails to apply, and we should just accept that your micro-level simply describes the vagueness of unformed action.
Actually, real world engineering projects most often are overdesigned. — Agustino
I wonder why. (Well, I've already said why - creating a "safe" distance from fundamental uncertainty by employing informal or heuristic coarse-graining.)
Real world structures which do collapse or fail likely do so because they involve an upper bound method of calculation, and the lowest failure mechanism wasn't thought about or taken into account. — Agustino
Thanks for the examples, but I know more than a little bit about engineering principles. And you are only confirming my arguments about the reality that engineers must coarse-grain as best they can.