• apokrisis
    2.5k
    The laws of logic were produced, and developed by human beings.Metaphysician Undercover

    Sure we framed them to explain the world as we have found it. The deeper question is why that intelligible world exists at all. If the laws were merely social constructs, they would hardly hold a foundational place in our methods of reasoning.

    The claim that there was a time when the universe didn't consist of a collection of objects would need to be justifiedMetaphysician Undercover

    Fer fuck's sakes. If existence isn't eternal, it must have developed or been created. Being created doesn't work as that leads to infinite regress in terms of claims about first causes. So development is the metaphysical option worth exploring - rather than being pig-headed about it, as is your wont.

    And then cosmology gives good support to that metaphysical reasoning. Look back to the Big Bang and you don't see much evidence for the existence of a collection of objects.
  • Agustino
    8.4k
    As a dichotomous growth process, they directly model this issue of convergence towards a limit that I stressed in earlier posts.

    Think about the implications of that for a theory of cosmic origination. It argues that a world that can arise from a symmetry breaking - a going in both its dichotomous directions freely - does in fact have its own natural asymptotic cut off point. The Planck scale Big Bang is not a problem but a prediction. Run a constantly diverging process back in time to recover its initial conditions and you must see it converging at a point at the beginning of time.

    This has in fact been argued as a theorem in relation to Linde's fractal spawning multiverse hypothesis. So if inflation happens to be true and our universe is only one of a potential infinity, the maths still says the history of the multiverse must converge at some point at the beginning of time. It is a truly general metaphysical result.
    apokrisis
    Yes, that is indeed an interesting implication of any growth process that depends on symmetry breakings - it must ultimately reduce itself to a beginning point.

    Another way to illustrate this is how we derive the constant of growth itself - e. Run growth backwards and it must converge on some unit 1 process that started doing the growing. Thus what begins things has no actual size. It is always just 1 - the bare potential of whatever fluctuation got things started. So a definite growth constant emerges without needing any starting point more definite than a fleeting one-ness.apokrisis
    I don't follow how "what begins" has no actual size. Fractals always have some size. Even the simplest ones like Koch curve start from some definite size of a simple line segment. But the absence of a definite perimeter, combined with things like having no tangents at any point, makes such fractals strange mathematical objects, which may approximate some real objects, but not in this lack of definiteness.
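    The Koch curve makes both points concrete: the construction starts from a segment of definite size, yet each iteration replaces every segment with four segments a third as long, so the perimeter grows by a factor of 4/3 per step and has no finite limit even though the curve stays bounded. A minimal sketch:

```python
# Koch curve perimeter: each iteration replaces every segment with
# four segments, each one third as long, so the total length is
# multiplied by 4/3 per step and diverges without bound.

def koch_perimeter(n, initial_length=1.0):
    """Length of the Koch curve after n subdivision steps."""
    return initial_length * (4 / 3) ** n

print(koch_perimeter(0))    # 1.0 - the starting segment has a definite size
print(koch_perimeter(50))   # already over a million times the original
```

So the starting size is definite, but the "perimeter" is only ever a stage in a divergent process.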

    Now you are repeating what I have disputed. And I have provided the rationale for my position. So instead of just citing scholastic aristoteleanism to me, as if that could make a difference, just move on and consider my actual arguments.apokrisis
    Okay. But I don't see anything in your position that could elude what Aristotle has determined. You have redefined the terms, but this redefinition does not save you from the requirement that there is a prior act to all potency (using these terms to mean what Aristotle meant by them). As I said, it makes sense when you say that everything reduces to a primal fluctuation. I can understand that. But I cannot understand the movement from primal fluctuation to ontic vagueness - that sounds contradictory to me.

    It is also important to see that Peirce's mathematical conceptions are based on the duality of generality and vagueness. So you can both have a general continuum limit and also find that it has potential infinity in terms of its divisibility. In fact, you've got to have both.apokrisis
    Yeah, much of actual math is done this way. The difficulties of infinite divisibility and the like are avoided in practice through limit calculus. This is fine so long as you are aware that you're just doing math. Limit calculus lets you pass through an infinity of operations and arrive at a definite answer - in the cases that are convergent. Not all are, though, and the cases where problems arise in physics - such as the Big Bang singularity - are precisely those where the limits are divergent. Again, such issues illustrate discrepancies between mathematical models and reality.
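    The convergent/divergent contrast is easy to exhibit numerically - a sketch, using the Basel series as the convergent case and the harmonic series as the divergent one:

```python
def partial_sum(term, n):
    """Sum of term(k) for k = 1..n."""
    return sum(term(k) for k in range(1, n + 1))

# Convergent: sum of 1/k^2 settles towards pi^2/6 ~ 1.6449 (the Basel
# problem), so the limit operation delivers a definite answer.
print(partial_sum(lambda k: 1 / k**2, 100_000))

# Divergent: the harmonic series 1/k grows like ln(n) without bound,
# so no definite answer exists - the analogue of a singular limit.
for n in (100, 10_000, 1_000_000):
    print(n, partial_sum(lambda k: 1 / k, n))
```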

    And funnily enough, real space is like that. Just look at how we have to have the duality of general relativity and quantum mechanics to account for it fully. One describes the global continuity of the constraints, the other, the local infinite potential, the inherent uncertainty that just keeps giving.apokrisis
    Well, we're not sure, we have to wait for quantum gravity to be more fully developed to see what's what. If there is a quantum theory of gravity, then GR will be reduced to it, as would be natural, in my opinion. It's absurd to have a macro theory that cannot be shown to emerge from the micro level.

    And yet an engineer has a metaphysics. He believes in a world of clockwork Newtonian forces. That is the right maths. And on the whole it works because the universe - at the scale at which the engineer operates - is pretty much just "classical". There is no ontic vagueness to speak of.apokrisis
    No, I'm not sure that he believes in the "clockwork Newtonian universe". Depends on what you are engineering. Standard structures will be engineered according to the Newtonian clockwork view of the universe, because it's a close enough approximation - especially when you put factors of safety on top of it.

    But there are many non-standard structures - suspension bridges, skyscrapers in earthquake-prone regions, shell structures and the like - which are definitely not engineered according to clockwork Newtonian views. These structures are very difficult to analyse and can be very sensitive to imperfections. They display dynamic, non-linear behaviour under loads, which is harder to analyse because positive feedback loops can develop between an acting load and the response of the structure, and these can lead to collapse - the Tacoma Narrows bridge, the Fokker monoplanes, and the like.

    These structures are generally analysed by computers under different scenarios, with different possible failure mechanisms taken into consideration. Evolutionary algorithms may also be used to determine the right values for certain parameters. Determining the failure mechanisms that should be tested is largely about intuition though ;) .

    Of course, the beam will buckle unpredictably. An engineer has to know the practical limits of his classically-inspired mathematical tools. The engineer will say in theory, every micro-cause contributing to the failure of the beam could be modelled by sufficiently complex "non-linear" equations. The issue of coarse graining - the fact that eventually the engineer will insert himself into the modelling as the observer to decide when to just average over the events in each region of space - is brushed off as a necessary heuristic and not an epistemic embarrassment.apokrisis
    Yes, the phenomenon of buckling is more complicated than our lower bound calculations suggest. Non-linear effects do start to play a role, and there are other mechanisms too - in reinforced concrete beams, for example, a phenomenon known as arching can develop, making the behaviour of the beam plastic and permitting it to withstand more load than predicted.

    Which is why real world engineering projects fail so regularlyapokrisis
    Actually, real world engineering projects most often are overdesigned. We just hear about the failures; the many successes are forgotten. When you use lower bound approaches combined with factors of safety of 1.5 for structures, and up to 3-4 sometimes for foundations, you are bound to overdesign to a certain extent. Basically, whatever answer you calculate, you multiply by the factor of safety to really make sure it's safe - and you are pretty much forced to do so by legislation in many countries, simply because failure can lead to death. So better safe than sorry - better to be humble and expect that you don't know than to have false pretences to knowledge.

    Real world structures which do collapse or fail likely do so because they involve an upper bound method of calculation, and the lowest failure mechanism wasn't thought about or taken into account. For example, the World Trade Center towers were actually designed to withstand a plane impact. But the actual failure mechanism was never taken into account. Nobody considered that if the plane strikes at the right height of the building, the fire can progressively cause steel floors to collapse, and once one floor collapses, the effective height of the steel columns doubles, which means the buckling load becomes 1/4 of what it was before (not even taking into account the effect of temperature rise on the columns). If more floors collapse, the demolition-like collapse of the World Trade Center becomes inevitable as the main columns buckle and the top part of the building comes crashing down on the bottom part. Even with factors of safety, this mechanism would lead to collapse. Nobody thought about it, and the structure failed.
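    The column arithmetic here follows from Euler's buckling formula, in which the critical load scales as 1/L² in the effective length - so a doubled unsupported height quarters the capacity. A sketch (the material and section values are illustrative only, not actual WTC figures):

```python
import math

def euler_buckling_load(E, I, L, K=1.0):
    """Critical load of an ideal column: P_cr = pi^2 * E * I / (K*L)^2.

    E: Young's modulus (Pa), I: second moment of area (m^4),
    L: unsupported length (m), K: effective-length factor.
    """
    return math.pi**2 * E * I / (K * L) ** 2

# Illustrative steel column (E ~ 200 GPa, I ~ 1e-4 m^4):
E, I = 200e9, 1e-4
P_one_floor = euler_buckling_load(E, I, L=4.0)
P_two_floors = euler_buckling_load(E, I, L=8.0)   # floor lost: length doubles

print(P_one_floor / P_two_floors)   # 4.0 - capacity drops to a quarter
```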

    There's a lot we still have to learn. When we do a controlled demolition of a bridge, we often load it first to see at what load it actually fails compared to what we predict. They often fail at higher loads - there's a lot left to understand about structures.
  • apokrisis
    2.5k
    Fractals always have some size. Even the simplest ones like Koch curve start from some definite size of a simple line segment.Agustino

    Sure. To model, we need to start at some initial scale. My point was that the growth constant e - Euler's number - shows how we can just start with "unit 1" as the place to start things.

    It may seem like you always have to start your simulation with some definite value. But actually the maths itself abstracts away this apparent particularity by saying whatever value you start at, that is 1. The analysis is dimensionless rather than dimensioned. Even if we have to "stick in a number" to feed the recursive equation.

    You have redefined the terms, but this redefinition does not save you from the requirement that there is a prior act to all potency (using these terms to mean what Aristotle meant by them).Agustino

    Nope. This is the big misunderstanding.

    Sure, irregularity being constrained is what produces the now definite possibilities or degrees of freedom. Once a history has got going, vague "anythingness" is no longer possible. Anything that happens is by definition limited and so is characterised counterfactually. Spontaneity or change is always now in some general direction.

    So there is potential in the sense of material powers or material properties - the things that shaped matter is liable to do (defined counterfactually in terms of what it is likewise not going to do).

    But Aristotle tried to make sense of the bare potential of prime matter. As we know, that didn't work out so well.

    Peirce fixes that by a logic of vagueness. Now both formal and material cause are what arise in mutual fashion from bare potential. They are its potencies. Before the birth of concrete possibility - the kind of historically in-formed potential that you have in mind - there was the pure potential which was a pre-dichotomised vagueness.

    Prime mover and prime matter are together what would be latent in prime potential. Hence this being a triadic and developmental metaphysics - what Aristotle was shooting for but didn't properly bring off.

    It's absurd to have a macro theory that cannot be shown to emerge from the micro level.Agustino

    You keep coming back to a need to believe in a concrete beginning. It is the presumption that you have not yet questioned in the way Peirce says you need to question.

    Until you can escape that, you are doomed to repeat the same conclusions. But it's your life. As you say, engineering might be good enough for you. Metaphysics and the current frontiers of scientific theory may just not seem very important.

    Yes, the phenomenon of buckling is more complicated than our lower bound calculations suggest.Agustino

    But you still do believe there is a concrete bottom level to these non-linear situations right? It's still absurd to suggest the emergent macro theory doesn't rest on a bed of definite micro level particulars?

    I mean, drill down, and eventually you will find that you are no longer just coarse-graining the model. You are describing the actual grain on which everything rests?

    People say that the storm in Brazil was caused by the flap of a butterfly wing in Maryland. And you accept it was that flap. The disturbance couldn't have been anything smaller, like the way the butterfly stroked its antenna or faintly shifted a leg?

    I mean deterministic chaos theory doesn't have to rely on anything like the shadowing lemma to underpin its justification of coarse graining "all the way down"?

    In other words, the maths of non-linearity works, to the degree it works, by coping with the reality that there is no actual concrete micro-level on which to rest. And that argues against the picture of physical reality you are trying to uphold.
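    The sensitivity being appealed to here is easy to exhibit. In the chaotic logistic map, a perturbation far below any practical measurement grain - 1e-12 here - is amplified to order one within a few dozen iterations, so no finite level of micro-detail fixes the outcome. A minimal sketch:

```python
def max_divergence(x0, eps=1e-12, r=4.0, steps=60):
    """Largest gap between two logistic-map orbits started eps apart."""
    x, y = x0, x0 + eps
    max_gap = 0.0
    for _ in range(steps):
        x = r * x * (1 - x)   # chaotic at r = 4
        y = r * y * (1 - y)
        max_gap = max(max_gap, abs(x - y))
    return max_gap

# A 1e-12 difference in the initial condition grows to a macroscopic gap:
print(max_divergence(0.2))   # order 1, not order 1e-12
```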

    The beam buckles because of a "fluctuation". Another way of saying "for no discernible reason at all". Anything and everything could have been what tipped the balance. So the PNC fails to apply, and we should accept that your micro-level simply describes the vagueness of unformed action.

    Actually, real world engineering projects most often are overdesigned.Agustino

    I wonder why. (Well, I've already said why - creating a "safe" distance from fundamental uncertainty by employing informal or heuristic coarse-graining.)

    Real world structures which do collapse or fail likely do so because they involve an upper bound method of calculation, and the lowest failure mechanism wasn't thought about or taken into account.Agustino

    Thanks for the examples, but I know more than a little bit about engineering principles. And you are only confirming my arguments about the reality that engineers must coarse-grain over the best way they can.
  • Agustino
    8.4k
    Sure. To model, we need to start at some initial scale. My point was that the growth constant e - Euler's number - shows how we can just start with "unit 1" as the place to start things.

    It may seem like you always have to start your simulation with some definite value. But actually the maths itself abstracts away this apparent particularity by saying whatever value you start at, that is 1. The analysis is dimensionless rather than dimensioned. Even if we have to "stick in a number" to feed the recursive equation.
    apokrisis
    Okay, but I fail to see how this changes anything :s - I mean sure, you can use whatever number system you want, so effectively you always start with "unit 1" if that's what you want. But how does this change the fact that there is a definite size to this beginning, regardless of the number/measuring system you choose, and hence of what you use as the standard for 1 unit?

    Sure, irregularity being constrained is what produces the now definite possibilities or degrees of freedom. Once a history has got going, vague "anythingness" is no longer possible.apokrisis
    Why should I think that this vague "anythingness" was ever possible?

    But Aristotle tried to make sense of the bare potential of prime matter. As we know, that didn't work out so well.apokrisis
    Why do you say it didn't work out? Aristotle showed that prime matter cannot exist in itself. But prime matter does exist in the sense of the underlying potentiality for anything already actual to be other than it is - in other words, the radical potentiality for a chair to change into an elephant, as an example.

    Now both formal and material cause are what arise in mutual fashion from bare potential. They are its potencies.apokrisis
    So is "bare potential" actual?

    Prime mover and prime matter are together what would be latent in prime potential.apokrisis
    :s - if Prime Mover is the potentiality of something else, then it is not Prime Mover anymore. Prime Mover would be whatever lies behind and is pure act.

    You keep coming back to a need to believe in a concrete beginning. It is the presumption that you have not yet questioned in the way Peirce says you need to question.apokrisis
    Yes, because other beginnings are logically contradictory and impossible, just like square circles are impossible.

    But you still do believe there is a concrete bottom level to these non-linear situations right?apokrisis
    Yes, I do believe there is a non-contradictory underlying reality.

    The beam buckles because of a "fluctuation". Another way of saying "for no discernible reason at all". Anything and everything could have been what tipped the balance.apokrisis
    No, absolutely not. The phenomenon of buckling in these non-linear ways is most commonly seen in shell structures. What happens is that there are imperfections in the structure (it's not perfectly round, etc.), and these tiny imperfections reduce the failure load significantly. They can be ignored for most structures, but things like shell structures are imperfection-sensitive. So there is an actual cause for why they buckle - just that we cannot pin-point it. It's epistemologically, but not ontologically vague. This is exactly how we were taught it at university, and how it makes sense. If a professor said that the structure is ontologically vague, and that's why there is no discernible reason for buckling, we wouldn't have understood much of anything, because it doesn't make much sense :s - it's illogical. How can you have an illogical metaphysics?

    Oh and by the way, the above is all tested. We can engineer structures to have imperfections at certain locations, and then test them - and guess what, we see that they fail where the imperfections are. So clearly it's nothing to do with some vagueness, fluctuations and the like...
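    This imperfection-sensitivity has a classical quantitative form: Koiter's asymptotic analysis gives knockdown laws in which the attainable load falls from the perfect-shell critical load by a fractional power of the imperfection amplitude, so tiny imperfections cost a disproportionate share of capacity. A sketch using the 2/3-power form, with an illustrative (not measured) coefficient:

```python
def knockdown_factor(xi, c=1.5):
    """Fraction of the perfect-shell buckling load actually reached.

    Koiter-style 2/3-power law: lambda/lambda_c ~ 1 - c * xi**(2/3),
    where xi is a dimensionless imperfection amplitude and c is a
    structure-specific coefficient (c = 1.5 is illustrative only).
    """
    return max(0.0, 1.0 - c * xi ** (2 / 3))

for xi in (0.0, 0.001, 0.01, 0.1):
    print(xi, round(knockdown_factor(xi), 3))
# With these illustrative numbers, an imperfection amplitude of 0.01
# already removes ~7% of the capacity; the infinite slope at xi = 0 is
# why shells are so sensitive to even tiny imperfections.
```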

    (Well, I've already said why - creating a "safe" distance from fundamental uncertainty by employing informal or heuristic coarse-graining.)apokrisis
    Right, or rather creating a safe distance from the area that we cannot know very well, since our models and theories do not permit us to know it. That's also a possibility, one that seems to be more logically coherent.
  • apokrisis
    2.5k
    Okay, but I fail to see how this changes anythingAgustino

    So you don't understand dimensionless quantities. Cool. https://simple.m.wikipedia.org/wiki/Dimensionless_quantity

    So there is an actual cause for why they buckle - just that we cannot pin-point it. It's epistemologically, but not ontologically vague.Agustino

    Whoosh. Ideas just go over your head.

    Imperfections are just another name for material accidents or uncontrolled fluctuations. The argument is about why the modelling of reality might be coarse graining all the way down. The reason is that imperfection or fluctuation can only be constrained, not eliminated. Hence this being the ontological conclusion that follows from epistemic observation.

    Our models that presume a world that is concrete at base don't work. In real life, we have to have safety margins. Even then, fluctuations of any scale are possible in true non-linear systems with powerlaw statistics. So we can draw our ontological conclusions from a real world failure.

    in other words, the radical potentiality for a chair to change into an elephant, as an example.Agustino

    Now you are really just making shit up.
  • Agustino
    8.4k
    So you don't understand dimensionless quantities. Cool. https://simple.m.wikipedia.org/wiki/Dimensionless_quantityapokrisis
    No, I do understand dimensionless quantities quite well, thank you. What I don't understand is your silly metaphorical fancy of treating a dimensionless quantity as a "1 unit", which is then somehow also a "bare potential" :s .

    Imperfections are just another name for material accidents or uncontrolled fluctuations.apokrisis
    Material accidents are not "uncontrolled fluctuations" :s

    The reason is that imperfection or fluctuation can only be constrained, not eliminated.apokrisis
    That would be a methodological limitation of our manufacturing techniques, it would definitely not be an ontological limitation of reality itself...

    Hence this being the ontological conclusion that follows from epistemic observation.apokrisis
    Right, so you are willing to accept ontological contradictions. Why aren't you going to accept square circles then, and other contradictions? Maybe at the level of those fluctuations squares and circles aren't all that different anymore - there's some vague square circles :s

    Now you are really just making shit up.apokrisis
    :-} The point there was simply that any object has the potential to become another - the elephant is made of atoms, as is the chair; now, supposing there are sufficient atoms in one as in the other, all it would take would be a rearrangement of them - in other words, a new form. That's what Aristotle meant by prime matter - but prime matter only applies to already actual objects - it doesn't exist in-itself, abstracted away from such objects.
  • apokrisis
    2.5k
    No, I do understand dimensionless quantities quite well,Agustino

    Clearly you just don't.

    a dimensionless quantity (or more precisely, a quantity with the dimensions of 1) is a quantity without any physical units and thus a pure number.

    You are convincing me it is essentially pointless discussing this with you as you are either just being pig-headed or you lack the necessary understanding of how maths works.

    Material accidents are not "uncontrolled fluctuations" :sAgustino

    Just stop a minute and notice how you mostly wind up making simple negative assertions in the fashion of an obstinate child. No it ain't, no it ain't. Then throw in an emoticon as if your personal feelings are what concludes any argument.

    I find replying to you quite a chore. You try to close down discussions while pretending to be continuing them with lengthy responses. It is like hoping for a tennis match with someone who just wants to spend forever knocking up.

    That would be a methodological limitation of our manufacturing techniques, it would definitely not be an ontological limitation of reality itself...Agustino

    So you claim in unsupported fashion, ignoring the supported argument I just made in the other direction.

    Right, so you are willing to accept ontological contradictions. Why aren't you going to accept square circles then, and other contradictions? Maybe at the level of those fluctuations squares and circles aren't all that different anymore - there's some vague square circlesAgustino

    And your problem is?

    Vagueness is as much circular as it is square. The PNC does not apply. Just like it says on the box.

    The point there was simply that any object has the potential to become another - the elephant is made of atoms, as is the chair; now, supposing there are sufficient atoms in one as in the other, all it would take would be a rearrangement of them - in other words, a new form.Agustino

    Ah right. Atoms. :s

    Oh look, I just disproved your argument with an emoticon.

    Get back to me when you have figured out that atoms are a coarse grain notion according to modern physics.
  • Metaphysician Undercover
    2.7k
    Sure we framed them to explain the world as we have found it. The deeper question is why that intelligible world exists at all.apokrisis

    Correct, we find the existence of an intelligible world, and the laws of logic are developed to help us understand that world. We have no reason to believe that the world ever was anything other than intelligible, because experience demonstrates to us that it is intelligible.

    If the laws were merely social constructs, they would hardly hold a foundational place in our methods of reasoning.apokrisis

    The laws are expressed in words, therefore they are human constructs. I don't see how they could be anything other than that. They are foundational in the sense that they are created to support conceptual structures, just like the foundations of buildings are created to support structures. I don't see what you are trying to claim. The laws of logic might in some way represent some real, independent aspects of the universe, or be a reflection of the reality of the universe, but these laws are still artificial, human constructs which reflect whatever that reality is.

    That the laws of logic work, is evidence that there is such a reality. But to proceed from this, to the assumption that there was a time when there was not such a reality, is what I see as irrational. The passing of time itself is an intelligible order, so to claim that there was a time when there was no intelligible order, is irrational.

    Fer fuck's sakes. If existence isn't eternal, it must have developed or been created. Being created doesn't work as that leads to infinite regress in terms of claims about first causes. So development is the metaphysical option worth exploring - rather than being pig-headed about it, as is your wont.apokrisis

    You have reduced the existence of the universe to three options, eternal, created, or developed. I'm sure an imaginative mind could come up with more options, but I'll look at these three.

    The infinite regress you refer to is only the result of assuming efficient cause, and this infinite regress is no different from "eternal". The first cause of intention of a creator, which is commonly referred to as "final cause", does not produce an infinite regress. It is introduced as an alternative to infinite regress. The act of the free will is carried out for a purpose, and that purpose is the final reason; there is no infinite regress so long as you respect the finality of purpose. So your claim that creation leads to infinite regress is false.

    The "development" of a universe with intelligible order, emerging from no order, does not make any sense without invoking a developer. So this option leads to the need to assume a creator as well. Your mistake is that you attempt to remove the developer from the development, so you end up with irrational nonsense.

    And then cosmology gives good support to that metaphysical reasoning. Look back to the Big Bang and you don't see much evidence for the existence of a collection of objects.apokrisis

    The Big Bang theory only demonstrates that current, conventional theories in physics are unable to understand the existence of the universe prior to a certain time. The Big Bang theory is just the manifestation of inadequate physical theories. It says very little about the actual universe except that the universe is something our theories are incapable of understanding. To attribute the "vagueness" which results from the inadequacies of one's theories to the thing the theories are being applied to, in an attempt to understand it, is a category mistake.
  • Agustino
    8.4k
    Okay, whatever. You're not actually interested in having your views questioned and thinking through them honestly.

    And your problem is?

    Vagueness is as much circular as it is square. The PNC does not apply. Just like it says on the box.
    apokrisis
    Yeah, I find that contradictory, and contradictions are by definition impossible. If you will allow contradictions in your system of thought, there's no way to make head or tail of anything anymore - that's completely irrational.
  • Metaphysician Undercover
    2.7k
    You're not actually interested in having your views questioned and thinking through them honestly.Agustino

    That's the conclusion I came to a few days back. Apokrisis is very convinced that the position expressed is the correct one. So no matter how many times the illogic and irrationality of that position are pointed out, apokrisis just continues to assert: this is the way things are because I adhere to Peirce's principles. There is a complete disrespect for all the problems which are pointed out. It is an act of self-imposed ignorance.
  • apokrisis
    2.5k
    You're not actually interested in having your views questioned and thinking through them honestly.Agustino

    I'm waiting for you to get the ball over the net. I see a lot of swishing and grunting but not much result.

    To remind you of the essence of where the argument had got to, your own point about engineering is that it can't trust perfect world maths. Even statistical methods are risky as they still coarse grain over the metaphysical reality.

    A linear model of average behaviour is going to be fundamentally inaccurate if the average behaviour is in fact non-linear or powerlaw. At least a Gaussian distribution does have a mean. A powerlaw (or fractal) distribution has exceptions over all scales.
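    The contrast can be simulated directly. Sample means from a Gaussian settle down as the sample grows (the central limit theorem at work), while draws from a heavy-tailed power law with alpha < 1 - which has no finite mean at all - never settle, because exceptions at every scale keep entering the sample. A sketch:

```python
import random

random.seed(1)   # reproducible illustration

def sample_mean(draw, n):
    """Mean of n independent draws from the given sampler."""
    return sum(draw() for _ in range(n)) / n

gauss = lambda: random.gauss(0.0, 1.0)
pareto = lambda: random.paretovariate(0.8)   # alpha = 0.8: infinite mean

for n in (1_000, 10_000, 100_000):
    print(n, round(sample_mean(gauss, n), 4), round(sample_mean(pareto, n), 1))
# The Gaussian column stays near 0; the Pareto column is dominated by
# rare huge draws and does not settle on any value.
```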

    This gets quite critical where engineering has to engage with real-life dissipative systems like plate tectonics. Earthquake building codes and land-use planning really have to do some smart thinking about the true nature of natural hazard risks.

    So how engineering papers over the cracks in mathematical modelling is important here. That is a heuristic tale in itself. Eventually even statistical methods become so coarse grained they no longer offer a concrete mean. The central limit theorem no longer applies.

    But I was focused on the foundations of the modelling - the starting presumption that there is some definite micro-level of causality. My argument is that it is coarse graining all the way down. What we find as an epistemic necessity is also an ontic necessity.

    Now you can argue that the mathematical presumption of micro-level counterfactual definiteness - atomism - is in fact the correct ontology here. Great. But it is mysterious how you don't pick up the contradiction between you saying that the presumptions of maths can't be trusted epistemically, and yet those very same presumptions must be ontologically true.

    Your position metaphysically couldn't be more arse about face - the technical description of naive realism.

    So really, until you understand just how deeply confused you are about your own argument, it is hard to have much of a discussion.
  • apokrisis
    2.5k
    At least you stay focused on the matter at hand. You aren't just seeking to divert the discussion to safe irrelevancies.

    We don't have to agree. And where would be the fun if we did?
  • Metaphysician Undercover
    2.7k

    I know, that's just the way it is.
  • apokrisis
    2.5k
    The first cause of intention of a creator, which is commonly referred to as "final cause", does not produce an infinite regress.Metaphysician Undercover

    Creation by a creator is efficient cause masquerading as something else. It doesn't offer a causal explanation because if creations demand a creator, who creates the creator? The infinite regress is just elevated to a divine plane of being.

    And it doesn't even explain how the wishes of a supernatural being can get expressed as material events. Sure, somehow there must be a "miraculous" connection if the story is going to work. But there just isn't that explanation of how it does work.

    So anthropomorphic creators fail both to explain their own existence and how they achieve anything material.

    Yes, I know that this then gives rise to thickets of theological boilerplate to cover over the essential lack of any explanation. But I'm saying let's cut the bullshit.

    But to proceed from this, to the assumption that there was a time when there was not such a reality, is what I see as irrational.Metaphysician Undercover

    So causal stories of development and evolution are irrational. Claims of brute existence are rational. Gotcha.

    The "development" of a universe with intelligible order, emerging from no order, does not make any sense without invoking a developer.Metaphysician Undercover

    You mean constraints? Those things which could emerge due to development?

    Why revert to talking in terms of efficient cause - the developer - when the missing bit of the puzzle is the source of the finality? You already agreed that efficient causes only result in infinite regress.

    The Big Bang theory only demonstrates that current, conventional theories in physics are unable to understand the existence of the universe prior to a certain time.Metaphysician Undercover

    You mean where physics has got to is knowing that a Newtonian notion of time has to be inadequate. That is the new big project. Learning to understand time as a thermal process.

    The search for a final theory of quantum gravity is the search for how time and space could emerge as constraints to regulate quantum fluctuations and produce a Universe that is asymptotically classical.

    So exactly what I've been arguing. And what Peirce foresaw in his metaphysics.

    When our smartest modern metaphysician and the full weight of our highly successful physics community agree on something in terms of ontology, that seems a good reason to take it seriously, don't you think?

    (I realise you will reply, nope it's irrational, while Agustino eggs you on from the sidelines with some frantic emoticon eye-rolling.)
  • Metaphysician Undercover
    2.7k
    Creation by a creator is efficient cause masquerading as something else.apokrisis

    Have you never heard of the concept of free will? This is a cause which is not an efficient cause.

    Sure, somehow there must be a "miraculous" connection if the story is going to work.apokrisis

    You can call free will miraculous if you want, I prefer to call it final cause.

    When our smartest modern metaphysician and the full weight of our highly successful physics community agree on something in terms of ontology, that seems a good reason to take it seriously, don't you think?apokrisis

    Where do you find this "smartest modern metaphysician"? If you mean Peirce, I can only take that as a joke.
  • apokrisis
    2.5k
    Have you never heard of the concept of free will? This is a cause which is not an efficient cause.Metaphysician Undercover

    I have explored that cultural fiction in great detail.

    You can call free will miraculous if you want, I prefer to call it final cause.Metaphysician Undercover

    No. I call it a cultural fiction.

    Where do you find this "smartest modern metaphysician"? If you mean Peirce, I can only take that as a joke.Metaphysician Undercover

    You have to pretend to be laughing. Otherwise you might have to reconsider your views.

    See how free will works? It is mostly the power to say no even when by rights you should be saying yes. It is how people justify their irrationality.
  • Metaphysician Undercover
    2.7k

    When you call free will a cultural fiction, I know you have very little metaphysical education. And that explains why you would say that Peirce is the smartest modern metaphysician: you really don't know what metaphysics is.

    See how freewill works? It is mostly the power to say no even when by rights you should be saying yes. It is the way people justify their irrationality.apokrisis

    There must be a reason for the existence of irrationality. If the concept of free will explains why there is such irrationality, then that's good evidence that free will is more than just a fiction.
