Comments

  • Maintaining interest in the new 'private' space race.
    I’m questioning your apparent assumption that it would be an attractive enough proposition for people to pay their way there. Even if commercialisation lowered the price, why would anyone choose to go there as a place to live?

    Just list the advantages you are imagining. What are they?
  • Maintaining interest in the new 'private' space race.
    Yeah, and if everything comes down to a matter of what provides the most amount of utility to me, we would all be heroin addicts, yeah?Posty McPostface

    I have no interest in being a heroin addict. But if I had to choose that or being shipped out to a Mars colony for life, then heroin does seem the rosier option.

    I think you’ve fallen for some romantic notion about space travel - that it somehow represents humanity’s best side. But exploration is just the precursor to exploitation. It isn’t noble even if it makes sense to big up those willing to take a risk on behalf of the masses.

    [edit: On second thoughts I’d rather be on Mars than be a heroin addict. The idea of being slave to an addiction that leaves you befuddled in fact has less than zero appeal. Your claim that heroin has any utility, except as pain relief, doesn’t fly with me personally.]
  • Maintaining interest in the new 'private' space race.
    Governments represent national interests. Commerce cashes in on individual desires. The real question is why the heck would anyone want to live on the Moon, or Mars, or anywhere remote and inhospitable? Is there either some national interest or some individual desire?

    The moon race was a result of military self-interest and the assertion of national dominance. The US had every reason to plant its flag in the Sea of Tranquility. But as soon as it had done that, the value of the gesture was over. The US didn’t need a single further Apollo mission. Rocketry had been perfected to the point needed to rain nuclear warheads down on any point of the planet. Colonising the Moon, or heading on to Mars, was a crazy waste of money from the point of view of furthering any national interest. And still is.

    Then private space travel may be a rich person’s thrill ride. But rich people aren’t normally explorers or hermits. They would want to get home to their luxuries after a few weeks. You’d have to be an oddball to want to live on another planet. It’d be the same as living in the middle of a desert, on top of a mountain, or down in Antarctica. All those are fun to visit. But hardly desirable residences. The commercial real estate opportunities of the Moon or Mars would be slimmer still. So unless it was all about mining, what could pay for it as more than a token kind of business?
  • I am an Ecology
    So I can add to my apokrisis dictionary: what's a vague-crisp distinction when it's at home? And what's the epistemic cut?fdrake

    Vagueness is that to which the principle of non-contradiction fails to apply. It is ultimate ambiguity in that it is neither a something nor nothing. It is a state of radical indeterminism. The crisp would then be its matching complementary opposite. It would be the absolute determinate, the definite and certain. The PNC would be in full effect.

    Imagine being in a boat on a lake drifting in a fog. Whether you were moving somewhere or going nowhere would be indeterminable. There just wouldn’t be a definite answer either way. But then you suddenly bump the shore. Now you definitely know. Well, either it is now definitely the case you drifted in the shore’s direction or the shore moved and managed to bump into you.

    The epistemic cut is Howard Pattee’s term for the semiotic modelling relation that is the basis of life and mind. He was drawing on von Neumann’s theory of self-reproducing automata to talk about the necessary division between a dynamical system and its symbolically-encoded self-description.

    So it is about the separation between observer and observables, laws and initial conditions, software and hardware, genes and metabolism, etc. Or in general, our metaphysical distinction between rate-dependent dynamics and rate-independent information.

    See - https://www.informatics.indiana.edu/rocha/publications/pattee/pattee.html and
    http://www.academia.edu/863864/The_physics_of_symbols_and_the_evolution_of_semiotic_controls

    For instance, Pattee captures the strangely hybrid metaphysics of the statistical view rather nicely....

    There has always been an apparent paradox between the concept of universal physical laws and semiotic controls. Physical laws describe the dynamics of inexorable events, or as Wigner expresses it, physical explanations give us the impression that events ". . . could not be otherwise." By contrast, the concepts of information and control give us the impression that events could be otherwise, and the well-known Shannon measure of information is just the logarithm of the number of other ways.

    One root of this paradox is the fact that the formulation of physical laws depends fundamentally on the concepts of energy, time, and rates of change, whereas information measures and the syntax of formal languages and semiotic controls are independent of energy, time, and rates of change. A second root of the paradox is that fundamental physical laws, as they are described mathematically, are deterministic and time-symmetric (reversible), whereas informational concepts like detection, observation, measurement, and control are described as statistical and irreversible.

    Perhaps the deepest root of the problem, however, is the conceptual incompatibility of the concepts of determinism and choice, a paradox that has existed since the earliest philosophers. The modern attempts in physics to live with this paradox require introducing statistical concepts that allow alternatives into the framework of physical laws by reinterpreting the essential distinction between the laws themselves that describe all possible alternatives and the initial conditions that determine one particular case. Statistical physics accepts the inexorability of the laws, but assumes that virtual alternatives can exist in the microscopic initial conditions.

    One measure of the alternatives is the entropy. Thus, we create imaginary statistical ensembles of systems which all follow the same dynamical laws, but that have different sets of initial conditions. These virtual microscopic states are restricted only by statistical postulates and their consistency with macroscopic state variables.

    A modification of this classical view by Born points out that initial conditions of even one particle can never be measured with formal precision, and therefore even the classical laws of motion can predict only probability distributions for trajectories. Only when a new measurement is made can this distribution be altered.

    The fact remains, however, that all our formal semiotic descriptions and computations, whether we interpret them as probabilistic, statistical, or fuzzy, are in practice assumed to be manipulated by crisp, strictly deterministic rules, even though physical laws require the execution of semiotic rules to be stochastic events.

    The physics of symbols and the evolution of semiotic controls - 1996

    Here is a more recent quote where Pattee makes a full-fledged connection to Peircean semiotics...

    A description requires a symbol system or a language. Functionally, description and construction correspond to the biologists’ distinction between the genotype and phenotype. My biosemiotic view is that self-replication is also the origin of semiosis.

    I have made the case over many years (e.g., Pattee, 1969, 1982, 2001, 2015) that self-replication provides the threshold level of complication where the clear existence of a self or a subject gives functional concepts such as symbol, interpreter, autonomous agent, memory, control, teleology, and intentionality empirically decidable meanings. The conceptual problem for physics is that none of these concepts enter into physical theories of inanimate nature.

    Self-replication requires an epistemic cut between self and non-self, and between subject and object.

    Self-replication requires a distinction between the self that is replicated and the non-self that is not replicated. The self is an individual subject that lives in an environment that is often called objective, but which is more accurately viewed biosemiotically as the subject’s Umwelt or world image.

    This epistemic cut is also required by the semiotic distinction between the interpreter and what is interpreted, like a sign or a symbol. In physics this is the distinction between the result of a measurement – a symbol – and what is being measured – a material object.

    I call this the symbol-matter problem, but this is just a narrower case of the classic 2500-year-old epistemic problem of what our world image actually tells us about what we call the real world.

    http://www.informationphilosopher.com/solutions/scientists/pattee/
  • Level III Multiverse again.
    It still has to start off homogeneous and thermalised at the small scale of the initial conditions. If it was patchy at the start, it couldn’t now be nearly flat. The CMB would not look homogeneous and isotropic.
  • Level III Multiverse again.
    The cosmological principle states that each constant-time hypersurface of the universe ('this spacetime') is homogeneous and isotropic at the large scale.andrewk

    Aren't you neglecting that the matter density must be uniform? You have to count the contents too. Spacetime won't be flat unless the matter is presumed to be evenly spread.

    The Wiki article on the cosmo principle does note that the sun is different from the earth, so that the cosmo principle doesn't apply at such small scales.fishfry

    And remember that Linde's eternal inflation would presume that each bubble universe would start off at a Planck-scale energy density and so the initial state would be a relativistic gas, a quark-gluon hot soup. So the material content would be at thermal equilibrium. The only fluctuations - the seed-forming inhomogeneities that result in the later gravitational/material structure - would be thermal quantum ones.

    So both protons and electrons, stars and galaxies, are local inhomogeneities that pop out way after any such structure has been washed clean by an initial thermal equilibration.

    Of course that then is a constraint on the odds of the history of a universe actually repeating "particle for particle". However no one wants to talk about the real combinatorial issues here. :)
  • I am an Ecology
    Sorry fdrake, but I don't get where this is going. Your responses are vague as if you are only intent on creating some endless descent into technicalities with no finishing line. If you don't signal what you agree with, then I'm just guessing at where any useful disagreement lies.

    Do you want to have a go at summing up what you think has been revealed to be essentially wrong about my general metaphysical approach here? What would be the core disagreement in terms of orientation?

    I explained for instance that a degree of freedom is a placeholder for the brute claim to be able to measure "actions with directions". You replied, hurrah, you were right that it is a placeholder. But then didn't comment at all on the kind of placeholder I said it was.

    Then again, I specified that we find various notions of "actions with directions" being counted. Degrees of freedom can be decomposed into various more qualitative or contextual notions, like "work", "disorder", "uncertainty". Once more, no comment on whether you agree or disagree.

    Nor will you tie anything back to my original reply to the OP - my mention of Salthe/Ulanowicz's lifecycle analysis and its applicability to political theory. A metric like ascendency tries to pick up on something even more subtle than the usual dissipative structure story.

    Degrees of freedom in this context are the overhead that a living system needs to hold in reserve so as to be able to adapt to perturbation. An organism (or society) can't afford to spend all its entropic "income" on here and now maximal growth. It wants a reserve of fat, a reserve of degrees of freedom, to deal with unexpected challenges.

    My point is that "degrees of freedom" is a useful generic term because it is dichotomous to "constraints", it signals "whatever is definitely countable in terms of some parameterised theory", and it is undefined enough to encompass an ever branching family of thermodynamically related thought - as in capturing this notion of a reserve of adaptive capacity. So I don't use terminology in some unthinking handwaving fashion, as has been your repeated accusation. There is a proper metaphysical structure that organises my ideas. And it is a way of looking at the issues which I learnt firsthand from folk like Salthe and Ulanowicz.

    So again, is there anything more here than your wanting me to break a still-developing metaphysics of a pansemiotic Cosmos down into everyday measures you can employ to do a better job of modelling some ecosystem?

    Sum it up. What do you agree with and what is any core disagreement or vital question that remains to be tackled?
  • I am an Ecology
    Is that a zizek sweater from 2006?csalisbury

    Ed Gein was (literally) a bricoleurcsalisbury

    Glad to see you around again. And in inspired form. Zing!
  • I am an Ecology
    (Empiricism and Subjectivity); I think this is exactly the model that ought to be adopted.StreetlightX

    Why adopt the sterile old approach where one side of a dialectical relation must be “wrong” so the other can be “right”?

    Functional systems - as in political, economic, social, ecological - are the product of their complementary tensions. Both defining aspects of their dynamics are “right” - at least to the degree that they are together in a functional balance.

    So with a social system, that is why localised or bottom-up competition is to be encouraged as much as global top-down cooperation. Both need to be vigorous actions, even if also held in some mutually beneficial balance.

    You lapse into an us vs them rhetorical mode without even thinking. So that leaves you unable to put something like individualism - localised competitive striving - in its appropriate context. One minute you are against neoliberal selfishness, the next pro PC pluralism. Hence your political position ends up radically confused.

    So yeah, it would be a problem if the institutional level of social order were somehow taken as the positive, the social or personal as the negative. But then the reversal of this prioritisation is just as bad.

    The trick is to see the positive aspect in both the global constraints, the social instinct towards cooperativity, and the local degrees of freedom, the social instinct that celebrates individuality, spontaneity and general striving.

    Well, I say trick. It could also be said to be the bleeding obvious.
  • I am an Ecology
    I wanted you to be technically precise with your use of terms - good that you did this.fdrake

    Great that you think that. But again, my goal is to be technically precise at the most general metaphysical or qualitative level. If you don't yet accept the validity of that, then I don't care. However, just as you insist I cash out my generality in terms of your particular paradigmatic specificity, I only wish you would make an effort to ground your demand for specificity in some more considered ontic basis in fair exchange.

    Why do I have to come all the way over to you? Why do you get to be the judge of "what's good enough" here? And don't pretend that this isn't the rhetorical trap you have sought to establish in this thread.

    I'm familiar with all those kinds of tricks, so if you truly want a deeper level of mutual engagement, you might want to reconsider. I'm not sure that you actually have that much to offer me in return. But we will see if you can eventually pull some metaphysical insights out of the bag with a high surprisal.
  • I am an Ecology
    Which possibilities are closed?fdrake

    The possibility of something else having happened. The existence of the oak is a constraint on the existence of other trees, shrubs, weeds, that might have been the case without its shade. Without the oak, those other entropifiers would have been possible.

    In what sense was that a mysterious statement? I'd just said "Canopy succession is an example. Once a mighty oak has grown to fill a gap, it shades out the competition."

    So excuse me for being baffled at your professed bafflements in this discussion. I mean, really?

    What degrees of freedom does this create?fdrake

    Again, you claim that I'm hand-waving and opaque, but just read the damn words and understand them in a normal fashion.

    "The mighty oak then itself becomes a stable context for a host of smaller stable niches. The crumbs off its feeding table are a rain of degrees of freedom that can be spent by the fleas that live on the fleas."

    So the oak becomes the dominant organism. And as such, it itself can be host to an ecology of species dependent on its existence. Like squirrels and jackdaws that depend on its falling acorns. Or the various specialist pests, and their own specialist parasites, that depend on the sap or tissue. Like all the leaf litter organisms that are adapted to whatever is particular to an annual rain of oak leaves.

    This is literally ecology 101. The oak trophic network is the primary school level example. You can pick away at its legitimacy with your pedantry all you like, but pay attention to the context here. This is a forum where even primary school science is a stretch for many. I'm involved in enough academic-strength discussion boards to satisfy any urge for a highly technical discussion. But the prime reason for sticking around here is to practice breaking down some really difficult ideas to the level of easy popularisation.

    It's fun, it's professionally useful, I enjoy it. I agree that mostly it fails. But again that seems more a function of context. PF is just that kind of place where there is an irrational hostility to any actual attempt to "tell it right".

    So bear in mind that I use the most simplified descriptions to get across some of the most subtle known ideas. This is not an accident or a sign of stupidity. And an expectation of failure is built in. This is just an anonymous sandbox of no account. My posts don't actually have to pass peer review. I don't have to worry about getting every tiny fact right here, the way I do in my everyday working life, where there are thousands ready to pounce on faint errors of emphasis.

    So it is fine that you want that more technical discussion. But the details of your concerns don't particularly light my fire. If you are talking about ecologies as dissipative structures, then I'm interested. If you are talking about something else, like measuring species diversity, or the difficulties of actually measuring exergy/entropy flows in ecosystems, then I really couldn't care less.

    For me, diversity just falls out of a higher-level understanding of statistical attractors - https://arxiv.org/abs/0906.3507

    Actually measuring network flows, meanwhile, is a vain dream from a metaphysical viewpoint. Of course, we might well achieve pragmatic approximations - enough for some ecological scientist to file an environmental report that ticks the legal requirement on some planning consent. But my interest is in the metaphysical arguments over why ecology is one of the "dismal sciences" - not as dismal as economics or political science, but plagued by the same inflated claims of mathematical exactness.

    What degrees of freedom does this create? How does it create rather than destroy them? How do those degrees of freedom get turned into degrees of freedom for certain organisms? Which organisms? What properties do those recipient organisms have? How do the 'degrees of freedom' in the 'crumbs' relate to the 'smaller stable niches', in what manner do they 'rain'? In what manner are they 'spent'? How does one set of degrees of freedom in the canopy become externalised as a potential for the ecosystem by its deconstruction and then re-internalised in terms of a flow diversification?fdrake

    OK. Degrees of freedom is a tricky concept as it just is abstract and ambiguous. However I did try to define it metaphysically for you. As usual, you just ignore my explanations and plough on.

    But anyway, the standard mechanical definition is that it is the number of independent parameters that define a (mechanical) configuration. So it is a count of the number of possibilities for an action in a direction. A point particle in 3-space obviously has its three orthogonal or independent translational degrees of freedom, and a rigid body adds three rotational ones. There are six directions of symmetry that could be considered energetically broken. The state of the particle can be completely specified by a constraining measurement that places it at a position in this coordinate system.
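
    To keep that bookkeeping concrete, here is a minimal sketch of the standard count (my own illustration, not part of the original post): degrees of freedom as whatever independent parameters are left once constraints are subtracted.

    ```python
    def mechanical_dof(n_particles: int, n_constraints: int, dim: int = 3) -> int:
        """Configuration-space degrees of freedom for point particles in dim-space."""
        return dim * n_particles - n_constraints

    # A free point particle in 3-space: three translational degrees of freedom.
    print(mechanical_dof(1, 0))   # 3

    # A rigid body idealised as three non-collinear particles with their three
    # mutual distances held fixed: 9 - 3 = 6 (translations plus rotations).
    print(mechanical_dof(3, 3))   # 6
    ```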

    So how do degrees of freedom relate to Shannon or Gibbs entropy, let alone exergy or non-equilibrium structure? The mechanical view just treats them as absolute boundary conditions. They are the fixed furniture of any further play of energetics or probabilities. The parameters may as well be the work of the hand of God from the mechanical point of view.

    My approach, following from Peirce, systems science, holism, organicism and other -isms stressing the four causes/immanently self-organising view, seeks to make better metaphysical sense of the situation.

    So I say degrees of freedom are emergent from the development of global constraints. And to allow that, you need the further ontic category or distinction of the vague~crisp. In the beginning, there is Peircean vagueness, firstness or indeterminism. Then ontic structure emerges as a way to dissipate ... vagueness. (So beyond the mechanical notion of entropy dissipation, I am edging towards an organic model of vagueness dissipation - i.e. pansemiosis, a way-off-the-chart speculative venture of course. :) )

    Anyway, when I talk about degrees of freedom, my own interests are always at the back of my mind. I am having to balance the everyday mechanical usage with the more liberal organic sense that I also want to convey. I agree this is likely confusing. But hey, it's only the PF sandbox. No-one else takes actual metaphysics seriously.

    So organically, a degree of freedom is an action with a direction that has to emerge for some holistic or contextual good reason in the physical universe. So why these basic translational and rotational freedoms? Well Noether's theorem and relativity principles account for why actions in these two directions can never be constrained away even in a spatiotemporal system that represents a state of maximal constraint. They can't be parameterised out of existence. Quantum uncertainty has sure rammed that message home now.

    So an ontology of constraints - like for instance the many "flow network" approaches of loop quantum gravity - says that constraints encounter their own limits. Freedoms (like the Newtonian inertias) are irreducible because constraints can make reality only so simple - or only so mechanically and atomistically determined. This is in fact a theorem of network theory. All more complicated networks can be reduced to a 3-connection, but no simpler.

    So in the background of my organic metaphysics is this critical fact. Reality hovers just above nothingness with an irreducible 3D structure that represents the point where constraints can achieve no further constraint and so absolute freedoms then emerge. This is nature's most general principle. Yes, we might then cash it out with all kinds of more specific "entropy" models. But forgive me if I have little interest in the many piffling applications. My eyes are focused on the deep metaphysical generality. Why settle for anything less?

    Now back to your tedious demand that I explain ecology 101 trophic networks with sufficient technical precision to be the exact kind of description that you would choose to use - one that would pass peer review in your line of work.

    Well, again I'm thinking fuck that. I really don't care beyond the possibility that the discussion might be another little window into my bigger picture. I last talked with Ulanowicz probably 15 years ago. So it was interesting to read his recent papers and see how much he has continued on post-retirement in a rather Peircean vein like the rest of that particular crew. (Pattee was the funny one. He got grumpy and went silent for a number of years, despite being the sharpest blade. Then came back blazing as a born-again biosemiotician. The boss once more.)

    Anyhow, fill in the blanks yourself. When I talk of a rain of degrees of freedom, as I clarified previously, I'm talking of the exergy that other entropy degraders can learn how to mine in all the material that the oak so heedlessly discards or can afford to have diverted.

    The oak needs to produce sap for its own reasons. That highly exergetic matter - a concentrated goodness - then can act as a steep entropy gradient for any critters nimble enough to colonise it. Likewise, the oak produces many more acorns than required to replicate, it drops its leaves every year, it sheds the occasional limb due to inevitable accidents. It rains various forms of concentrated goodness on the fauna and flora below.

    Is exergy a degree of freedom? Is entropy a degree of freedom? Is information a degree of freedom?

    Surely by now you can work out that a degree of freedom is just the claim to be able to measure an action with a direction that is of some theoretical interest. The generality is the metaphysical claim to be able to count "something" that is a definite and atomistic action with a direction in terms of some measurement context. We then have a variety of such contexts that seem to have enough of your "validity" to be grouped under notions like "work", or "disorder", or "uncertainty".

    So "degree of freedom" is a placeholder for all atomistic measurements. I employ it to point to the very fact that this epistemic claim is being made - that the world can be measured with sufficient exactness (an exactness that can only be the case if bolstered by an equally presumptuous use of the principle of indifference).

    Then degree of freedom, in the context of ecological accounts of nature, does get particularised in its various ways. Some somewhat deluded folk might treat species counts or other superficialities as "fundamental" things to measure. But even when founding ecology more securely in a thermodynamical science, the acts of measurement that "degrees of freedom" represent could be metaphysically understood as talking about notions of work, of disorder, of uncertainty. Ordinary language descriptions that suddenly make these different metrics seem much less formally related perhaps.

    That is the reason I also seek to bring in semiosis to fix the situation. You complain I always assimilate every discussion to semiotics. But that is just because it is the metaphysical answer to everything. It is the totalising discourse. Get used to it.

    So here, the key is the epistemic cut that can connect entropy and information. Disorder and uncertainty can be physically related in terms of rate-dependent dynamics and rate-independent information. I earlier linked to a long post which explained how this is currently being cashed out in a big way in biophysics - hence my mention of ATP as the unit of currency that puts a material scale on a cell's metabolic degrees of freedom. ATP is the concentrated goodness of "pure work". We can ground exergy at life's nanoscale, quasi-classical intersection, where a set of entropies just happen with remarkable convenience to converge.
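
    For a rough sense of the scale that currency sets (my own back-of-envelope numbers, not from the post; the ~50 kJ/mol figure for ATP hydrolysis under cellular conditions is the usual textbook estimate):

    ```python
    N_A = 6.022e23        # Avogadro's number, 1/mol
    k_B = 1.381e-23       # Boltzmann constant, J/K
    T = 300.0             # roughly physiological temperature, K

    delta_G = 50e3                         # J/mol, approximate in-vivo ATP hydrolysis
    per_molecule = delta_G / N_A           # ~8.3e-20 J per ATP molecule
    print(per_molecule / (k_B * T))        # ~20 - each ATP buys about 20 kT of work
    ```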

    (Like SX, you really need to add Hoffmann's Life's Ratchet to your reading list.)

    Right. I'm sure you will have a bunch of nit-picking pedantry welling up inside of you so I will leave off there. Just remember that I really am engaged in a broad metaphysical project. The correct definition of degrees of freedom is certainly a central concern as it is at the heart of the scientific method. It encodes whatever it is that we might mean by our ability to measure the "real facts" of the world. So it is at this level we can hope to discover the presumptions built into any resulting umwelt or worldview.

    You keep demanding that I cash out concepts in your deeply entrenched notions of reality. I keep replying that it is entrenched notions of reality that I seek to expose. We really are at odds. But then look around. This is a philosophy forum. Or a "philosophy" forum at least. Or a philosophy sandbox even. What it ain't is a peer review biometrics journal.
  • I am an Ecology
    Here's a book chapter arguing the same thing in a more general philosophical way.

    https://www.researchgate.net/profile/Robert_Ulanowicz/publication/292610642_Enduring_metaphysical_impatience/links/56b00ad608ae9c1968b490b7/Enduring-metaphysical-impatience.pdf?origin=publication_detail

    I like two key points. Natural systems are irreducibly complex because they feed off their own accidents. The traditional mechanical view wants to separate the formal laws constraining systems from the material accidents composing systems. But nature includes the accidental or spontaneous in the very act of forming its laws, or constraining regularities.

    This is beautifully Peircean. The "stuff" of the world isn't some disconnected universal machinery. Its Being includes its accidents as part of what makes any laws.

    The second point is Elsasser's combinatorics argument, which says that even a computable universe is very quickly an intractably computable one. Really, the Universe is always "a one off". A propensity view of statistics - as argued by Peirce and Popper - has to be fundamental.

    Put the two together and every state of the Cosmos is an instance of one-off chance - or at least as much as it is its traditional "other" of a deterministic, law-constrained, mechanism.

    Ulanowicz sums up the dichotomy of this "ordering vs the disordering" tendency in complexity thus.....

    Elsasser argued that nature is replete with one-time events - events that happen once and never occur again. Accustomed as most investigators are to regarding chance as simplistic, Elsasser's claim sounds absurd. That chance is always simple, generic, and repeatable is, after all, the foundation of probability theory.

    Elsasser, however, used combinatorics to demonstrate the overwhelming likelihood of singular events. He reckoned that the known universe consists of somewhere on the order of 10^85 simple particles. Furthermore, that universe is about 10^25 nanoseconds in age. So at the outside, a maximum of 10^110 simple events could possibly have transpired since the Big Bang.

    Any random event with a probability of less than 1 in 10^110 of recurring simply won't happen. Its chances of happening again are not simply infinitesimally small; they are hyper-infinitesimally small. They are physically unreal.

    That is all well and good, one might respond, but where is one going to find such complex chance? Those familiar with combinatorics are aware, however, that it doesn't take an enormous number of distinguishable components before the number of combinations among them grows hyper-astronomically.

    As for Elsasser's threshold, it is reached somewhere in the neighborhood of seventy-five distinct components. Chance constellations of eighty or more distinct members will not recur in thousands of lifetimes of the universe.

    Now it happens that ecologists routinely deal with ecosystems that contain well over eighty distinct populations, each of which may consist of hundreds or thousands of identifiable individual organisms. One might say, therefore, that ecology is awash in singular events. They occur everywhere, all the time, and at all scales.

    None of which is to imply that each singular event is significant. Most simply do not affect dynamics in any measurable way; otherwise, conventional science would have been impossible. A few might impact the system negatively, forcing the system to respond in some homeostatic fashion.

    A very rare few, however, might accord with the prevailing dynamics in just such a way as to prompt the system to behave very differently. These become incorporated into the material workings of the system as part of its history. The new behavior can be said to "emerge" in a radical but wholly natural way that defies explanation under conventional assumption.
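
    As a quick back-of-the-envelope check of Elsasser's threshold (my own arithmetic, not part of the quoted chapter): the number of possible orderings of n distinct components crosses the 10^110 event budget at around n = 75-80.

    ```python
    from math import lgamma, log

    def log10_factorial(n: int) -> float:
        """log10(n!) via the log-gamma function, so the huge numbers never materialise."""
        return lgamma(n + 1) / log(10)

    for n in (70, 75, 80):
        print(n, round(log10_factorial(n), 1))
    # 70 -> ~100.1, 75 -> ~109.4, 80 -> ~118.9
    ```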

    So good luck to any science based on a mechanistic metaphysics that presumes accidents are simply uncontrolled exceptions that can be hidden behind a principle of indifference. Yet also the universe does have lawful regularity. It incorporates its accidents into the habits it then forms.

    This is just a far more interesting story of the Cosmos than the usual one that pictures it as a mathematical clockwork - the one where science is reduced to a collection of measurement protocols rather than scientific measurement being the crafty art we know it to be.
  • I am an Ecology
    I think you usually handwave this by calling it 'coupling', too (forgetting that coupled systems have shared parameter spaces). Lo and behold, when you put a bit of work in, you can see literal coupling when your figurative sense of coupling was implicated and it looks like there's some way to take the intersection of parameter spaces such that each individual system's has a non-empty intersection with enough of the rest to make a connected network of flows.fdrake

    This is an example of our different interests. You presume parameter spaces can be glued together. I'm concerned with the emergent nature of parameters themselves. You have your idea of how to make workable models. I'm interested in the metaphysics used to justify the basis of the model.

    So it just gets tiresome when your criticism amounts to the fact I'm not bothered about the details of model building for various applications. I've already said my focus is on paradigm shifts within modelling. And at the core is the difference between mechanical and organic, or reductionist and holist, understandings of causality.

    The principle of indifference cannot be extended as equiprobability to countable or continuous state spaces - this is because a uniform distribution cannot exist on infinite sets of outcomes.fdrake

    Another inventive way to miss the point I was arguing. Sure Boltzmann had a problem if the world wasn't actually atomistic and entropy was a continuous substance, a caloric.

    Without lingering too long on that fact that that isn't actually correct, there's an observed upside down U shape in ascendency (increase then decrease) over an eutrophication gradient, though since the paper detailing that doesn't do an error analysis it's still up for debate - he has to at least engage with the relative strengths of the terms in the formula. He does.fdrake

    You can have your doubts about the robustness of his approach. But again, I was responding to the OP in terms of what ecologists actually say about ecologies based on the kind of metaphysics they've actually developed.

    And both Salthe and Ulanowicz argue for a three stage lifecycle - that inverted goldilocks U-curve. Regardless of how well it may or may not be cashed out in real world models, the general metaphysical level argument seems sound enough, and obvious enough, to me. If you want to critique that, then great.

    Ulanowicz describes the motivating metaphysics in: The dual nature of ecosystem dynamics, 2009 - http://izt.ciens.ucv.ve/ecologia/Archivos/ECO_POB%202009/ECOPO7_2009/Ulanowicz%202009.pdf

    The yin and yang of ecology

    By now the reader may have noticed that two countervailing tendencies are at play in the development of any dissipative structure. In one direction a continuous stream of perturbations works to erode any existing structure and coherence. Meanwhile, this drift is opposed by the workings of autocatalytic configurations, which drive growth and development and provide repair to the system.

    This tension has been noted since Antiquity. Diogenes related that Heraclitus saw the world as a continuous tearing down and building up. With the Enlightenment, however, science opted for a more Platonic view of nature as monistic equilibrium.

    Outside of science, Hegel retained Heraclitus’ view of the fundamental tension, but with significant amendment. He noted that, although the two tendencies may be antagonistic at the level of observation, they may become mutually obligatory at the next higher level. Hegel’s view is resonant with the picture of ecosystem dynamics portrayed here.

    Indeed, the second law does dissipate what autocatalysis has built up, but it has been noted that singular chance is also necessary if systems are truly to evolve over time and develop novel emergent characteristics. Looking in the other direction, complex, evolved systems can be sustained only through copious dissipation.

    The problem with this agonistic view of the natural world is that, unlike the mechanistic (Platonic) convention, dialectic-like dynamics cannot be adequately represented as algorithms.

    To repeat again, mechanistic simulation models are inadequate to the task of describing ecosystems over the longer run, because the selfsame selection exhibited by autocatalysis can unpredictably replace not only components, but their accompanying mechanisms as well. Not only does the notion of mechanism defy logic, it seems also to poorly match the dynamics that actually are at play.

    So complexity is irreducible once we start talking about self-parameterising systems. Like a dissipative structure. All your fussing about my lack of particularisation of parameter spaces makes no sense, as we are essentially - when doing science of such systems - talking about modelling as an art. We can make better or worse choices. And any choice must be guided by some grounding, if fuzzy or vague, intuition. (Such as that entropy/information/exergy/degrees of freedom/whatever are "this kind of generic thing or process".)

    You are continually jumping to the position of there being a right way to measure the world. But it is basic to a particular group of theoretical biologists I respect - Salthe, Pattee, Rosen, Ulanowicz - that measurement is an informal act. An art or exercise of good judgement. And this is because the world of interest is inherently non-linear - it has a complexity that is irreducible.

    Ulanowicz then goes on to talk about the lifecycle model which is relevant to the OP. He makes the point that his ascendency has this dualistic dynamic. The trade-off is between the organisational power of a system - its useful order - versus the system's overhead needed to physically instantiate that pattern of organisation.

    The chief advantage of using information theory to describe organization is that it allows one also to quantify the opposite (or complement) to information in similar fashion. Whence everything that is disordered, incoherent and dissipative in the same network can be captured by a related, non-negative variable called the system’s overhead... Furthermore, a system’s ascendency and overhead sum to yield its overall capacity for development.

    The actual pattern of order is the result of two opposing tendencies: In an inchoate system (one with low a), there are manifold opportunities for autocatalytic cycles to form, and those that arise create internal constraints that increase A (and thereby abet a). This tendency for a to grow via autocatalysis exists at all values of a. The role of the overhead, Φ, however, changes as the system progresses toward higher a.

    In inchoate systems (low a), it is Φ that provides the opportunities for new cycles to form. In doing so it abets the tendency to increase autocatalysis. However, in systems that are already highly developed (a ≈ 1), the dominant effect of Φ becomes the disruption of established feedback loops, resulting in a sudden loss of organized performance. (The system resets to a much lower a.)

    So at high a, Φ strongly opposes further increase in a. Presumably, a critical balance between the countervailing roles of Φ exists near the value of a at which the qualitative role of Φ reverses.

    Or as the Wiki page sums it up more simply...

    Originally, it was thought that ecosystems increase uniformly in ascendency as they developed, but subsequent empirical observation has suggested that all sustainable ecosystems are confined to a narrow "window of vitality" (Ulanowicz 2002).

    Systems with relative values of ascendency plotting below the window tend to fall apart due to lack of significant internal constraints, whereas systems above the window tend to be so "brittle" that they become vulnerable to external perturbations.
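
    For reference, the standard flow-network definitions behind those quantities run roughly as follows (my own summary in the usual notation - T_ij for the flow from compartment i to compartment j, T.. for total system throughput - and not part of the quoted passages):

    ```latex
    A = \sum_{i,j} T_{ij}\,\log\frac{T_{ij}\,T_{\cdot\cdot}}{T_{i\cdot}\,T_{\cdot j}}
    \qquad
    C = -\sum_{i,j} T_{ij}\,\log\frac{T_{ij}}{T_{\cdot\cdot}}
    \qquad
    \Phi = C - A
    \qquad
    a = A/C
    ```

    So the ascendency A is the organised, mutually constrained fraction of the development capacity C, the overhead Φ is what is left over as redundancy and dissipation, and a is the relative ascendency that the "window of vitality" is defined over.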

    So my reply to the OP was to point out what I find to be a metaphysically reasonable account of a natural lifecycle approach to systems. It is a model developed for describing ecological systems, but both Salthe and Ulanowicz say it does extrapolate to the political and economic levels of sociological analysis.

    Thus I demonstrated that within my Peircean/organicist kit-bag of well-grounded metaphysical concepts, there is a sound argument that has already been advanced.

    Then check it out and you see Ulanowicz is pretty exercised by the Hegelian logic which his approach employs. He is very concerned with the Rosen modelling relation and what it says about the informality of acts of measurement and the incommensurability of mechanistic models to irreducibly complex worlds. He employs the very same metaphysical kitset as me. (Well up to a point. Ulanowicz is a good Catholic and so we differed on the theistic slant he was working towards on the sly. :) )

    You then come along with the intent of nit-picking away, complaining that I don't follow through from a general metaphysical view to the particularity of every possible kind of entropy/information/exergy/degrees of freedom/whatever model.

    Well, like ... whatever.
  • I am an Ecology
    I’ve been talking about the thermodynamic constraints that shape dissipative systems like ecologies. You have failed to show why I should care about a species diversity index. The fact that the two don’t relate nicely is only to be expected.

    Your notion of generalising entropy talk is epistemic. My interests are ontic. But best of luck with your future endeavours.
  • I am an Ecology
    It would be interesting if Shannon biodiversity was related to ascendencyfdrake

    This is the bit that puzzles me. It seems that all your arguments want to circle back to this species diversity index. But that is an utter triviality. It says nothing about the flow of energy through a biological system, nothing about the negentropic response of a system being pushed away from its equilibrium state, nothing about anything which has to do with the global dynamics or the fluxes that are what create ecologies in the first place.

    It is pretty clear that I’m talking about dissipative structure. And so is ascendency. So is Salthe, Kay and Schneider, Bejan and the many other cites I offered. But your critique amounts to me failing to relate dissipative structure theory to some mundane measure of species diversity.
  • I am an Ecology
    I love this so much though.StreetlightX

    I see you jiggling with joy on the sidelines. SX and his man-crushes.
  • I am an Ecology
    So a commonality is that they are mappings from some space to the real line. But what matters - what determines the meaning of the entropy is both what the inputs to the entropy function are and how they are combined to produce a number. To speak of entropy in general is to let the what and the how vary with the implicit context of the conversation; it destroys the meaning of individual entropies by attempting to unify them, the unification has poor construct validity precisely because it doesn't allow the what and the how of the mapping to influence the meaning.fdrake

    And yet something still ties all this variety back to some general intuition. The usual response is "disorder".

    As I have said, at the metaphysical level, we can only approach proper definitions by way of a dialectical or dichotomistic argument. We have to identify the two complementary extremes that are mutually exclusive and jointly exhaustive. So a metaphysical-strength discussion of entropy has to follow that form. It has to be entropy as "opposed to what?". Your concern is that there seem multiple ways to quantify "entropy". My response has been that a metaphysical-strength definition would be qualitative in this precise fashion. It would lead us to a suitable dichotomy.

    Hence why I keep trying to return the conversation to entropy~information as the candidate dichotomy that has emerged in recent times. It seems a stronger statement than entropy~negentropy, or disorder~order, as mere negation is a weak kind of dichotomy, not a strong one.

    Likewise constraints~degrees of freedom slice across the debate from another direction. How the two dichotomies of entropy~information and constraints~degrees of freedom might relate is a further important question.

    So a qualitative approach here isn't just a hand-waving, anything goes, exercise in speculation. Metaphysics does have a method for clarifying its ideas about reality. And that approach involves discovering a reciprocal relation that connects two opposed limits on Being.

    As I said, information and entropy capture a duality in terms of relative surprisingness. An entropy-maximising configuration is reciprocally the least-informational. No need to count microstates even if every one is different. You only need to measure a macrostate to completely characterise the aspect of the system that is of interest.

    By contrast, a surprising state of the world is the most informational. It is high on negentropy. And now just one "microstate" is the feature that appears to characterise the whole situation - to the degree that is the aspect of interest.
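
    A toy numerical version of that duality (mine, not part of the post): the flatter the distribution over microstates, the higher the Shannon entropy and the less any single observation can tell you; concentrate the weight and the entropy collapses, while the rare outcome becomes highly informative.

    ```python
    from math import log2

    def shannon_entropy(p):
        return -sum(pi * log2(pi) for pi in p if pi > 0)

    uniform = [0.25, 0.25, 0.25, 0.25]    # equilibrium-like: every microstate typical
    peaked  = [0.97, 0.01, 0.01, 0.01]    # nearly all the weight on one state

    print(shannon_entropy(uniform))   # 2.0 bits - maximal entropy, least informational
    print(shannon_entropy(peaked))    # ~0.24 bits
    print(-log2(0.01))                # ~6.6 bits of surprisal if a rare state turns up
    ```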

    So "disorder" is just an application of the principle of indifference. A messy system is one in which the details don't matter. Gone to equilibrium, the system will be generic or typical.

    However no system can in fact maximise its entropy, reach equilibrium, unless it has stable boundary conditions. So - dichotomously - there is negentropy or order in the fact of boundary stability, in the fact of being closed for causality.

    Thus it is common to sum up entropy as about a statistical propensity to be disordered - to be in a system's most typical state. But that "first law of thermodynamics" state of closure (well, it includes the necessity of the third law to put a lower bound on things to match the first's upper bound) is only half the story. Somewhere along the line, the system had to be globally parameterised. The general holonomic constraints or boundary conditions had to form somehow.

    So here we see a good example of why dichotomies or symmetry-breakings are basic to metaphysical-strength analysis. They alert us to the other half of the holistic story that reductionists are wont to overlook. A full story of entropy can't just presume stable boundary conditions. Those too must be part of what develops (along with the degrees of freedom that global constraints produce, or parameterise).

    To the degree your discussions here are not dialectically explicit, they simply fail the test of being adequately metaphysical. Your obsession about quantitative methods is blinding you to the qualitative discussion where the hunt is on for the right way to frame information~entropy, or degrees of freedom~constraints, as the fundamental dichotomy of a developmental Cosmos.
  • I am an Ecology
    Maybe I'd be more comfortable with what you're saying if you used scarequotes like I do.fdrake

    Your comfort is definitely my number one priority. I mean "number one priority".
  • I am an Ecology
    I'm not suggesting you 'look at the formulas and find a master one', the thing I cared about was that the measures of entropy in terms of ascendency and relative abundance meant different things - they summarise different aspects of the behaviour of the ecosystem.fdrake

    What else do you expect if you take the attitude that we are free to construct metrics which are valid in terms of our own particular interests?

    I agree we can do just that. We can describe the world in terms that pick up on some characteristic of interest. I just say that is not a deep approach. What we really want is to discover the patterns by which nature organises itself. And to do that, we need some notion about what nature actually desires.

    This is where thermodynamics comes in. This is what is unifying science at a foundational level now. Both biology and physics are reflecting that emergent metaphysical project. And thermodynamics itself is becoming semiotic in recognising the duality of entropy and information.

    So you are down among the weeds. I'm talking about the big picture. Again, it is fine if your own interests are narrow. But I've made different choices.
  • I am an Ecology
    What does this measure? The diversity of flows within a network. How? It looks at the proportion of each flow in the total, then computes a quantification of how that particular flow incorporates information from other flows - then scales back to the total flow in the system. It means that the diversity is influenced not just by the number of flows, but their relative strength. For example, having a network that consisted of 1 huge flow and the rest are negligible would give an ascendency much closer to a single flow network than another measure - incorporating an idea of functional diversity as well as numerical biodiversity. Having 1 incredibly dominating flow means 0 functional diversity.fdrake

    You seem terribly concerned by things that don't seem a big issue from the thermodynamic point of view.

    A lot of your focus seems to be on how we specify richness or complexity or healthy biodiversity. And you instinctively think in terms of counting niches or something else "structurally concrete".

    But the flow view of an ecosystem would see a hierarchy of flow just happening naturally. You would get complexity arising in a way that is essentially "meaningless".

    So think of a scalefree network or other fractal growth model. A powerlaw hierarchy of connectivity will just arise "randomly". It doesn't need evolutionary semiosis or natural selection to create it. The complexity of the flow is not something that needs a designing hand to happen. It is the natural structure of the flow.
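
    As a toy demonstration of order arising for free (a sketch of my own; it assumes the networkx package, which is not mentioned in the post):

    ```python
    import networkx as nx
    from collections import Counter

    # Growth plus preferential attachment, with no selection or design involved.
    g = nx.barabasi_albert_graph(n=10_000, m=2, seed=1)
    degrees = [d for _, d in g.degree()]

    hist = Counter(degrees)
    for k in sorted(hist)[:6]:
        print(k, hist[k])              # counts fall away roughly as a power law in k
    print("max degree:", max(degrees)) # a handful of hubs dominate the connectivity
    ```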

    Check out Adrian Bejan's constructal law. He is pretty strong on this issue.

    So a reductionist would think of a richly organised hierarchy of relations as a surprising and delicate state of affairs. But the switch is now to see this as the inevitable and robust equilibrium state of a freely growing dissipative structure, like an ecosystem. Scalefree order comes for free.

    So the occurrence of niches over all scales is not something in need of some "contextualised" metric to index it. We don't have to find external information - some historic accident - which specifies the fact of a hierarchical order. That kind of order is already a natural pattern or attractor. It would take external accidents of history to push it away from this natural balance.

    Where every p_i is the proportion of the i-th species of the total. This is a numerical comparison of the relative abundance of each species present in the ecosystem. This obtains a maximum value when each species has equal relative abundance, and is then equal to the number of species in the ecosystem. Look at the case with 2 species each having 2 animals. p is constant along i, being 0.5, then the Shannon Biodiversity is -2*0.5*log(0.5) = log2, so its exponential is 2.fdrake

    So here, aren't you assuming that we can just count species and have no need to consider the scale of action they might represent? One might be a bacterium, the other an elephant. Both might be matched in overall trophic throughput. We would expect their relative abundance to directly reflect that fact rather than a species count having much useful to say about an ecosystem's healthy biodiversity or state of entropic balance.
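
    Just to pin down what that index is and isn't sensitive to, here is a worked version of the quoted calculation (my own numbers, not fdrake's): the exponential of Shannon biodiversity counts "effective" species from relative abundances alone, so the bacterium-versus-elephant question never enters.

    ```python
    from math import log, exp

    def effective_species(p):
        """exp of -sum p_i ln p_i over relative abundances p_i (a Hill number)."""
        return exp(-sum(pi * log(pi) for pi in p if pi > 0))

    print(effective_species([0.5, 0.5]))     # 2.0  - the quoted two-species case
    print(effective_species([0.99, 0.01]))   # ~1.06 - two species, effectively one
    print(effective_species([0.25] * 4))     # 4.0  - maximal when abundances are equal
    ```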

    Of course, if you've read this far, you will say 'the middle state is the one furthest from order so of course it has the highest degrees of freedom', which suggests the opposite intuition from removal of dominant energy flows 'raining degrees of freedom' down onto the system. This just supports the idea that your notion of entropy has poor construct validity.fdrake

    Exergy? I mean you seemed to agree that it is about quality of the entropy. So a big tree dropping leaf litter is a rather different story to a forest clearing being blasted by direct sunlight again.
  • I am an Ecology
    This is why what we're talking about has almost no relation to the OP.fdrake

    But the OP was about extracting a political analogy from a lifecycle understanding of ecosystems. I simply responded by saying Salthe's infodynamic perspective gives you a self-explanatory three stage take on that.

    You then got shirty about my use of infodynamic or dissipative theory jargon. I'm happy to explain my use of any terminology. And I'm happy to defend the fact that I do indeed have an over-arching worldview. That is way more than nearly anyone else does around these here parts.
  • I am an Ecology
    Going from ATP being used to fuel an organism straight to a 'global' sense of infodynamics and signals/signs in pansemiosis. It works only when you wave your hands and don't focus on the specifics. When what before was concrete becomes metaphorical, then what was metaphorical becomes concrete.fdrake

    It's not metaphorical if infodynamics/semiosis is generalisable to material dissipative structures in general.

    Again, you might have to actually read the literature - Salthe for instance. But the metaphysical ambition is clear enough.

    The information is separate from the dynamics via the epistemic cut in biosemiotic systems - organisms that are living and mindful. An organisation mediated by signs is perfectly concrete.

    What still counts as speculative is then generalising that concrete description of life/mind so that it is seen to be a concrete theory of the Cosmos. The Universe would be understood as a dissipative system organised by a sign relation. The biological understanding of the duality of information and entropy would prove to apply to the whole of existence as its scientific theory.

    So it is not me waving my hands. It is you demonstrating a deaf ear to context. I am careful to distinguish between the part of what I say which is "normal science" and the part that is "speculative metaphysics". And the speculative part is not merely metaphor because the project would be to cash it out as concrete theory, capable of prediction and measurement.

    I agree that may also be a tall order. But still, it is the metaphysical project that interests me. The fact that you repeatedly make these ad hom criticisms shows that you simply wish not to be moved out of your own particular comfort zone. You don't want to be forced to actually have to think.
  • I am an Ecology
    Shannon's strictly broader than Boltzmann since it allows for non-equidistribution.fdrake

    Does that remain the case now that information theory has been tied to the actual world via holographic theory?

    Boltzmann's k turned out to be physically derived from the dimensionless constants of the Planck scale. And Shannon likewise now represents a fundamental Planckian limit. The two are united via the basic physical limits that encode the Cosmos.

    The volume of a spacetime defines some entropic content. The surface area of that volume represents that content as information. And there is a duality or reciprocality in the relation. There can't be more entropy inside than there are questions or uncertainties that can be defined on a 4 to 1 surface area measure.
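
    The textbook statement of that area scaling is the Bekenstein-Hawking form (a standard formula, added here for reference rather than the poster's own notation):

    ```latex
    S_{\mathrm{BH}} = \frac{k_B\,c^3 A}{4\,G\,\hbar} = k_B\,\frac{A}{4\,\ell_P^2},
    \qquad \ell_P^2 = \frac{G\hbar}{c^3}
    ```

    That is, at most one unit of entropy per four Planck areas of the bounding surface - the "4 to 1" ratio being gestured at above.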

    It is about the biggest result of the last 30 years in fundamental physics.
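
    For anyone who wants the relation written down, this is the Bekenstein-Hawking area law in its standard form - the 1/4 being the "four Planck areas to one unit" ratio just mentioned:

    $$ S_{\max} \;=\; \frac{k_B\,A}{4\,\ell_P^{2}}, \qquad \ell_P^{2} \;=\; \frac{\hbar G}{c^{3}} $$

    So the maximum entropy that can fit inside a region scales with the area A of its bounding surface, counted in Planck-sized patches.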
  • I am an Ecology
    The theoretical links between Shannon's original entropy, thermodynamical entropy, representational complexity can promote a vast deluge of 'i can see through time' like moments when you discover or grok things about their relation. BUT, and this is the major point of my post:

    Playing fast and loose with what goes into each of the entropies and their context makes you lose a lot. They only mean the same things when they're indexed to the same context. The same applies for degrees of freedom.

    I think this is why most of the discussions I've read including you as a major contributor are attempts to square things with your metaphysical system, but described in abstract rather than instantiated terms.
    fdrake

    I'm baffled that you say Shannon entropy came before Boltzmann's entropy.

    But anyway, again my interest is to generalise across the different contextual instantiations of the measurement habits which science might employ. I am indeed interested in what they could have in common. So I don't need to defend that as if it were some problem.

    And as I have pointed out, when it comes to semiosis and its application to the world, we can see that there is a whole level of irreducible complexity that the standard reductionist approach to constructing indices of information/entropy/degrees of freedom just misses.

    It is fine that science does create simpler indexes. I've no problem with that as a natural pragmatic strategy. But also, with Shannon and Boltzmann, it became clear that informational uncertainty (or configurational degrees of freedom) and entropic material degrees of freedom (or countable microstates) are two sides of the same coin. The mathematics does unite them in a general way at a more abstract level.

    And then when it comes to biosemiosis, information and entropy become two sides of a mechanically engineered epistemic cut. We are talking about something at a level above the brute physical realm imagined by the physical discourse that gives us Shannon uncertainty and Boltzmann entropy. It thus needs its own suitable system of measurement.

    That is the work in progress I see in the literature. That is the particular story I am tracking here.

    You can keep re-stating that a proper scientist would use the proper tools. You can reel off the many kinds of metrics that reflect the simpler ontology of the reductionist. You can continue to imply that I am somehow being unscholarly in seeking to consider the whole issue at a more holistic level - one that can encompass physicalist phenomena like life and mind. And indeed, even culture, politics, economics, morality and aesthetics.

    But I know what I'm about so I'm only going to respond to your critique to the degree it throws light on the connecting commonality, the linkages to that more holistic worldview.
  • I am an Ecology
    The Shannon Entropy is related to the Boltzmann entropy in thermodynamics in a few ways I don't understand very well.fdrake

    What's wrong with a reciprocal relation? If Shannon entropy is the degree of surprise to be found in some system, then the Boltzmann entropy is the degree to which that system is in its least surprising state.

    So if a system is constrained and is thus composed of some set of independent elements, states, or events, its arrangement can be described somewhere on a spectrum between maximally surprising and minimally surprising. An unsurprising arrangement requires the least amount of information to specify it. And it thus represents the most entropic arrangement.
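
    To keep the "surprise" talk honest, here is the textbook quantity in a minimal Python sketch - the two toy distributions are just my own illustration:

    import math

    def shannon_entropy(p):
        """Shannon entropy in bits: the average surprise of a draw from distribution p."""
        return -sum(pi * math.log2(pi) for pi in p if pi > 0)

    uniform = [0.25, 0.25, 0.25, 0.25]   # maximally uncertain arrangement of four states
    peaked  = [0.97, 0.01, 0.01, 0.01]   # nearly certain arrangement

    print(shannon_entropy(uniform))  # 2.0 bits - the maximum possible for four states
    print(shannon_entropy(peaked))   # ~0.24 bits - very little surprise left

    The Boltzmann story then says an isolated system drifts towards the macrostate compatible with the greatest number of such equally likely microstates - the arrangement we have least reason to be surprised by.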
  • I am an Ecology
    Up to you. It's a nice day outside. I doubt I will tick off every point you raised.
  • I am an Ecology
    What does 'dichotomous to constraints' mean?

    There are lots of different manifestations of the degrees of freedom concept. I generally think of it as the dimension of a vector space - maybe calling a vector space an 'array of states' is enough to suggest the right meaning. If you take all the vectors in the plane, you have a 2 dimensional vector space. If you constrain the vectors to be such that their sum is specified, you lose a degree of freedom, and you have a 1 dimensional vector space. This also applies without much modification to random variables and random vectors, only the vector spaces are defined in terms of random variables instead of numbers.
    fdrake

    In mechanics, degrees of freedom are a count of the number of independent parameters needed to define the configuration of a system. So your understanding is correct.

    And they are dichotomous to constraints as they are what are left over as a result of a configuration being thus limited. Constraint suppresses freedoms. What constraint doesn't suppress then remains to become some countable degree of freedom for that system.
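
    To make that counting concrete with the stock mechanics example - nothing biological yet, just the textbook arithmetic:

    $$ \text{free point mass in a plane: } (x, y) \;\Rightarrow\; 2 \text{ degrees of freedom} $$

    $$ \text{impose the rigid-rod constraint } x^{2} + y^{2} = L^{2} \;\Rightarrow\; 2 - 1 = 1 \text{ degree of freedom, the pendulum angle } \theta $$

    The constraint doesn't add anything; it subtracts. What it leaves behind is the single freedom we then go on to count.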

    Then from an infodynamic or pansemiotic point of view, constraints become the informational part of the equation, degrees of freedom are the dynamics. In the real material world, the configuration can be treated as the knowledge, the structure, that the organismic system seeks to impose on its world. The constraints are imposed by a mind with a purpose and a design. The degrees of freedom are then the entropy, the dynamics, that flow through the organism.

    So a structure has to be imposed on the flow to in fact create a flow composed of some set of degrees of freedom. A bath of hot water will simply cool by using its surrounds as a sink. An organism wants to build a machinery that stands in between such a source and sink so as to extract work along the way.

    That is why I suggest ATP as a good way to count degrees of freedom in biology. It is the cell's meaningful unit of currency. It places a standard cost on every kind of work. It defines the dynamical actions of which a cell is composed in a way that connects the informational to the entropic aspects of life. An ATP molecule could be spent for any purpose. So that is a real non-physical freedom the cell has built for itself.

    An ATP molecule can be used to make a kinesin "walker" transport molecule take another step, or spin the rotor of an ATPase. But then the spending of that ATP has an actual entropic cost as well. It does get used up and turned into waste heat (after the work is done).
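
    To put a rough figure on that standard cost - textbook ballpark numbers, offered only as an illustration, not anything measured here:

    $$ \Delta G_{\text{ATP}} \approx 30\text{--}50\ \mathrm{kJ\,mol^{-1}} \;\Rightarrow\; \frac{\Delta G_{\text{ATP}}/N_A}{k_B T}\bigg|_{T \,\approx\, 310\ \mathrm{K}} \approx 12\text{--}20\ k_B T \text{ per molecule} $$

    So each ATP spent buys a parcel of work only an order of magnitude or so above the thermal noise floor of k_B T - exactly the nanoscale regime that Hoffmann's molecular machines operate in.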

    So degrees of freedom are what constraints produce. And in living organisms, they are about the actions that produce units of work. The constraints are then the informational structure that regulates the flow of material entropy, channelling some source to a sink in a way that spins the wheels of a cellular economy along the way.

    A cooling bath of hot water lacks any interesting informational structure apart from perhaps some self-organised convection currents. Like a Benard cell, it might have its thermodynamic flow improved by emergent constraints producing the organised currents that are now some countable set of degrees of freedom. A more chaotic path from hot to cold has had its own vaguer collection of degrees of freedom suppressed so the flow is optimised by a global structure.

    But life has genes, membranes, pores, switches, and a host of molecular machinery that can represent the remembered habits of life - some negentropic or informational content - that produces a quite intentional structure of constraints, a deliberately organised set of degrees of freedom, designed to extract self-sustaining work from any available entropy gradient.

    So I suppose I should talk about configurational entropy.fdrake

    Yep. But note that biosemiosis is about how life has the memory to be in control of its physical configuration. It uses a potential gradient to do the work of constructing itself.

    So that brings in the informational aspect of the deal - Pattee's epistemic cut. The organism first insulates itself from dynamics/entropy by creating its own informational degrees of freedom. It does this by using a genetic code. But also, it does it foundationally down at the level of the dynamics itself in having "a unit of work" in an ATP molecule that can be used "to do anything a cell might want".

    What gets configured is not just some spatial geometry or thermal landscape. The material world is actually being inscribed by an organism's desires. The dynamics is not merely self-organising. It is being organised by a proper self.

    It is this situation which your notions of degrees of freedom don't really cover. You are not accounting for the epistemic cut which is the added semiotic feature of this material world now. If you are going to demand quantitative measures, the measures have to span the epistemic cut in some fashion. You are trying to make measurements that only deal with one of the sides of the equation.
  • I am an Ecology
    Yeah. But just have a go. Let's see what you could come up with. It truly might help to make sense of your attacks on mine.

    If instead you really want to say that entropy is simply whatever act of measurement we care to construct as its instrumental definition - that there is no common thread of thought which justifies the construct - then how could you even begin to have an intelligent discussion with me here?
  • I am an Ecology
    Entropy is absolutely well defined.fdrake

    What's your single sentence definition then? I mean, just for fun.
  • I am an Ecology
    So already we agree that the notion is ill-defined? It is a fast and loose term in fact. Just like entropy. Or information. Maybe this is why I am right in my attempt to be clear about the high-level qualitative definition and not pretend it has some fixed low-level quantitative measure.

    But I'll keep waiting until you do connect with what I've already posted.
  • I am an Ecology
    Hmm. Just not convincingly butch coming from you. And more importantly it has no sting. You've got to be able to find a real weakness to pick at here. Calling me buttercup once again ends up saying more about your life experience than mine.
  • I am an Ecology
    Sousing? You really do have a tin ear when it comes to your ad homs. It absolutely spoils the effect when you come across as the hyperventilating class nerd.
  • I am an Ecology
    Why the sudden interest in Nick Lane and Peter Hoffmann? Couldn't possibly be anything I said.
  • I am an Ecology
    I'll find time to respond to your post later. But it is a shame that you bypass the content of my posts to jump straight back to the world from your point of view.

    You make very little effort to engage with my qualitative argument. Well, none at all. So it feels as though I'm wasting my breath if you won't spell out what you might object to, and thus show whether there is any proper metaphysics motivating your view or whether you just want to win by arguing me into some standard textbook position on the various familiar approaches to measuring entropy.

    Perhaps I'll let you finish first.
  • Psychological Responses to Landscapes
    However, that is not my experience, and I would challenge the idea that a city-dweller who has never seen or heard of the mountains would experience no unusual psychological response from teleporting to the top of Mont Blanc on a clear summer day.TJO

    I'd say from experience it goes both ways. These days I live in a small city surrounded by awesome mountains. So taking a trip back to one of the world's big cities is pretty awesome as a contrast.

    Psychologically, I think you are only talking about the sense of arousal we get from something "high contrast", something that is out of scale with the familiar. The arousal can be read as frightening or exhilarating depending on our mindset. It depends upon whether we are judging the situation as something to approach or avoid.

    So there is no necessity that the top of a mountain, or the busy centre of London, be either frightening or awesome. That bit of the feeling is down to some further judgement. But if the environment has a high contrast with your familiar environment, it is going to be arousing in some way. You will find yourself wanting to make some sense of its novelty.

    Another quick point about the aesthetics of nature. I would argue we are also aesthetically tuned to the recognition of symmetry. We like highly regular shapes that approach some ideal limit. But that explains the appeal of cubes and spheres, not the roughness and jaggedness of your typical spectacular landscape.

    However, a spectacular landscape does have a perfect fractal symmetry. It has an ideal balance in its self-similarity or scale-free shape. So a tree or fern is lovely because it has a self-similar branching structure - completely regular in the irregular way it forks. Same with river networks or eroding mountains and coastlines.

    We can of course appreciate this natural symmetry in a fern or tree quite easily. But to see the fractal nature of a landscape, we do have to have a big vista. We have to step back far enough to see nature over a lot of the scales all at once. Hence another reason why mountain climbing is an aesthetic experience.
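
    If anyone wants to see how little it takes to get that "regular irregularity", here is a toy sketch of my own - one forking rule, repeated at every scale:

    import math

    def branch(x, y, angle, length, depth, segments):
        """Grow a self-similar tree: each branch spawns two shorter branches at fixed angles."""
        if depth == 0 or length < 1:
            return
        x2 = x + length * math.cos(angle)
        y2 = y + length * math.sin(angle)
        segments.append(((x, y), (x2, y2)))
        # the same rule applies to every sub-branch, just scaled down
        branch(x2, y2, angle + 0.5, length * 0.7, depth - 1, segments)
        branch(x2, y2, angle - 0.5, length * 0.7, depth - 1, segments)

    segments = []
    branch(0, 0, math.pi / 2, 100, 8, segments)
    print(len(segments))  # 255 branch segments from one rule - regular in its irregularity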
  • I am an Ecology
    First up, I'm not bothered if my arguments are merely qualitative in your eyes. I am only "merely" doing metaphysics in the first place. So a lot of the time, my concern is about what the usual rush to quantification is missing. I'm not looking to add to science's reductionist kitset of simple models. I'm looking to highlight the backdrop holistic metaphysics that those kinds of models are usually collapsing.

    And then a lot of your questions seem to revolve around your definition of degrees of freedom vs mine. It would be helpful if you explained what your definition actually is.

    My definition is a metaphysically general one. So it is a little fuzzy, or broad, as you say.

    To help you understand, I define degrees of freedom as dichotomous to constraints. So this is a systems science or hierarchy theory definition. I make the point that degrees of freedom are contextual. They are the definite directions of action that still remain for a system after the constraints of that system have suppressed or subtracted away all other possibilities.

    So the normal reductionist metaphysical position is that degrees of freedom are just brute atomistic facts of some kind. But I seek to explain their existence. They are the definite possibilities for "actions in directions" that are left after constraints have had their effect. So degrees of freedom are local elements shaped by some global context, some backdrop history of a system's development.

    Thus I have an actual metaphysical theory about degrees of freedom. Or rather, I think this to be the way that holists and hierarchy theorists think about them generally. Peirce would be the philosopher who really got it with his triadic system of semiosis. Degrees of freedom equate to his Secondness.

    A second distinctive point is that I also follow semiotic thinkers in recognising an essential connection between Boltzmann entropy and Shannon uncertainty - the infodynamic view which Salthe expresses so well. So this is now a quantification of the qualitative argument I just gave. Now biosemiotics is moving towards the possibility of actual science.

    Theoretical biologists and hierarchy theorists like Howard Pattee in particular have already created a general systems understanding of the mechanism by which life uses codes to harness entropy gradients. So the story of how information and dynamics relates via an "epistemic cut" has been around since the 1970s. It is the qualitative picture that led to evo-devo. And it is the devo aspect - the Prigogine-inspired self-organising story of dissipative structures - that has become cashed out in an abundance of quantitative models over the past 30 years. I assume you know all about dissipative structure theory.

    So what we have is a view of life and mind that now is becoming firmly rooted in thermodynamics. Plus the "trick" that is semiotics, or the modelling relation.

    The physico-chemical realm already wants to self-organise to dissipate energy flows more effectively. That in itself has been a small revolution in physical science. What you call configuration entropy would seem to be what I would call negentropy, or the degrees of freedom spent to create flow channelling structure - some system of constraints. And in the infodynamic (or pansemiotic) view, the negentropy is information. It is a habit of interpretance, to use Peirce's lingo. So we have the duality of entropy and information, or a sustaining flow of degrees of freedom and set of structuring constraints, at the heart of our most general thermodynamical description of nature.

    Reductionist thinking usually just wants to talk about degrees of freedom and ignore the issue of how boundary conditions arise. The thermodynamics is basically already dead, gone to equilibrium, by the time anything is quantified. So the boundary conditions are taken as a given, not themselves emergently developed. For example, an ideal gas is contained in a rigid flask and sitting in a constant heat sink. Nothing can change or evolve in regard to the constraints that define the setting in which some bunch of non-interacting particles are free to blunder about like Newtonian billiard balls. But the dissipative structure view is all about how constraints can spontaneously self-organise. Order gets paid for if it is more effective at lowering the temperature of a system.

    So thermodynamics itself is moving towards an entropy+information metaphysics. The mental shift I argue for is to see dissipative structure as not just merely a curiosity or exception to the rule, but instead the basic ontological story. As Layzer argues, the whole Big Bang universe is best understood as a dissipative structure. It is the "gone to equilibrium" Boltzmann statistical mechanics, the ideal gas story, that is the outlier so far as the real physical world is concerned. The focus of thermodynamics has to shift to one which sees the whole of a system developing. Just talking about the already developed system - the system that has ceased to change - is to miss what is actually core.

    So physics itself is entropy+information in some deep way it is now exploring. And then biology is zeroing in on the actual semiotic machinery that both separates and connects the two to create the even more complex phenomenon of life and mind. So now we are talking about the epistemic cut, the creation of codes that symbolise information, capture it and remember it, so as to be able to construct the constraints needed to channel entropy flows. Rivers just carve channels in landscapes. Organisms can build paths using captured and internalised information.

    Only recently, I believe the biosemiotic approach has made another huge step towards a quantitative understanding - one which I explained in detail here: https://thephilosophyforum.com/discussion/comment/105999#Post_105999

    So just as physics has focused on the Planck-scale as the way to unify entropy+information - find the one coin that measures both at a fundamental level - so biology might also have its own natural fundamental scale at the quasi-classical nanoscale (in a watery world). If you want to know what a biological degree of freedom looks like, it comes down to the unit of work that an ATP molecule can achieve as part of a cell's structural machinery.

    To sum up, no doubt we have vastly different interests. You seem to be concerned with adding useful modelling tools to your reductionist kitbag. And so you view everything I might say through that lens.

    But my motivation is far more general. I am interested in the qualitative arguments with which holism takes on reductionism. I am interested in the metaphysics that grounds the science. And where I seek to make contact with the quantitative is on the very issue of what counts as a proper act of measurement.

    So yes, I am happy to talk loosely about degrees of freedom. It is a familiar enough term. And then I would define it more precisely in the spirit of systems science. I would point to how a local degree of freedom is contextually formed and so dichotomous to its "other" of some set of global constraints. Then further, I would point to the critical duality which now connects entropy and information as the two views of "a degree of freedom". So that step then brings life and its epistemic cut, its coding machinery, into the thermodynamics-based picture.

    And now I would highlight how biophysics is getting down to the business of cashing out the notion of a proper biological degree of freedom in some fundamental quantitative way. An ATP molecule as the cell's universal currency of work looks a good bet.

    I'm sure you can already see in a hand-waving way how we might understand a rainforest's exergy in terms of the number of ATP molecules it can charge up per solar day. A mature forest would extract ATP even from the tiniest crumbs dropping off the table. A weedy forest clearing would not have the same digestive efficiency.
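
    Purely to show the shape of such an accounting - every number below is an assumed round figure of my own, not a measurement from the literature:

    AVOGADRO = 6.022e23
    SOLAR_INPUT_J = 5 * 3.6e6        # assume ~5 kWh per square metre per solar day, in joules
    CAPTURE_EFFICIENCY = 0.01        # assume ~1% of that ends up as fixed chemical energy
    ATP_ENERGY_J = 50e3 / AVOGADRO   # assume ~50 kJ/mol of useful free energy per ATP

    captured_joules = SOLAR_INPUT_J * CAPTURE_EFFICIENCY
    atp_per_m2_per_day = captured_joules / ATP_ENERGY_J
    print(f"{atp_per_m2_per_day:.1e} ATP charged per m2 per day")  # ~2e21 on these assumptions

    The mature forest versus weedy clearing comparison would then live in that capture-efficiency term - the digestive efficiency, as I put it.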

    So I've tried to answer your questions carefully and plainly even though your questions were not particularly well posed. I hope you can respond in kind. And especially, accept that I just might not have the same research goals as you. To the degree my accounts are metaphysical and qualitative, I'm absolutely fine about that.
  • I am an Ecology
    Your questions seem off the point so I’m struggling to know what you actually want.

    If you have a professional interest, then there is a big literature. Maybe start with https://www.jameskay.ca/about/thermo.html

    Rod Dewar and Rod Swenson also. I've mentioned Stan Salthe and Robert Ulanowicz. Charlie Lineweaver is another. Adrian Bejan might be the strongest in terms of generic models.

    I’ve not been close to the research for 20 years, and I was always only really interested in the qualitative arguments. Also, the quantitative support wasn’t exactly a slam dunk. Measuring ecosystems is not easy.

    But for instance, one line of research involved thermal imaging of rainforests and other ecosystems. The hypothesis was that more complex ecologies would stick out by having a cooler surface temperature. They would extract more work from the solar gradient.

    Is that the kind of experiment you have in mind?

    Here’s a presentation with references at the end as well as charts of data - https://hyspiri.jpl.nasa.gov/downloads/2011_Symposium/day1/luvall%20hyspiri%20ecological%20thermodynamics%20may%202011%20final.pdf
  • I am an Ecology
    What do you mean? Either we do blow ourselves up, or we do find a long-run ecological balance.

    Well, I was just trying to cheer you up. I realise there is in fact a third option where human ingenuity does get used to keep the game going in ever more extravagant fashion. Rather than changing ourselves to fit nature, many people will quite happily go along with changing nature to fit us.

    This is the Anthropocene. Once we have artificial meat, 3D printed vegetables made from powdered seaweed, an AI labour force and nuclear fusion, who cares about rain forests and coral reefs? Rent yourself some VR goggles and live out that old-time stuff if you are sentimental. Meanwhile, here is an immersive game universe where you can go hunting centaurs and unicorns.

    So probably bad luck. We likely have enough informational degrees of freedom to beat nature at its own game.
  • Level III Multiverse again.
    I am basing a lot of my claims on another thread debating why 0.999... is 1, not just infinitesimally close to it. It was explained by someone who knows their stuff far better than I.noAxioms

    Hah. There certainly is an official position on this. But it is more a matter of what has to be agreed to make the maths come out right than a position won by force of metaphysical argument.

    And I'm not complaining. Maths needs to secure its constructs. It needs to be axiomatic.

    I'm just reminding you that this is what happens, and so maths isn't in a position to tell metaphysics "what is really going on" just because of what it finds works. Maths can act as a powerful constraint on free metaphysical speculation, and also serve as a powerful inspiration to further inquiry. But it isn't how metaphysical truth is discovered.
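
    For completeness, the official answer in question is just the limit definition - standard textbook stuff, not my gloss on it:

    $$ 0.999\ldots \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}} \;=\; \lim_{n \to \infty}\left(1 - 10^{-n}\right) \;=\; 1 $$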

    So the infinite and the infinitesimal speak to the need to establish limits. Constructive actions like counting need to have constraints to bound them as well. It is in fact the same dialectical issue which is at the root of metaphysical reasoning. For something to be the change, something complementary must be made the bit that stands still.

    The number line has to be both continuous yet discrete at the same time. It must be a line composed of points. So of course some fancy mathematical machinery must be added to negotiate what must be a tricky change-over going on somewhere. What connects the points? What permits an exact cut?

    The official answer works. But it is also pervaded by a spirit of "OK guys, shut your eyes for a moment, don't ask any annoying questions, as we do this bit of tricky surgery".