• waarala
    97
    Not really. If you have a passage you're thinking specifically about, please direct me to it.

    Heidegger often says that time, "temporality," is the horizon for any understanding of being. That's a difficult sentence to get your mind around, but since we're essentially caring, temporal beings (human beings), and we have an understanding of being, it is only through temporality that something like "being" can be understood.
    Xtrix

    This is a good point. I should have been more careful. According to B&T we *are* care or concern. That is, we *are* our significative relations (primarily our practical life). Certain kinds of "objects" emerge from here. And this kind of being has temporality as its sense (Sinn). Our care-being means temporal being. Heidegger didn't explore in B&T what being as such is. It seems that our understanding of being as presence stems from a certain kind of care.

    I don't quite understand what you're getting at here. How does the second sentence relate to the first? And what does the second mean?Xtrix

    I am just trying to understand what H. means (in B&T) by "authentic existence" and how it relates to History (Geschichte, not Historie as a science) as H. understands it. What kind of "authentic" mode of presence could there be? And how does this relate to truth as unconcealedness (as opposed to correspondence)? How does the Husserlian "stretched" presence (the now is not a point) relate to this? There, something relatively enduring is "mixed" with something already gone (past) and something not yet present. How does Heidegger conceive "identity in difference"? Or continuity? Or the problems of ideality, substance, etc.?

    ---

    And the way the Husserlian (and Heideggerian?) point of view differs from Aristotle's is that being and becoming are conceived as something belonging to appearance, to intentional experience. They are not purely ontological concepts (without any kind of subjectivity). For Husserl (and for Heidegger?) ontology always has phenomenology as a "correlative" counterpart. And this entails a transcendental subject (Husserl) or Dasein/Existence (Heidegger). Aristotelian ontology is more like a "realistic ontology" (which the rational soul has constructed?).
  • Mikie
    6.2k
    Drama queen, you make me ill.neonspectraltoast

    This has likewise been reported, and will (like your other posts) undoubtedly be deleted by the moderators.

    But keep going until you're kicked off the forum, by all means.
  • Mikie
    6.2k
    It seems that our understanding of being as presence stems from a certain kind of care.waarala

    Right, in this case a "falling," I suppose, which later gets reinterpreted as an aspect (or "ecstasy") of temporality.

    I think for H. it is a question about some enduring whole amidst the change. That is, if there shall be Dasein and its truth.
    — waarala

    I don't quite understand what you're getting at here. How does the second sentence relate to the first? And what does the second mean?
    Xtrix

    I am just trying to understand what H. means (in B&T) by "authentic existence" and how it relates to History (Geschichte, not Historie as a science) as H. understands it.waarala

    It's difficult to know what Heidegger means by "authenticity," but from my reading it means something like not only owning your "self" as a unique individual (as opposed to the herd mentality and conformity of the collective "they" of which we are necessarily a part), but owning and taking responsibility for your own being, in a way -- to see that there's no grounding and that you can decide any way you like what you are, you can define yourself in any way you'd like. You see this all over when people try defining a human being, "human nature," as a "creature of God," as "rational animal," and even further as a "mind" (thinking thing, res cogitans), as a "subject," as a "self" or even "I".

    Authenticity is recognizing your being, your very existence, your ontological nature, not simply your ontical nature as present-at-hand "fact," "material," "substance," "energy," or even "spirit" (since the soul is a kind of "substance" -- an entity, a being). To see that first and foremost, you're a being -- you exist (this is a reversal of Descartes, since we're saying here essentially "Sum, ergo cogito") -- and that there's no "right" way to think about, interpret, categorize, or speak about it. Being (including our being) is a kind of "nullity."

    In other words, you can interpret and describe "being" (your own and the "outside world" as well -- being in general apart from your particular being) in essentially any way you want, and this groundlessness is anxiety-provoking, but if you "flee" it, "flee into the crowd," into conformity (so to speak), never face up or allow yourself to feel this "existential anxiety," you're inauthentic.

    All this implies a value-judgment, but I don't mean it to. There's nothing "bad" about one and "good" about the other -- they're simply different ways of living.
  • VagabondSpectre
    1.9k
    Reasoning and speculating about nature in the field of science and physics is done only to look for insight and clues that can lead to deeper discovery; otherwise they're putting the cart before the horse.

    Science is entirely based on the empirical validity of induction. That is to say, experimental consistency with respect to prediction is the actual driver of scientific knowledge. The cart itself being hauled can be thought of as a static heuristic or set of useful models, and the horse that's actually doing the pulling can be thought of as the act of scientific exploration (conducting research, testing hypotheses, etc...).

    It's all built around induction. If we keep getting the same accurate results when we actually test the predictions that scientific models imply, we increase our surety that the model is accurate. Another way of saying this is that we increase our confidence that applying the theory/model correctly will yield the expected results.
  • Mikie
    6.2k
    speculating about nature in the field of science and physics is done only to look for insight and clues that can lead to deeper discovery; otherwise they're putting the cart before the horse.VagabondSpectre

    What do you mean by "speculating about nature"?

    Science is entirely based on the empirical validity of induction.VagabondSpectre

    Says who?

    That is to say, experimental consistency with respect to prediction is the actual driver of scientific knowledge.VagabondSpectre

    You present this as if you've stumbled on the true definition of science. But in reality, it's not at all clear what drives scientific knowledge -- especially if we don't know what science really is.

    You seem to be responding to my initial post -- but the rest of what you've written has almost nothing to do with it. I'm interested in the ontology of what's called "science," which seems to me to be bound up with a conception of nature. Thus I track the idea through history, to the Greeks and the word phusis (translated into Latin as natura and the root of "physics") -- which is in my title: φύσις. The point is to explore this ancient Greek sense of phusis, as this was their word for being, and to see how it differs from our modern conception of being in science ("nature," the "cosmos," etc).

    Talking about the inductive method isn't relevant here.
  • VagabondSpectre
    1.9k
    What do you mean by "speculating about nature"?Xtrix

    Imagining the way the world could be, could work, has been, or will be, without conducting a single experiment to validate those imaginations.

    Says who?Xtrix

    It's not an absolute rule, but things which have no consistent empirical validation tend to get systematically expelled from the body of scientific knowledge.

    You present this as if you've stumbled on the true definition of science. But in reality, it's not at all clear what drives scientific knowledge -- especially if we don't know what science really is.Xtrix

    But we *do* know what science is (it's a body of concepts and models with sufficient experimental predictive power). We even know what it really is (induction via empiricism). You're free to suppose science is a continuous and emergent thing, tracing roots through ancient times (and ancient fallacies), but its evolution is much more discretized than that. Before the notion that experimental validation is how we should test scientific models really took hold, "science" had a hard time advancing. Shaking off the dead weight of superstitiously derived ontic assumptions has often made the difference.

    You seem to be responding to my initial post -- but the rest of what you've written has almost nothing to do with it. I'm interested in the ontology of what's called "science," which seems to me to be bound up with a conception of nature. Thus I track the idea through history, to the Greeks and the word phusis (translated into Latin as natura and the root of "physics") -- which is in my title: φύσις. The point is to explore this ancient Greek sense of phusis, as this was their word for being, and to see how it differs from our modern conception of being in science ("nature," the "cosmos," etc).

    Talking about the inductive method isn't relevant here.
    Xtrix

    I have given you a compressed definition of what "nature" means in terms of modern science. Nature is the way things are as revealed by controlled and repeated experimentation and testing (consistent observations and predictions); nature is the destination and/or the cargo the cart is seeking, not the force driving its advancement. "Nature is my god" is not a serious or guiding attitude, other than to say the success of science relies on the universe actually containing consistent behavior that can be modeled, and all claims must be confirmed or disconfirmed with physical evidence/repeatable experimentation. If you want to contrast this with ancient opinion, rather than supposing gods may have whimsical or changing designs that need fancy and snow-flaked interpretations, we suppose that god doesn't roll dice.

    Importantly, nature is the thing science is attempting to model; it cannot reason from nature or appeal to nature (the naturalistic fallacy). Speculating about the nature of things (meaning to say, making untested or unstable assumptions) is one of the cardinal differences between a primitive and error-prone ontology like Aristotelian teleology (haphazardly assigning qualities, functions, purposes, etc...) and the modern scientific method. Aside from that, modern science eschews "meaning" and "why" in favor of "observable cause", so on some level the comparison between ancient (and rather superstitious) ontologies and the ontology of modern science is apples to oranges. Ancient systems tend to include meaning or ethical components which are completely invisible to the lens of modern science.

    The move from philosophical speculation to more strict empiricism is why modern science actually got somewhere.

    Re-responding to the final paragraph of your post with this in mind:

    The analysis of this concept is very important indeed to understand our current scientific conception of the world, and therefore the predominant world ontology (at least non-religious, or perhaps simply the de facto ontology). Does anyone here have an analysis to share, original or otherwise? Full disclosure: I am particularly struck by Heidegger's take, especially in his Introduction to Metaphysics. But other analyses are certainly welcome.Xtrix

    It's not necessary to understand Aristotle's teleology or Descartes' doubt to understand modern science; science is necessary to understand science. I don't say this facetiously, it's the very crux of science itself: make no starting assumptions about what something is or the way things are, and instead use observation, experience, and systematic modeling/experimentation to gain predictive and therefore descriptive power. It's less about having a world-view colored by historical conceptions than it is about testing presumptive worldviews.

    Predictive power is ultimately the only signal of truth that we have. Comparing this to the sciences of old, much of it is comforting self-delusion and window-dressing derived to fit metaphysical prior assumptions.
  • Mikie
    6.2k
    Imagining the way the world could be, could work, has been, or will be, without conducting a single experiment to validate those imaginations.VagabondSpectre

    Galileo didn't conduct any experiments besides thought experiments. Eratosthenes figured out the approximate circumference of the Earth, and Aristarchus the relative sizes and distances of the Sun and Moon, with basic mathematical reasoning. Many of Einstein's ideas were also based in thought experiments.
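
    (To sketch the kind of reasoning involved, using the traditional figures, which are only approximate: at the summer solstice the noon sun was directly overhead at Syene but stood about 7.2° off the vertical at Alexandria, i.e. 1/50 of a full circle. Taking the two cities to be roughly 5,000 stadia apart along the same meridian, the circumference comes out to about 50 × 5,000 = 250,000 stadia. No laboratory experiment, just observation plus geometry.)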

    But we *do* know what science is (it's a body of concepts and models with sufficient experimental predictive power). We even know what it really is (induction via empiricism).VagabondSpectre

    No, we don't. It's just not so simple, otherwise there wouldn't be work in the philosophy of science. The notions of "science" and of a scientific "method" have a very long and interesting history, and sometimes they fit what goes on -- but oftentimes they don't. To say that it's just a matter of empirical observation and experimentation does little good -- that's natural philosophy, too. The Greeks were doing that as well. Is archeology not a science because it doesn't have "sufficient experimental predictive power"? What about genetics or evolutionary biology?

    You're free to suppose science is a continuous and emergent thing, tracing roots through ancient times (and ancient fallacies), but its evolution is much more discretized than that.VagabondSpectre

    Not really -- because we don't know what "it" really is. What's evolving, exactly? If you believe that some discrete "enterprise" or "activity" has evolved which we label "science," then that's one way to look at it -- but again we're left with "What is science?" Well, if we take a look at the beginning of modern "science," in Copernicus and Galileo, and even in Newton, you'll find lessons that don't fit your current conceptions very well at all. Take Leibniz, even -- was he not a scientist? Was he a philosopher?

    Remember, these categories didn't exist to Leibniz, Newton, or even Kant. They certainly didn't matter to Democritus, Archimedes, Aristarchus, or Euclid.

    Before the notion that experimental validation is how we should test scientific models really took hold,VagabondSpectre

    And when was that, exactly? When did this notion take hold? The 17th century? 18th? 19th? Are you really so certain it was this notion that drove progress? So what was happening in the Late Middle Ages and Renaissance? Or the Islamic Golden Age? Or Ancient Greece? Or even in Mayan astronomy, Babylonian mathematics, and Egyptian engineering? Was all this activity non-science?

    I have given you a compressed definition of what "nature" means in terms of modern science. Nature is the way things are as revealed by controlled and repeated experimentation and testing (consistent observations and predictions)VagabondSpectre

    The "way things are"? So only the things that "science" tells us are "real" truly "are"?

    Importantly, nature is the thing science is attempting to model;VagabondSpectre

    That's fine, but what is it? What is nature? Just "whatever it is science studies"? That's not saying that much, although that seems to be the case. Natural philosophers did that very thing.

    it cannot reason from nature or appeal to nature (the naturalistic fallacy).VagabondSpectre

    That's not the naturalistic fallacy. The naturalistic fallacy, as Moore formulated it, has to do with justifying moral claims on the basis of what's natural.

    Speculating about the nature of things (meaning to say, making untested or unstable assumptions) is one of the cardinal differences between a primitive and error-prone ontology like Aristotelian teleology (haphazardly assigning qualities, functions, purposes, etc...) and the modern scientific method.VagabondSpectre

    No, this is completely wrong.

    "Speculation" about things -- thinking about them, trying to understand them, formulating hypotheses, making guesses, conducting creative thought experiments, etc. -- are simply what human beings have been doing for millennia. They go down many blind alleys, they're often wrong, theories get overturned and adapted, etc. This is true today as well -- we're no doubt wrong about many, many things. The Standard Model, quantum mechanics, mathematics, atomic theory, the Big Bang, not to mention neurology, psychology, and sociology, will go through many changes in the centuries to come. To look back on the Greeks and dismiss them as primitive, along with their "error-prone ontology" (whatever this means), is simply a common mistake. It's one you can make only if you truly believe there's a discernible and clearly-defined boundary between OUR "science" and superstitious speculations of the past.

    Again, it's simply not that easy -- and completely unsupported by historical evidence.

    The move from philosophical speculation to more strict empiricism is why modern science actually got somewhere.VagabondSpectre

    Eh, this is nonsense I'm afraid, and you know it. Just think about it for a minute. Take an example I gave: Aristarchus. Was he wrong? Was that not science? Was that superstition? Or maybe just "luck"? Was that not "getting anywhere"? What about Democritus's theory of atoms? Was Euclid a superstitious man? Did the Phoenician sailors, using the stars as navigation, get lucky in their calculations? Ditto the Egyptians, with their elaborate constructions of the pyramids, or the Sumerians and their ziggurats?

    All these primitive, superstitious people -- without our modern sensibilities and "method" of science -- seemed to "get somewhere," I'd say. In fact they laid the foundations for much of what we currently know.

    I don't say this facetiously, it's the very crux of science itself: make no starting assumptions about what something is or the way things are,VagabondSpectre

    Science has no starting assumptions? That's just nonsense. See below.

    Predictive power is ultimately the only signal of truth that we have. Comparing this to the sciences of old, much of it is comforting self-delusion and window-dressing derived to fit metaphysical prior assumptions.VagabondSpectre

    What "science of old" are you referring to, exactly? I imagine for every example of views later proven false, like Ptolemy's, you can also produce many from the 19th century that were completely false (miasma theory), or the 18th (corpuscularianism), and on into the contemporary era. There's plenty of self-delusion going on right now, undoubtedly.

    But to say science is "assumption"-free is just utter nonsense. Scientists are at least assuming there's a nature to be studied (or the cosmos, or the "universe") and that it follows comprehensible causal patterns. So there are assumptions of intelligibility, predictability, causality, and spatio-temporal relations as well -- before we even set about researching or attempting to understand some aspect or another.

    Furthermore, what matters when trying to understand something is questioning, interrogating, and interpreting. You don't start from "nothing" and simply read off reality from nature. There's a contribution of our thinking minds, in how we perceive, categorize, and interpret the world.

    We should really start to break down these false and rather simplistic notions like "the scientific method" that distinguishes "science" from "non-science." Better to say that there are, and always have been, human beings who are curious and interested in understanding the world. We do it with creativity, with our language and logic and mathematics, with observing and experimenting, with theorizing, with speculation, with collaboration with others, and with simple trial and error. We make "progress" (which is value- and goal-dependent), we create new technologies, we document new findings and build off of them, and on and on.

    We can go back and forth between "philosophy" and "science" if we want to define things in a certain way. As Bertrand Russell said once:

    "Roughly you'd say, that science is what we know and philosophy is what we don't know. Questions are perpetually crossing over from philosophy into science as knowledge advances. All sorts of questions that used to be labeled philosophy are no longer so labeled."

    That's fine, provided we want to define things this way.
  • VagabondSpectre
    1.9k
    Galileo didn't conduct any experiments besides thought experimentsXtrix

    This is a ludicrous assertion. He conducted many thought experiments, yes, and he even got stuff wrong, but he was also a champion of observation and the application of maths to those observations. From wiki:

    His work marked another step towards the eventual separation of science from both philosophy and religion; a major development in human thought. He was often willing to change his views in accordance with observation. In order to perform his experiments, Galileo had to set up standards of length and time, so that measurements made on different days and in different laboratories could be compared in a reproducible fashion. This provided a reliable foundation on which to confirm mathematical laws using inductive reasoning.

    No, we don't. It's just not so simple, otherwise there wouldn't be work in the philosophy of science.Xtrix

    There's no work in the philosophy of science. It's already a matured school, and scientists at large hardly even use it. There's need for the application of philosophical thought within many of the theoretical fields, especially where exploring new ideas to test is needed, but ultimately those hypotheses must be fed through the bull-shit chipping bottle-neck of experimentation and reliable prediction.

    To say that it's just a matter of empirical observation and experimentation does little good -- that's natural philosophy, too. The Greeks were doing that as well. Is archeology not a science because it doesn't have "sufficient experimental predictive power"? What about genetics or evolutionary biology?Xtrix

    Why didn't the Greeks get anywhere interesting beyond a priori mathematics and some masonry skills? They had some bright people, but the limited information they had - the limited observations they could make - resulted in a worldview that was perforated with bull shit.

    Archeology is an interesting field, and archeologists readily accept that the inductions they make are more precariously hinged on available evidence (like the ancient Greeks they have many more limitations, but unlike the Greeks they understand this fact and refrain from bullshitting before the evidence arrives). If you're interested to know how archeology can yield predictive power, one facet is the ability to anticipate the presence and content of human artifacts. By having a model of human movement and evolution over time, we may become able to make inferences about where ancient human groups are likely to have localized. We can also anticipate how one group may have changed over time by understanding how another group migrated into or through their lands. Projecting backwards is more difficult in terms of confirmation (because when we project forward, the future itself becomes confirmation).

    And when was that, exactly? When did this notion take hold? The 17th century? 18th? 19th? Are you really so certain it was this notion that drove progress? So what was happening in the Late Middle Ages and Renaissance? Or the Islamic Golden Age? Or Ancient Greece? Or even in Mayan astronomy, Babylonian mathematics, and Egyptian engineering? Was all this activity non-science?Xtrix

    Some of it may have been downright scientific, but if we're talking about the modern body of scientific knowledge, then it all needs to be checked by modern standards.

    I don't have the answer to exactly when modern science was developed; it's still under development, and those developments come in the form of discoveries which open up new models, tools, and methods of observation and prediction.

    Not really -- because we don't know what "it" really is. What's evolving, exactly? If you believe that some discrete "enterprise" or "activity" has evolved which we label "science," then that's one way to look at it -- but again we're left with "What is science?" Well, if we take a look at the beginning of modern "science," in Copernicus and Galileo, and even in Newton, you'll find lessons that don't fit your current conceptions very well at all. Take Leibniz, even -- was he not a scientist? Was he a philosopher?

    Remember, these categories didn't exist to Leibniz, Newton, or even Kant. They certainly didn't matter to Democritus, Archimedes, Aristarchus, or Euclid.
    Xtrix

    It's like you're objecting to the existence of a discrete contemporary organism by pointing to an evolutionary lineage of predecessors. Yes, science evolved; no, modern science is not constrained by its prototypical origins.

    Speculation" about things -- thinking about them, trying to understand them, formulating hypotheses, making guesses, conducting creative thought experiments, etc. -- are simply what human beings have been doing for millennia. They go down many blind alleys, they're often wrong, theories get overturned and adapted, etc. This is true today as well -- we're no doubt wrong about many, many things. The Standard Model, quantum mechanics, mathematics, atomic theory, the Big Bang, not to mention neurology, psychology, and sociology, will go through many changes in the centuries to come. To look back on the Greeks and dismiss them as primitive, along with their "error-prone ontology" (whatever this means), is simply a common mistake. It's one you can make only if you truly believe there's a discernible and clearly-defined boundary between OUR "science" and superstitious speculations of the past.

    Again, it's simply not that easy -- and completely unsupported by historical evidence.
    Xtrix

    When a good empiricist speculates, they do it for practical reasons, and they do not go on to accept the speculation without adequate experimental validation. This practice is what helps to ensure that scientifically accepted "facts" are very robust and consistent. This makes them fundamentally usable as building blocks for more complex models (and so on). Yes we get things wrong, but you're fundamentally misunderstanding (or just not perceiving) that modern science is an observation/experiment/prediction-demanding crucible compared to the science of old.

    Eh, this is nonsense I'm afraid, and you know it. Just think about it for a minute. Take an example I gave: Aristarchus. Was he wrong? Was that not science? Was that superstition? Or maybe just "luck"? Was that not "getting anywhere"? What about Democritus's theory of atoms? Was Euclid a superstitious man? Did the Phoenician sailors, using the stars as navigation, get lucky in their calculations? Ditto the Egyptians, with their elaborate constructions of the pyramids, or the Sumerians and their ziggurats?

    All these primitive, superstitious people -- without our modern sensibilities and "method" of science -- seemed to "get somewhere," I'd say. In fact they laid the foundations for much of what we currently know.
    Xtrix

    Democritus didn't lay the foundation for atomic science so much as he happened to guess more aptly than his peers. Until we actually get to modernity, atoms might as well have been Horton's Whos.

    What are your intentions in trying to compare modern scientific standards to ancient ones? They're vastly different.

    Science has no starting assumptions? That's just nonsense. See below.Xtrix

    Yes, the problem of induction is a thing. "How do we know that just because something has given us predictive power in the past that it will give us predictive power in the future"?... This is not a question that concerns me...

    What "science of old" are you referring to, exactly?Xtrix

    Specifically, pertaining to the method itself, where rigid testability and reproducibility standards do not exist (i.e: where speculation reigns)...
  • Mikie
    6.2k
    This is a ludicrous assertion. He conducted many thought experiments, yes, and he even got stuff wrong, but he was also a champion of observation and the application of maths to those observations.VagabondSpectre

    How could it be otherwise? Of course he was a champion for observation, calculation, and precise reasoning. This has nothing to do with the myths of dropping balls from Pisa or experimenting with a frictionless plane, for example. I find it odd that you declare it a "ludicrous assertion" yet don't provide one example of a Galileo experiment, even in your citing Wikipedia. If he performed one, that's fine -- maybe he did. But the major breakthroughs he made were mainly thought experiments. This is not meant as a criticism of Galileo.

    But more importantly, this statement of mine was in response to your claim about experimentation, and so I think you're very much missing the point.

    No, we don't. It's just not so simple, otherwise there wouldn't be work in the philosophy of science.
    — Xtrix

    There's no work in the philosophy of science. It's already a matured school, and scientists at large hardly even use it.
    VagabondSpectre

    There's plenty of work in the philosophy of science, even today, as you know. Things are published all the time. Whether "scientists at large" (not sure what this means) "use it" (use what, exactly?) is irrelevant: I'm talking about the philosophy of science. That would indicate it's a job for philosophers, not scientists. I realize most scientists regard philosophy with a great deal of contempt, in fact, so it wouldn't surprise me if they don't bother with the philosophy of science at all.

    Why didn't the Greeks get anywhere interesting beyond a priori mathematics and some masonry skills?VagabondSpectre

    ? Masonry skills? "A priori mathematics"? What are you talking about? Your history is very confused.

    They had some bright people, but the limited information they had - the limited observations they could make - resulted in a worldview that was perforated with bull shit.VagabondSpectre

    So they were just like us, in other words. Plenty of bullshit everywhere -- as many scientists admit freely -- that we're simply not yet aware of.

    But we have "some bright people," too.

    Archeology is an interesting field, and archeologists readily accept that the inductions they make are more precariously hinged on available evidence (like the ancient Greeks they have many more limitations, but unlike the Greeks they understand this fact and refrain from bullshitting before the evidence arrives).VagabondSpectre

    Oh, is that what the Greeks did? Good to know! In fact I know a lot of archaeologists who do a lot of speculating -- historians too. Why? Because the data isn't there yet. It doesn't stop them from speculating and making educated guesses. Take the "invasion of the Sea Peoples," for example. We don't know what happened between 1200 and 800 BCE, but there is plenty of speculation (and most of it, we will find, is probably completely wrong) that people in the future will call "bullshit."

    And when was that, exactly? When did this notion take hold? The 17th century? 18th? 19th? Are you really so certain it was this notion that drove progress? So what was happening in the Late Middle Ages and Renaissance? Or the Islamic Golden Age? Or Ancient Greece? Or even in Mayan astronomy, Babylonian mathematics, and Egyptian engineering? Was all this activity non-science?
    — Xtrix

    Some of it may have been downright scientific, but if we're talking about the modern body of scientific knowledge, then it all needs to be checked by modern standards.
    VagabondSpectre

    Modern standards like what?

    To argue we are "special" somehow is a very common stance. Every culture does it. So you're in good company. Unfortunately, the historical evidence just doesn't support it.

    You're seemingly ignorant of history, I'm afraid. No offense meant. It's just worth digging a little deeper, otherwise we get a kind of tunnel vision where we believe we've reached a pinnacle of human progress.

    I don't have the answer to exactly when modern science was developed;VagabondSpectre

    Well ask yourself why. The answer may be interesting.

    Science historians often begin the era of modern science with Copernicus or Galileo, which is why I use them as examples, along with Newton -- often acknowledged as one of the greatest scientists. These guys did lots of pure thought experiments, used mathematics (or invented some of their own), were completely wrong about a lot of things, were Christians (in Newton's case, fairly devout), etc. They also didn't identify as "scientists," but as "natural philosophers." So were they not scientists? Were they not doing science? If they were, then so were many of the Greeks, like Aristarchus. If they weren't, then when does "science" really begin? The 19th and 20th century? With Galton, Maxwell, Einstein, Bohr perhaps?

    I think you see the issue.

    It's like you're objecting to the existence of a discrete contemporary organism by pointing to an evolutionary lineage of predecessors. Yes, science evolved; no, modern science is not constrained by its prototypical origins.VagabondSpectre

    No, there's no "objecting" to it -- I'm trying to understand it, just as I would a "discrete organism" by both understanding its current iteration and its historical evolution. Who said anything about being "constrained"? Are birds "constrained" by the fact that they've evolved from dinosaurs?

    When a good empiricist speculates, they do it for practical reasons, and they do not go on to accept the speculation without adequate experimental validation.VagabondSpectre

    In response specifically to the "experimental" part: What about astronomers? Most could not (and still cannot) conduct any direct experiments whatsoever. In fact a great deal has been learned without any experiments at all -- just careful observation and reasoning.

    Regardless, yes of course they don't just "accept the speculation." The Greeks didn't do that, the Muslims didn't do it, the Babylonians didn't do it. You keep trivializing these people as "primitive" and "superstitious," but I think that's a big mistake. You've studied history, yes? How can you not be impressed by Mayan astronomy, or by Babylonian astronomy and mathematics? Of course looking back over a distance of millennia makes us view some of the beliefs and practices of these cultures as barbaric -- but that's quite apart from the obvious achievements.

    Yes we get things wrong, but you're fundamentally misunderstanding (or just not perceiving) that modern science is an observation/experiment/prediction-demanding crucible compared to the science of old.VagabondSpectre

    And Archimedes and Aristarchus didn't demand these things? Of course they did. In many cases they even carried out experiments, too. Now it's true that prior to telescopes and microscopes and other technologies, only educated "guesses" and speculations and theorizing were possible -- but we're often in the same position today with many things, like linguistics: we can't conduct the types of experiments on humans that would be necessary for understanding how language develops. Or when it comes to abiogenesis or the reasons for the Bronze Age collapse, or a host of other things. So what? That doesn't make us primitive -- it means we're struggling to understand the world and working on more creative ways to attack a problem or question.

    What are your intentions in trying to compare modern scientific standards to ancient ones? They're vastly different.VagabondSpectre

    You keep saying that, and keep failing to show what exactly those standards are and how they differ. When you do -- for example, observation and experimentation and prediction -- it's shown that the ancients were often (not always) doing the same thing. And not just the ancients, but the people of the Middle Ages, in the Renaissance, and in the modern era.

    Things certainly change (theories change, technology changes, etc.), but what we've ended up with in our modern conception of science, as an "enterprise" or a "community" of people in lab coats carefully conducting experiments in a laboratory (or some portrayal like that), is a limited and rather narrow view indeed. I'm sure you would agree. So if it isn't that exactly, then why bother pretending that it's a clear-cut, separate "department" from natural philosophy?

    Yes, the problem of induction is a thing. "How do we know that just because something has given us predictive power in the past that it will give us predictive power in the future"?... This is not a question that concerns me...VagabondSpectre

    I never once mentioned the problem of induction. Nor does it interest me.

    What "science of old" are you referring to, exactly?
    — Xtrix

    Specifically, pertaining to the method itself, where rigid testability and reproducibility standards do not exist (i.e: where speculation reigns)...
    VagabondSpectre

    And we're back to the "method" again. It's true that modern science grew up with inductive logic and the "inductive method," especially in the writings of Bacon. But it's been shown over and over again to be a myth. And that's not my view -- that's coming from many creative scientists and philosophers of science. I think they're on to something.

    The bottom line is this: "science" as a human activity is a kind of inquiry and questioning of the world, asking basic questions about it in an attempt to understand it somehow. This rational inquiry used to be called "natural philosophy" before even the word "science" (mid-14th century) appeared. This is not to say what we presently mean by "science" is the same as what was meant in the 19th century, or that there are no differences at all between "philosophy" and "science," or that there aren't clear examples of the "scientific method." I think the analogy of an organism is exactly right, say a dog: we know it's different than it was 100K years ago (a wolf), we know it's evolved, and yet here it is. Like anything in the world, there are aspects that stay the same and aspects that are changed.

    The point of this thread is to analyze, in particular, the ontology of science in the sense of studying "nature" or the "physical world," which is what is very often claimed. The question then is "What is nature/what is the physical?" Even if we accept that science is a completely distinct human enterprise, characterized by a special method, which is where we've digressed, we are still in the position of having to explain its ontological basis. Why? Why is this important? Because unexamined conceptions, especially fundamental ones, can lead us down blind alleys for decades without us realizing it. To think of "nature" as matter in motion, or the "physical" as anything "material" (like atoms), etc., has real world consequences for inquiry and research. If we really don't know what these terms mean, can't define them precisely, or define them in such a way as to be absurd, then we're potentially treading water in many pursuits. It doesn't mean we stop everything until we "at last" have discovered the "true" foundations of science, but it also doesn't mean we simply gloss over examining them because it's "highfalutin philosophical mumbo-jumbo."

    The origin of both "nature" and "physical" is the Greek term phusis. All of Western thought has been shaped by the Greeks, from the Romans to the Christians, to the Scholastics and Descartes, to Galileo and Newton, to Einstein, Heisenberg and Schrodinger, up to Sagan and Hawking and Dawkins and Greene and Chomsky.

    All of them, whether "philosophers" or "mathematicians" or "scientists" or whatever, continue to operate in an ontology that had its inception in Greece -- in the thought of Anaximander, Parmenides, Heraclitus and, later, Plato and Aristotle. These latter two men's influence, especially, is impossible to ignore. To dismiss it all as historical rambling, or to wave off the Greeks as "primitive, superstitious speculators," is pure insanity. In my view.
  • VagabondSpectre
    1.9k
    How could it be otherwise? Of course he was a champion for observation, calculation, and precise reasoning. This has nothing to do with the myths of dropping balls from Pisa or experimenting with a frictionless plane, for example. I find it odd that you declare it a "ludicrous assertion" yet don't provide one example of a Galileo experiment, even in your citing Wikipedia. If he performed one, that's fine -- maybe he did. But the major breakthroughs he made were mainly thought experiments. This is not meant as a criticism of Galileo.

    But more importantly, this statement of mine was in response to your claim about experimentation, and so I think you're very much missing the point.
    Xtrix

    Experimentation (reliable prediction) is still the bottle-neck through which Galileo's assertions must pass to enter the modern body of scientific knowledge. Yes he conducted many real experiments, notably with pendulums if you must have an anecdote.

    I'm not saying that thought experiments have no place in doing science, I'm saying that the crux of modern science (again, why it has been successful) is the demand for actual observable experiments to confirm the prior speculations.

    There's plenty of work in the philosophy of science, even today, as you know. Things are published all the time. Whether "scientists at large" (not sure what this means) "use it" (use what, exactly?) is irrelevant: I'm talking about the philosophy of science. That would indicate it's a job for philosophers, not scientists. I realize most scientists regard philosophy with a great deal of contempt, in fact, so it wouldn't surprise me if they don't bother with the philosophy of science at all.Xtrix

    What is philosophy of science in your view?

    ? Masonry skills? "A priori mathematics"? What are you talking about? Your history is very confused.Xtrix

    So the Greeks never constructed any lasting stone monuments? They didn't innovate any fundamental mathematical theorems? Care to offer a correction?

    https://en.wikipedia.org/wiki/Ancient_Greek_architecture#Masonry

    https://en.wikipedia.org/wiki/Pythagoras

    So they were just like us, in other words. Plenty of bullshit everywhere -- as many scientists admit freely -- that we're simply not yet aware of.

    But we have "some bright people," too.
    Xtrix

    If modern science was full of shit, then satellites would fall out of the sky, smart phones would stop working, vaccines would not work, the new Tesla autopilot would crash more often than humans, etc...

    The whole point is to reduce the bull-shit; that's the scientific shtick. Making a relativistic comparison to ancient bull-shit and saying "oh sure, everything we know now is probably bull shit" is fine, but the evidence is stacked against you. We know more than we did; what we know now is more reliable and less likely to turn out inaccurate (the benefits of reproducible experimentation...). Unless the universe itself changes, Newton's laws of motion are not going to suddenly become useless for predicting the movement of masses through space. Einstein's general relativity isn't going to suddenly stop providing accuracy-increasing tweaks and depth to the Newtonian system, etc...
  • waarala
    97
    Heidegger's point is that the birth of modern science, or Galileo, involved a certain "project" (design, Entwurf) on the basis of which the world was disclosed in such a manner that its mathematical explanation became possible. Behind mathematical natural science there operates a certain understanding of the world, or of being. This resembles Kant's transcendental sphere, which makes objective science, or objects, possible. For Heidegger the "transcendental" is not a logical sphere (in the Kantian sense) or forms of thought derived from the logical categories; it is the understanding of being. The understanding of being is essentially historical. There are "great events" that change this understanding and consequently our whole conception of the world. The pre-Socratics were an event, the Aristotle-Plato "complex" was an event, Galileo-Descartes-Newton was an event. All these events changed our understanding of being.

    To understand our modern technical-scientific world we have to understand the "project" within which the Galilean understanding operates. The later Heidegger's conception of the modern "(un)being" (negative connotation) as "Ge-stell" ("Enframing") aims precisely at this understanding when it "analyses" our Galilean-modern "world view". For Heidegger there is no progression, evolution or enlightenment involved here. These historical events are just "fateful" transformations in our world view. What we count as "knowledge" or truth changes, and this means that our whole life gets a new direction. (On the other hand, the theme of Being (and its relation to Truth) runs as a common underlying theme through all these alterations. Somehow being, or how being has not been reflected upon, how it has been left unthought, affects all these events. All presumed radically new beginnings have been operating on this "hidden", unthought ground. They have been guided by a certain horizon which keeps these fateful (or seemingly accidental) transformations within certain limits.)
  • Mikie
    6.2k
    I'm not saying that thought experiments have no place in doing science, I'm saying that the crux of modern science (again, why it has been successful) is the demand for actual observable experiments to confirm the prior speculations.VagabondSpectre

    That's fine -- but why only "modern"? That's my point. It's not that we're not very successful, or that observation, experimentation, and predictiveness don't exist or aren't important. Of course all of that is very important indeed, and a key feature in the success or failure of achieving our goals. The point is that there is no clear separation between this and anything going on over the last 4,000 years. There are plenty of examples, at any period, of blind alleys and mistakes -- in every field; there are also examples of huge breakthroughs and significant progress. So in ancient Greece, for every Aristarchus there is a Ptolemy. In the 19th century, for every Darwin there is a Lamarck. Then there are many grey areas -- the rethinking of Newtonian physics through Einstein's relativity, as you mentioned.

    As long as humans are attempting to understand the world rationally, there will always be mistakes and successes. When we're completely irrational, that's not only no longer "science" but no longer "philosophy" either -- it's just nonsense. But even here, in the case of folk science, myth, superstition, and religion, there's a lot of overlap. Remember that chemistry emerged from alchemy, that astronomy evolved from astrology, etc.

    So yes, logic, reasoning, rationality -- attention to details, thought experiments, speculations, theory-creation, testing hypotheses, careful observation, and repeatable experiment are all very important pieces of our attempts to understand the world of nature. No question. Whether we call this activity "natural philosophy" as Newton, Galileo, Descartes, and Kant did, or we call it "science" in the 21st century doesn't really matter much. The goal is the same: understand the world. This has been true for a long, long time, and there are plenty of historical examples.

    The problem you're having is in believing that science owes its success to a special method, which we can understand and define, and which scientists can use consciously and deliberately, and that this is a relatively recent invention which sets "science" apart from other endeavors, all of which now become "non-science" or perhaps "quasi-science." But when one starts reading history or even the contemporary era, one finds that this distinction really starts to break down, that it's harder than you think to define, that while there are many examples where it fits there are many others where it doesn't fit, etc.

    But my point in this thread wasn't to digress into the definition of science or the scientific method, per se, but rather to assume that people (like yourself) do indeed believe there is a useful and relatively clear demarcation, and to try to figure out what the historical basis for its ontology is. By "ontology" I mean its perspective on what the world 'is,' about what exists -- about being. When you ask that question, you often find that the answer given is "The universe, or nature as a whole, is all that is." Then when you ask what the "universe" is, you're given answers about matter, motion, and forces -- concepts from physics and chemistry.

    Those fields tell us about atoms and molecules, about energy and mass, about causes and effects, and about space and time using (mainly) specialized, technical language, logic, and mathematics -- backed up with empirical evidence from observation and experiments.

    And that's where we stand currently. If science interprets "all there is" (being as a whole) as, essentially, "physical nature," then that's a very definite worldview -- a very important ontology. It's opposed, say, by Christian ontology where all that is, all of being, is "creation" and "God."

    So my point is: let's look at the words and see if their history through the ages gives us any clues or illuminates our current, powerful (and dominant, at least among educated people) understanding of being. Maybe it does, maybe it doesn't. I personally think it does, and helps us become a little less dogmatic and guards against the pitfalls of "scientism" and, more importantly, a kind of nihilism that Nietzsche analyzed and warned us about. Why is this, in turn, important? I've already written enough, so I won't bore you further, but it turns out this has definite real-world consequences which we all are currently living in.

    If modern science was full of shit, then satellites would fall out of the sky, smart phones would stop working, vaccines would not work, the new Tesla autopilot would crash more often than humans, etc...VagabondSpectre

    I don't think we're full of shit. I don't think the Greeks were full of shit either, though. My point was that there are examples of people getting things very right and getting things very wrong. Perhaps it's true we get less wrong now, but that's not what scientists tend to think -- they acknowledge that there is still much we don't know, we're probably on the wrong track, that hundreds of years from now what we know currently will be outdated, etc. A little humility is required (which is easy when studying history and then projecting, say, 2000 years into the future). I would hate to think people 2000 years from now would look back and only see our flaws and mis-steps and thus label us "primitive" people who were "full of shit." I don't think that's entirely fair.

    The whole point is to reduce the bull-shit; that's the scientific shtick. Making a relativistic comparison to ancient bull-shit and saying "oh sure, everything we know now is probably bull shit" is fine, but the evidence is stacked against you.VagabondSpectre

    I agree. I would just take issue with the sweeping claim of "ancient bull-shit," however. Does that include all of Aristotle's work? Does it include Aristarchus and Archimedes and Euclid? Apparently not. So at least we can admit it wasn't all bullshit, just as it isn't all bullshit now (which was the point I was trying to make by giving examples of how we currently have plenty of bullshit too; it wasn't to suggest we're completely full of shit).
  • VagabondSpectre
    1.9k
    And that's where we stand currently. If science interprets "all there is" (being as a whole) as, essentially, "physical nature," then that's a very definite worldview -- a very important ontology. It's opposed, say, by Christian ontology where all that is, all of being, is "creation" and "God."Xtrix

    You're not describing a modern scientific attitude or position though (science accepts that the jury is still out on "all there is"). Asking for some kind of grand definition for everything is not a scientifically coherent question.

    It's not a very definite worldview....

    So my point is: let's look at the words and see if their history through the ages gives us any clues or illuminates our current, powerful (and dominant, at least among educated people) understanding of being. Maybe it does, maybe it doesn't. I personally think it does, and helps us become a little less dogmatic and guards against the pitfalls of "scientism" and, more importantly, a kind of nihilism that Nietzsche analyzed and warned us about. Why is this, in turn, important? I've already written enough, so I won't bore you further, but it turns out this has definite real-world consequences which we all are currently living inXtrix

    You keep suggesting that modern scientists' "conception of being" hinges on the developmental history of science, but what if someone creates a brand new theory of matter? In order to understand the cutting edge, do we actually need to examine the hilt or the pommel? In the case that modern models deviate entirely from models of old, we don't actually need the models of old to comprehend the new, but we absolutely need to examine the new in and of itself.

    When it comes to scientific over-confidence, there's no broad heuristic which you can derive to safely make a rule of thumb. Some scientific models are wrong, and they will be changed, and many individuals and scientists are vastly over-confident in their models. This isn't an inherent feature of science though; some scientists aren't over-confident, and some models may never be changed. To determine where the over-confidence lies, it is 100% required to address the contemporary models and evidence themselves, otherwise you're reasoning about the way things are without actually looking at the way things are.

    Perhaps it's true we get less wrong now, but that's not what scientists tend to thinkXtrix

    Of course it's what scientists tend to think. If scientists did not believe they could get less wrong in the future, they would not believe that science could progress.

    All scientists believe that we get less wrong now than in the past (or at least, what we got wrong in the past, we get less wrong today).

    Think about this for a second... If science has not progressed since Aristotle, how pathetic does that make modern science and scientists?

    They acknowledge that there is still much we don't know, we're probably on the wrong track, that hundreds of years from now what we know currently will be outdated, etcXtrix

    What do you mean "probably on the wrong track"?

    Are you aware of the empirical tracks that science at large is presently mapping?

    You're making an almost purely relativistic comparison. "Science today is not perfect, science yesterday was not perfect, therefore science does not progress, it will always be the same, and what we know now is just as wrong as when we read the portents from sheep guts".
  • Mikie
    6.2k
    You're not describing a modern scientific attitude or position though (science accepts that the jury is still out on "all there is"). Asking for some kind of grand definition for everything is not a scientifically coherent question.VagabondSpectre

    No one is asking for a "grand definition of everything." Nor have I said that -- not once.

    It's not a very definite worldview....VagabondSpectre

    It most certainly is, as I have repeatedly explained.

    You keep suggesting that modern scientists' "conception of being" hinges on the developmental history of science,VagabondSpectre

    I'm not suggesting this. Nor have I stated it -- not once.

    So my point is: let's look at the words and see if their history through the ages gives us any clues or illuminates our current, powerful (and dominant, at least among educated people) understanding of being. Maybe it does, maybe it doesn't. I personally think it does, and helps us become a little less dogmatic and guards against the pitfalls of "scientism" and, more importantly, a kind of nihilism that Nietzsche analyzed and warned us about. Why is this, in turn, important? I've already written enough, so I won't bore you further, but it turns out this has definite real-world consequences which we all are currently living inXtrix

    but what if someone creates a brand new theory of matter? In order to understand the cutting edge, do we actually need to examine the hilt or the pommel? In the case that modern models deviate entirely from models of old, we don't actually need the models of old to comprehend the new, but we absolutely need to examine the new in and of itself.VagabondSpectre

    I never mentioned anything about old or new models.

    Different explanatory theories arise and pass; sometimes they're based on older theories, sometimes a synthesis, sometimes completely novel.

    But to answer: No, one doesn't need to understand the history of something in order to practice it. Nor do they even, for that matter, have to understand or explain the theoretical basis for it. You don't have to be a baseball historian or understand the physics of swinging bats to play baseball. I suspect many researchers don't have much of an idea about the philosophical, historical, and theoretical underpinnings of their specific, technical activities either.

    That's not a problem really, but it's also no surprise that those who tend to make the biggest contributions are the ones who do bother with philosophy -- Einstein is a prime example, although there are many others.

    But all this is, again, a digression that really misses the point.

    Perhaps it's true we get less wrong now, but that's not what scientists tend to think
    — Xtrix

    Of course it's what scientists tend to think. If scientists did not believe they could get less wrong in the future, they would not believe that science could progress.
    VagabondSpectre

    But your notion that science "progresses" is itself a picture that isn't really justified. In some ways it does, in others it doesn't. But in any case, the best scientists are well aware that theories today will morph and adapt in the future -- that's just basic. It's pure hubris to assume otherwise.

    That being said, to say we get "less wrong now" than in the past is impossible to measure, so there's no sense talking about it. Were Humphry Davy, Faraday, and their contemporaries "less wrong than right" compared to our contemporaries today? Who knows. In fact it's almost certain there are far more hypotheses that aren't confirmed by the data in today's world simply by the sheer amount of what's being undertaken. But who cares? That's not how science is judged. The activity of trying to understand the world rationally continues, regardless.

    All scientists believe that we get less wrong now than in the past (or at least, what we got wrong in the past, we get less wrong today).VagabondSpectre

    No, they don't. In fact the statement is borderline incoherent. See above.

    Think about this for a second... If science has not progressed since Aristotle, how pathetic does that make modern science and scientists?VagabondSpectre

    Yes, if one thinks of the "progress" of science as akin to climbing a mountain or filling out a crossword puzzle -- as "accumulation" of some kind. True, that's how the history of science looked for nearly 300 years until Einstein, and I'm sure you'll find many who still think that way. But that doesn't mean we have to take it seriously.

    They acknowledge that there is still much we don't know, we're probably on the wrong track, that hundreds of years from now what we know currently will be outdated, etc
    — Xtrix

    What do you mean "probably on the wrong track"?
    VagabondSpectre

    Just what I said. To take one example, quantum mechanics and relativity will doubtlessly in the future be either brought together or re-interpreted somehow, or subsumed under a newer theory. And so on forever, really. Much of all of this has to do with the questions we ask, the problems we face as human beings -- and that in turn is dependent on our values, our goals, our interests, etc.

    Are you aware of the empirical tracks that science at large is presently mapping?VagabondSpectre

    Depends on what "empirical tracks" are, and what field you're talking about.

    You're making an almost purely relativistic comparison. "Science today is not perfect, science yesterday was not perfect, therefore science does not progress, it will always be the same, and what we know now is just as wrong as when we read the portents from sheep guts".VagabondSpectre

    Well needless to say I don't believe any of that, as you know. If you made even a slight effort to understand by taking a few moments to think, instead of reacting, you'd see that fairly easily. In fact your apparent emotional reaction and frustration with all of this is in itself interesting.
  • VagabondSpectre
    1.9k
    You're not describing a modern scientific attitude or position though (science accepts that the jury is still out on "all there is"). Asking for some kind of grand definition for everything is not a scientifically coherent question. — VagabondSpectre


    No one is asking for a "grand definition of everything." Nor have I said that -- not once.

    It's not a very definite worldview.... — VagabondSpectre


    It most certainly is, as I have repeatedly explained.
    Xtrix

    Here is where I get turned around. First you aver that scientists admit a god of nature as some kind of serious and relevant sentiment that can help us understand modern science (as if it is an operant world-view; as if it contextualizes the entirety of it)....

    To then contrast this directly with the sentiments of old (namely, that nature itself was the expression of some intelligent creator that imbued everything with order and purpose), makes the above interpretation harder to avoid. In so far as we have abandoned superstitious and ungrounded appeals and hypotheses such as those, then yes, we can understand modern science as differing from the kind of thing that Aristotle was engaged in. It's a kind of "actually check and let nature be the judge" attitude.

    But you actually are trying to say that modern science must be the same thing that the ancients were engaged in, because there is inquiry involved in both, and because there are some etymological relationships....

    I think I understand what you're trying to do: you are trying to shed light on the inherent epistemological limitations (the doubts) of modern science by showing how it is similar to previous and fallible phases of human inquiry. Philosophically you're right, but scientifically you're wrong; the scientific method is literally built around the inductive method, and has made the relationship between certitude and existing theories a core feature of what allows science to adapt.

    Aristotle really did want to explain everything; to put everything into a neat and discrete category; ordered and comprehensible. Modern science reserves this attitude for its secretive wet dreams. It's hubris. Instead it admits that it is woefully incomplete, and instead of judging itself by all-or-nothing standards, it uses experimental reliability (predictive power) as a guide. This is something that the ancients really had a hard time keeping faith with (they tended to accept whatever sounded the most persuasive, fallacy or no). Not having such strongly grounded fundamentals (see: modern physics vs ancient stories about existence and stuff, or ancient astrology vs modern astronomy, etc...), it was simply not possible for them to resist whatever best explanation they happened to have at the time. Science in its modern incarnation started with an admission of said uncertainty.

    But your notion that science "progresses" is itself a picture that isn't really justified. In some ways it does, in others it doesn't. But in any case, the best scientists are well aware that theories today will morph and adapt in the future -- that's just basic. It's pure hubris to assume otherwise.Xtrix

    I can basically defeat this sentiment merely by saying "computers". By what standard has modern science not progressed?

    In any case, the progression of science along the lines of utility, reliability, and predictive power cannot be denied. The entire thread seems to sniff in this direction though... That science isn't so great; that it's "just the same old _____".

    That being said, to say we get "less wrong now" than in the past is impossible to measure, so there's no sense talking about it. Were Humphry Davy, Faraday, and their contemporaries "less wrong than right" compared to our contemporaries today? Who knows. In fact it's almost certain there are far more hypotheses that aren't confirmed by the data in today's world simply by the sheer amount of what's being undertaken. But who cares? That's not how science is judged. The activity of trying to understand the world rationally continues, regardless.Xtrix

    I want to highlight the last sentence in this:

    "The activity of trying to understand the world rationally continues, regardless."...

    Remember, modern science is cardinally focused on understanding the world through empirical evidence and predictive power, not mere "rationality"; that's what Descartes did.

    No, they don't. In fact the statement is borderline incoherent. See above.Xtrix

    The statement is coherent; you're just rejecting or not comprehending it. Allow me to paraphrase and split it up:

    All scientists believe that we get less wrong now than in the past (or at least, what we got wrong in the past, we get less wrong today).

    Scientists believe that modern theories are generally more accurate and complete (less error prone) than the theories of their predecessors. At the very least, our ancient predecessors got many specific things wildly wrong for which we actually have reliable and accurate models (i.e., less wrong).

    In a nutshell, we have more accurate and precise predictive power.

    Yes, if one thinks of the "progress" of science as akin to climbing a mountain or filling out a crossword puzzle -- as "accumulation" of some kind. True, that's how the history of science looked for nearly 300 years until Einstein, and I'm sure you'll find many who still think that way. But that doesn't mean we have to take it seriously.Xtrix

    Einstein did not overturn Newton... Can't stress this enough... It's not like Einstein's theories and proofs suddenly changed the reproducible results of centuries of repeated physical experiments.

    These deeper realities in theoretical physics do have the potential to revolutionize our understanding of how the world works at the quantum level (and how things like spacetime and matter emerge), but they won't actually "overturn" previously established scientific models unless they give us greater predictive power, nor do they affect the utility of existing models should they be improved upon.

    Even if we can model how matter emerges from quantum particle waves, it's going to be useless with respect to anticipating the motion of large scale masses through space, and we will still, and perhaps forever, default to the Newtonian approach plus the tweaks offered by GR, which yield ridiculously and stupendously accurate and reliable results.
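
    To put a rough number on how small those GR-level tweaks are at familiar speeds, here's a toy back-of-the-envelope sketch in Python (my own illustration, not anything from the physics literature -- it only computes the special-relativistic velocity factor, ignores gravitational time dilation entirely, and the speeds are approximate):

        import math

        C = 299_792_458.0  # speed of light, m/s

        def lorentz_gamma(v):
            # Special-relativistic time-dilation factor for a speed v in m/s.
            return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

        # Approximate, illustrative speeds only.
        examples = {
            "car on a highway (~30 m/s)": 30.0,
            "satellite in low Earth orbit (~7.7 km/s)": 7.7e3,
            "Earth orbiting the Sun (~30 km/s)": 3.0e4,
        }

        for label, v in examples.items():
            gamma = lorentz_gamma(v)
            # gamma - 1 is the fractional deviation from the Newtonian picture.
            print(f"{label}: gamma - 1 is roughly {gamma - 1:.1e}")

    The deviations come out in the range of roughly 10^-15 to 10^-9, which is the sense in which the Newtonian picture stays "stupendously accurate" for large masses, with relativistic corrections mattering only where that level of precision is actually needed.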

    Just what I said. To take one example, quantum mechanics and relativity will doubtlessly in the future be either brought together or re-interpreted somehow, or subsumed under a newer theory. And so on forever, really. Much of all of this has to do with the questions we ask, the problems we face as human beings -- and that in turn is dependent on our values, our goals, our interests, etc.Xtrix

    You're looking at it backward actually. QM and GR are "in our face" phenomena that we cannot deny. The next breakthrough will not overturn them, it will encompass them. It will explain how GR and QM can both be true from some other observed (probably speculative at first) reality.

    Depends on what "empirical tracks" are, and what field you're talking aboutXtrix

    The tracks are myriad. If you want a quick way to look at the epistemological strength of a scientific field, look at the reproducibility of its experimental evidence, and the scope and accuracy of its predictions.

    Well needless to say I don't believe any of that, as you know. If you made even a slight effort to understand by taking a few moments to think, instead of reacting, you'd see that fairly easily. In fact your apparent emotional reaction and frustration with all of this is in itself interesting.Xtrix

    I've been sensing a bit of an attitude from you as well... Curious...

    Normally my posts start out pretty dryly, and I end up reciprocating... Curiouser...
  • Zophie
    176
    Hi. Excuse me if this is somewhat obvious but it may be worth remembering "science" isn’t a single entity to be analyzed using identical systems following identical rules. There may only be one "true" reality of everything, but our current scientific understanding necessitates the deployment of different paradigms for different areas of research. It’s possible this may have something to do with the potentially irreconcilable disagreement I’m seeing here. Apologies in advance if I'm saying nothing new or interesting.

    Postpositivism, which prioritizes predictive power, is a typically physicalist approach marrying the formal and physical sciences. Constructivism-interpretivism is a more lenient approach suiting the cognitive and social sciences. To a postpositivist, most hypothetical links from φυσις to modern science would be implausible because we can’t conduct a survey collecting testimonials of dead people, and that’s just too bad. (Lol.) To a constructivist-interpretivist, however, it’s possible to sufficiently ground a hypothesis by extracting common themes and standpoints in the literature. For φυσις, this may invoke the "natural elements" of Indo-European mythology as an effort to properly bookend an account and thereby make it robust enough to be considered scientific. But even if it’s given that mythology is early evidence of proto-science as I contend, the notion is still, clearly, highly tentative. I mention this because, judging from post histories, paradigms haven't been given much mention, though I personally think they bring a lot of clarifying power to any discussion. Hopefully that can be appreciated here to at least some degree.

    As for the question of φυσις being some kind of weird non-divine driving force of science, it may actually be a question of what one thinks science is supposed to do. If science tells us how, then φυσις is probably an antiquated and superstitious container of convenience which is probably no longer relevant. If science tells us why, though, then I’m afraid the spectre of φυσις is transformed into what are mysteriously now known as the "Laws of Nature" (not "Natural Laws"), which appear to serve as a kind of “known-unknown” foundation for coherent scientific explanation despite being.. somewhat ad hoc.
  • Mikie
    6.2k
    Here is where I get turned around. First you aver that scientists admit a god of nature as some kind of serious and relevant sentiment that can help us understand modern science (as if it is an operant world-view; as if it contextualizes the entirety of it)....VagabondSpectre

    I never stated anything about a "god of nature."

    But you actually are trying to say that modern science must be the same thing that the ancients were engaged in, because there is inquiry involved in both, and because there are some etymological relationships....VagabondSpectre

    No, because neither you nor I know what "modern science" is. We can't pinpoint when it begins. We can only speculate as to what makes it 'distinct' from any other rational inquiry. So far, its successes in technological advances and some kind of "method" have been offered. I don't find that very convincing.

    Yes, the Greeks were doing "science" in any meaningful sense of the term. They conducted experiments, they took careful observations, they theorized about how things worked, etc. Did they have laboratories, microscopes, telescopes, and particle accelerators? No. So obviously things have changed, and in that sense sure, it's very different. But is that what modern science is?

    It's just not so simple -- and who really cares, anyway? If Aristarchus wasn't doing "science," then all you've done is defined "science" as "something that happens now." That's fine. I just - again - don't find it convincing.

    I think I understand what you're trying to do: you are trying to shed light on the inherent epistemological limitations (the doubts) of modern science by showing how it is similar to previous and fallible phases of human inquiry.VagabondSpectre

    I don't see why one would have to shed light on this -- it's a truism. Of course science is fallible. The human mind is fallible-- it has its scope and its limits. I'm not trying to shed light on that -- on the contrary I, along with everyone else, take it for granted. Not very interesting.

    I've been quite clear, I believe, in what I'm trying to accomplish here. If it isn't clear to you, then just ask me -- so you don't have to guess. Not a problem. Because so far your track record is well below 50% accuracy.

    Science in its modern incarnation started with an admission of said uncertainty.VagabondSpectre

    When? With Descartes? Was Copernicus not a scientist then? Or, again, what of Aristarchus?

    I'm really not interested in trying to define science. Your attempts to do so have been all over the place, but have now apparently settled on "predictions" as the key feature. OK, in that case I'll repeat: the Greeks made predictions too. The most famous, as you know, was that Thales was able to predict an eclipse. Even if that's completely wrong, there's plenty of examples of predictions, of experiments, of observation, of theory formation and theory-testing, etc., among the ancients. Plenty of wrong views, too, and plenty of mistakes -- no doubt.

    I have no trouble with saying modern science is different in many respects from whatever the Greeks were doing. As I said before, it's undeniable that many things have changed. But when you look at what's going on, at its core, it seems like what we call "doing science" is actually something that's been with us (as human beings) for a long time indeed.

    What does this mean? Take mathematics, for example. The capacity for arithmetic is universal -- any child in any culture can learn it, if taught. But how many thousands of years did it simply lie dormant in the human mind? Or music, for that matter. Music is another universal, and yet it wasn't standardized in any meaningful way until around the time of Guido of Arezzo. Should we say music didn't exist prior to him? Or that "history" didn't exist prior to Herodotus? You could make that argument -- and many do -- but I don't see the motivation or point of doing so. It makes much more sense to say that these are human capacities, as is --- let's call it the "science-forming capacity." Much like language, it's been there since humans evolved in their modern form 200K years ago or so.

    Again, that's not denying that things have radically changed or even that we have progressed (if one accepts the standards of progress).

    But your notion that science "progresses" is itself a picture that isn't really justified. In some ways it does, in others it doesn't. But in any case, the best scientists are well aware that theories today will morph and adapt in the future -- that's just basic. It's pure hubris to assume otherwise.
    — Xtrix

    I can basically defeat this sentiment merely by saying "computers". By what standard has modern science not progressed?
    VagabondSpectre

    By the same standard that it's "progressed," I suppose. Which is to say: no agreed-upon standard. Are computers a progression? Are atomic bombs? (Especially if they wipe us out -- "progress"?) Maybe, maybe not. In the latter example, if a goal is to stay alive, then that's a very definite REGRESSION.

    To be less ambiguous: "progress" is a value judgment, and a vague one at that. Have things changed a great deal? Absolutely. They continue to change. Someone in Gutenberg's time may have given "moveable type" as an example of progress instead of "computers." Who knows. But this view of "upward" movement is similar to the view of science as "accumulation," as if we're "getting somewhere." I'm not denying that possibility, in fact I think in many respects we have, but this determination depends on our goals, purposes, and human interests.

    The entire thread seems to sniff in this direction though... That science isn't so greatVagabondSpectre

    I can't help if you take it this way. I've never given the slightest indication that science "isn't so great."

    Remember, modern science is cardinally focused on understanding the world through empirical evidence and predictive power, not mere "rationality"; that's what Descartes did.VagabondSpectre

    You say this, and yet a moment earlier you talked about "induction." Are logic and reason involved in "science" or not?

    I'm not using "rational" the way you're thinking -- apparently as something pertaining to "rationalism" (which is what textbooks of course say Descartes was).

    Of course science involves rational inquiry. That's a truism. Nothing complicated about that. The claim that "modern science is cardinally focused..." is so far totally unsupported. Says who? Thought experiments don't count? What about Brahe? Was his careful data collection not science? If it wasn't, then who needs "science" anyway?

    Stop trying to demarcate science. It's been tried over centuries by brighter minds than yours, and it's failed. Ignoring the literature on this and grasping for this or that "definition" is simply a waste of time. For every "key component" you mention, I can offer counterexamples. It's not because I'm a genius, it's because defining science by a "method" and thus trying to separate rational inquiry into "science" and "non-science" is fruitless.

    Yes, if one thinks of the "progress" of science as akin to climbing a mountain or filling out a crossword puzzle -- as "accumulation" of some kind. True, that's how the history of science looked for nearly 300 years until Einstein, and I'm sure you'll find many who still think that way. But that doesn't mean we have to take it seriously.
    — Xtrix

    Einstein did not overturn Newton... Can't stress this enough...
    VagabondSpectre

    There's no need to "stress it" because I neither said it nor believe it. You simply missed the point -- again.

    To repeat: the very fact that Newtonian physics turned out to be "wrong" not in terms of calculation but in the bigger picture led to a remarkable re-evaluation of the history of science. See David Hilbert, et al.

    Just what I said. To take one example, quantum mechanics and relativity will doubtlessly in the future be either brought together or re-interpreted somehow, or subsumed under a newer theory. And so on forever, really. Much of all of this has to do with the questions we ask, the problems we face as human beings -- and that in turn is dependent on our values, our goals, our interests, etc.
    — Xtrix

    You're looking at it backward actually. QM and GR are "in our face" phenomena that we cannot deny.
    VagabondSpectre

    That's a completely meaningless statement.

    Both are scientific theories. They're not "read off" from nature without any contribution of the thinking mind; there's nothing "backwards" about this.

    The next breakthrough will not overturn them, it will encompass them.VagabondSpectre

    Maybe. That's one possibility, as I mentioned above. They could also be completely overturned.

    I've been sensing a bit of an attitude from you as well... Curious...

    Normally my posts start out pretty dryly, and I end up reciprocating... Curiouser...
    VagabondSpectre

    Fair enough. If there's any frustration from me, it's only over the fact that this whole conversation is a digression that misses the point of this thread, but I'm (obviously) perfectly willing to have it, since I do mention "science" in the title.

    To be quite clear: I don't view "science" as completely separate from philosophy, and as I point out in the first post, it was initially called "natural philosophy." Newton and Galileo didn't consider themselves "scientists." Scientists, then and now, are trying to understand this "nature" (or universe), and so it's worth asking what "nature" means. That's the point here-- and I haven't really gotten started with that question yet. Tracing the history of the concept "nature" in the sense of "universe," or what's considered all that exists, sheds light on both how we see the world and how we define ourselves as human beings. Needless to say, this matters a great deal and has real consequences in the world around us -- and thus the fate of humanity, ultimately.

    That may sound grandiose, but think about it for a minute: does it matter what, say, political leaders of the world believe? Well if it influences their decision-making, then it certainly does. And those decisions matter; in fact you and I are living with them.


    As far as the philosophy of science: our observation and experimentation with nature -- i.e., dealing with it empirically -- is very important. I'm not disagreeing. Making predictions is very important. Falsifiability is important (as Popper pointed out). Advances in technology are important. Formulating sensible explanatory theories is important. Etc., etc. All are relevant aspects of a general endeavor to understand the world.

    It's true I reject any sense of a "method" that clearly divides "science" from "non-science." People have tried to prove that such a method exists, and many still believe some algorithm or set of rules accounts for science's success -- the inductive method, ability to be falsified, predictiveness, etc. But I remain underwhelmed by these attempts, as there are always examples that simply don't fit --- and I feel it's a pretty irrelevant question anyway. What good does it do?

    Regardless, people who identify as scientists, who go to school and study physics, chemistry, biology, geology, psychology, or medicine, do important work and operate, in their interchange with the world, within a certain set of beliefs and assumptions -- otherwise there would be no field and no science whatsoever. That set of beliefs, assumptions, and axioms is what interests me here, especially when it comes to ontology: what it means to "be," what it means to be "human," etc. This is all defined in a certain way: for example as matter, as energy, as natural law, as the "physical," as nature, as substance, as "objects," etc. etc. It's there I want to focus:

    It turns out that φῠ́σῐς (phusis) is the basis for "physical." So the idea of the physical world and the natural world are ultimately based on Greek and Latin concepts, respectively.

    So the question "What is 'nature'?" ends up leading to a more fundamental question: "What is the 'physical'?" and that ultimately resides in the etymology of φῠ́σῐς and, finally, in the origins of Western thought: Greek thought.
    Xtrix
  • Mikie
    6.2k
    Hi. Excuse me if this is somewhat obvious but it may be worth remembering "science" isn’t a single entity to be analyzed using identical systems following identical rules. There may only be one "true" reality of everything, but our current scientific understanding necessitates the deployment of different paradigms for different areas of research.Zophie

    That's a good point.

    It’s possible this may have something to do with the potentially irreconcilable disagreement I’m seeing here. Apologies in advance if I'm saying nothing new or interesting.Zophie

    Your point is a good one but I'm missing how that pertains to what we're discussing, which is at this point whether or not "science" is a distinct activity, defined by and owing its success to a special "inductive method." I maintain there is no such method, Vagabond is arguing in favor of one. There being vastly different fields in science with their own set of background premises, while true, doesn't shed much light on our disagreement.

    Postpositivism, which prioritizes predictive power, is a typically physicalist approach marrying the formal and physical sciences. Constructivism-interpretivism is a more lenient approach suiting the cognitive and social sciences. To a postpositivist, most hypothetical links from φυσις to modern science would be implausible because we can’t conduct a survey collecting testimonials of dead people, and that’s just too bad. (Lol.) To a constructivist-interpretivist, however, it’s possible to sufficiently ground a hypothesis by extracting common themes and standpoints in the literature. For φυσις, this may invoke the "natural elements" of Indo-European mythology as an effort to properly bookend an account and thereby make it robust enough to be considered scientific. But even if it’s given that mythology is early evidence of proto-science as I contend, the notion is still, clearly, highly tentative. I mention this because, judging from post histories, paradigms haven't been given much mention, though I personally think they bring a lot of clarifying power to any discussion. Hopefully that can be appreciated here to at least some degree.Zophie

    I'm failing to see the relevance. What "hypothetical links" do you mean? I'm mainly talking etymology, with the question of the meaning of being as a guide (a la Heidegger).

    I agree that paradigms are an important concept; I like Kuhn a lot too. But I'm not seeing the connection to this thread and the analysis of the meaning of "nature" and phusis.

    As for the question of φυσις being some kind of weird non-divine driving force of science, it may actually be a question of what one thinks science is supposed to do.Zophie

    Who said anything about a "driving force"? Not I. Also, why the characterization of phusis as a "weird non-divine force"? There's nothing weird about it. Nor is it weirder than a "void" or "force field" or "substance" or "God" (in Spinoza's sense) for that matter.

    If science tells us how, then φυσις is probably an antiquated and superstitious container of convenience which is probably no longer relevant. If science tells us why, though, then I’m afraid the spectre of φυσις is transformed into what are mysteriously now known as the "Laws of Nature" (not "Natural Laws"), which appear to serve as a kind of “known-unknown” foundation for coherent scientific explanation despite being.. somewhat ad hoc.Zophie

    There's nothing superstitious about it.

    As for being antiquated, yes it is -- since it was the word for the sense of being in Greek, it's from antiquity and thus antiquated. But as with most things Greek, this has dominated Western thought ever since.
  • VagabondSpectre
    1.9k
    I never stated anything about a "god of nature."Xtrix

    I don't mean a god over nature, I mean god from nature; the god of nature... It's what you said in your opening post so I'm not sure why you're not interpreting this correctly.

    The first paragraph of your OP paints the picture:

    Most of today's scientists will claim to assume "naturalism" in their endeavors. Someone famous once said that "I believe in God, I just spell it n-a-t-u-r-e." I've heard this a lot from the likes of Sagan, Dennett, Dawkins, Gould, and many others -- especially when contrasting their views with religious views or in reaction to claims that science is "just another religion."

    You stated this and then launched headlong into a historical analysis, under the impression that your findings with respect to ancient naturalism can usefully color our understanding of modern science:

    It's worth remembering that science was simply "natural philosophy" in Descartes' day, Newton's day and Kant's day. This framework and its interpretation of the empirical world dominates every other understanding, in today's world, including the Christian account (or any other religious perspective, really). Therefore it's important to ask: what was (and is) this philosophy of nature? What is the basis of its interpretation of all that we can know through our senses and our reason?Xtrix

    No, because neither you nor I know what "modern science" is. We can't pinpoint when it begins. We can only speculate as to what makes it 'distinct' from any other rational inquiry. So far, its successes in technological advances and some kind of "method" have been offered. I don't find that very convincing.Xtrix

    I have no trouble with saying modern science is different in many respects from whatever the Greeks were doing. As I said before, it's undeniable that many things have changed. But when you look at what's going on, at its core, it seems like what we call "doing science" is actually something that's been with us (as human beings) for a long time indeed.Xtrix

    I'm having a hard time comprehending which of the above positions you actually occupy.

    Do we not know what modern science is, and therefore cannot say how it differs from what ancient Greeks were doing? Or are there obvious differences between what ancient Greeks were doing and modern science? If so, what are those obvious differences? (hint: predictive power and a focus on experimental methodology).

    If you want to try and get at *the very core of human inquiry and knowledge*, then you have no reason to refer to the problem of induction as irrelevant. The thing we and the ancients share is that we both lived or live in worlds that appear to have causal consistency. We observe things, use those observations to formulate an idea or an action, and then we observe the effects of those ideas and actions. In general, we want our actions to create more desirable observations. The only real signal we have to refine our ideas and actions is the observable results of those actions. The ancients kinda knew this, but they did not seem to realize that instead of focusing on how elegant an idea sounds in and of itself (or how persuasive it may be to the rational mind), we should be forced to reject it if experimental evidence controverts it, and beyond this, that we can never actually test the validity of such speculative ideas unless they can actually generate predictions that can be tested.

    With these last two sentences, we have a robust definition of the scope of science (being concerned with observable phenomena and falsifiable models) that does depart from the more full-blown realm of philosophical inquiry that the ancients were engaged in. It's a drastic departure from the focus of those ontic schools that instead presupposed some anthropically biased/pleasing framework. Modern science doesn't even require the assumption that nature is consistent; inductively it appears to have consistency, and if one day causal consistency fails, so be it. We hope, though, that fundamental laws will remain consistent, if only so that our scientific models remain useful (and so that our world doesn't fall apart).

    It's just not so simple -- and who really cares, anyway?Xtrix

    I thought you wanted to comment on modern science via commenting on ancient science. Am I wrong?

    You're equivocating between the epistemological foundations of modern science (it's the inductive method), and other schools which are less strict.


    "Remember, modern science is cardinally focused on understanding the world through empirical evidence and predictive power, not mere "rationality"; that's what Descartes did." -Vagabond


    You say this, and yet a moment earlier you talked about "induction." Are logic and reason involved in "science" or not?
    Xtrix

    Gaining knowledge using predictive power as a confidence signal IS induction. Inductive arguments are inherently statistical (they are either strong or weak); we gather evidence, and likelihoods and reliabilities (patterns in observations) give us insight about how to make reliable predictions. Fundamentally, all knowledge is gathered this way, but that's another discussion entirely.

    So when I say "science relies on the inductive method, not mere rationality", I'm actually pointing to the specific form of "logic" (induction) that scientific proofs require as their literal standard for truth and knowledge.
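
    If it helps, here's a deliberately crude Python sketch of that "confidence signal" idea (the numbers and the simple Beta-Binomial scoring are my own toy assumptions, not a claim about how any working scientist actually grades theories):

        def posterior_reliability(successes, failures, prior_a=1.0, prior_b=1.0):
            # Mean of a Beta(prior_a + successes, prior_b + failures) posterior:
            # a crude score for "how reliable has this model's track record been?"
            a = prior_a + successes
            b = prior_b + failures
            return a / (a + b)

        # Hypothetical model whose predictions were checked 100 times and held up 98 times.
        print(posterior_reliability(successes=98, failures=2))  # ~0.97

        # The same success rate over only 5 checks is weaker inductive evidence.
        print(posterior_reliability(successes=5, failures=0))   # ~0.86

    More confirmed predictions push the score up without ever reaching certainty, which is the "strong vs weak" grading of inductive arguments in statistical dress.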
  • VagabondSpectre
    1.9k
    To repeat: the very fact that Newtonian physics turned out to be "wrong" not in terms of calculation but in the bigger picture led to a remarkable re-evaluation of the history of science. See David Hilbert, et al.Xtrix

    What was the picture being described by the laws of motion?

    There is not one (they're generalized formulas)... Only speculations we derive from them at our own epistemological risk...

    The kind of materialistic assumptions that Newton's laws ostensibly imply were a whole different brand of claim. Newton said: I can predict the movement of this thing through space over time; and he could, with wonderful accuracy (the fact that the laws of motion do so well and are themselves so elegant is an interesting subject for discussion because it generates claims of intelligent design, but they aren't scientific claims (that is to say, they cannot be falsified)).

    Einstein came along and said "if we view this space thing and this time thing as this other thing, then we can increase the accuracy of our predictions about our future observations of 'things'" (I'm using scare quotes because these fundamental discoveries were and are still valid only up to their testability and accuracy; they do not inherently contain claims about necessarily deeply hidden truths or ontological assumptions).

    Quantum mechanics makes this point clear; we inductively gather that GR and the Newtonian-scale models are actually emergent phenomena from a very complex, vast, and fast-moving multiverse of strange tiny particle-waves. Given this stunningly evident truth, we might react and say "well I guess all the other stuff is just bullshit", and you would be right only in so far as people have drawn inappropriate assumptions from scientific models and knowledge in the first place.

    Scientific models are inherently a heuristic or stochastic approach to knowledge; they are not guaranteed to be optimal in terms of absolute truthiness in description (far from it, in fact); instead they only promise predictive power. In this sense (and considering QM), models describing phenomena above the atomic scale are necessarily simplifications. They're not actual truth, they're only reliable models.

    So when Newtonian physics turned out to be "wrong", what you should actually be saying is that we found a more accurate/reliable/robust model which encompasses the Newtonian model. The scientific truthiness of the Newtonian laws of motion doesn't actually change with subsequent discoveries; they're still just as reliable as they were before (and often still used outside the special circumstances that demand GR-level precision).

    Once we realize that the truth metric of science is not the elegance or validity of "why" like explanations, but instead high precision and accuracy in experimental predictive power, it re-contextualizes the whole shebang.
  • VagabondSpectre
    1.9k
    The claim that "modern science is cardinally focused..." is so far totally unsupported. Says who?Xtrix

    Interesting question, but appealing to authority is not scientifically sound.

    We would have to do a random sampling of active or historical scientific inquiries, and then do quantitative and statistical analysis to determine whether or not they were heavier on evidence gathering and predictive modeling, or heavier on making unfalsifiable hypotheses.

    Once we have gathered and preprocessed the data, we could make a null hypothesis like "we expect to see an even distribution of the predictive model approach vs the untested hypothesis generating approach". Then when we actually crunch the numbers, assuming our sample is sufficiently large, if we see large deviation in one direction or the other, we then have a potentially significant signal that tells which direction to lean regarding the claim "modern science is cardinally focused on understanding the world through empirical evidence and predictive power, not mere "rationality"; (what Descartes did)".

    You might want to say "correlation is not causation" and that would indeed be very astute. We could take our analysis to completion by gathering additional data on factors which we think might impact the cardinal focus of individual scientific inquiries. Using something like multivariate regression analysis, we could potentially generate a model of the relationships between circumstantial factors and the cardinal focus of scientific inquiry in general. We could then use these relationships to create a statistical model that tells us what the most likely cardinal focus of a given scientific inquiry is if we are given the specific factors that we checked in our analysis. If our model generates predictions with very high or useful precision and accuracy then we call it robust.

    Even though we're obviously not addressing the critical and deeply hidden truths that all philosophers yearn to masticate, nor are we necessarily offering sensible explanations of why scientific inquiry varies from case to case: we're successfully and humbly generating a model that can reliably give us predictions of sufficient accuracy. Nothing more, nothing less.
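
    For concreteness, a toy version of just the first step (testing the observed split against the even-distribution null, before any regression) could look like the Python sketch below; the sample size and counts are entirely made up, and the hard part -- actually coding each sampled inquiry as "predictive-model driven" or not -- is exactly what the sketch skips:

        from math import comb

        def two_sided_binomial_p(k, n, p=0.5):
            # Exact two-sided binomial test: add up the probability of every outcome
            # that is no more likely than the observed count k under the null p.
            probs = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
            observed = probs[k]
            return sum(pr for pr in probs if pr <= observed + 1e-12)

        sample_size = 200        # hypothetical number of sampled inquiries
        predictive_count = 132   # hypothetical count coded as predictive-model driven

        print(predictive_count / sample_size)                       # observed fraction: 0.66
        print(two_sided_binomial_p(predictive_count, sample_size))  # p-value against the even-split null

    A sufficiently small p-value would count against the even-distribution null; the multivariate regression step described above would then go hunting for the circumstantial factors behind the lean.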
  • VagabondSpectre
    1.9k
    Stop trying to demarcate scienceXtrix

    Stop trying to couple it with non-science.
  • VagabondSpectre
    1.9k
    That's a completely meaningless statement.

    Both are scientific theories. They're not "read off" from nature without any contribution of the thinking mind; there's nothing "backwards" about this.
    Xtrix

    The experimental evidence is an in-our-face phenomenon... The experimental evidence is what forced people to reluctantly accept GR, and so too the story goes with QM. Our thinking minds tend to want to reject these things as spooky and unintuitive nonsense, and it is only the experimental evidence that manages to persuade us in the end.
  • Mikie
    6.2k
    I never stated anything about a "god of nature."
    — Xtrix

    I don't mean a god over nature, I mean god from nature; the god of nature... It's what you said in your opening post so I'm not sure why you're not interpreting this correctly.
    VagabondSpectre

    From nature, "of nature"? I'm not being deliberately contrarian here, I just really don't know what you mean. My only point in mentioning the quote about "I spell God 'nature'" was to emphasize the point that many scientists assume (reasonably) a kind of naturalism when dealing with the world. That wasn't meant to imply a kind of "God of nature." But I take your point, perhaps it was ambiguous.

    It's worth remembering that science was simply "natural philosophy" in Descartes' day, Newton's day and Kant's day. This framework and its interpretation of the empirical world dominates every other understanding, in today's world, including the Christian account (or any other religious perspective, really). Therefore it's important to ask: what was (and is) this philosophy of nature? What is the basis of its interpretation of all that we can know through our senses and our reason?
    — Xtrix

    No, because neither you nor I know what "modern science" is. We can't pinpoint when it begins. We can only speculate as to what makes it 'distinct' from any other rational inquiry. So far, its successes in technological advances and some kind of "method" has been offered. I don't find that very convincing.
    — Xtrix

    I have no trouble with saying modern science is different in many respects from whatever the Greeks were doing. As I said before, it's undeniable that many things have changed. But when you look at what's going on, at its core, it seems like what we call "doing science" is actually something that's been with us (as human beings) for a long time indeed.
    — Xtrix

    I'm having a hard time comprehending which of the above positions you actually occupy.
    VagabondSpectre

    I don't see any conflict.

    1) What is now called science was once called "natural philosophy." Nothing controversial about that.

    2) When "science" (modern science) really emerges is quite fuzzy, and distinguishing it from other rational inquiries by appeals to a special "method" isn't convincing to me.

    3) However we define "modern science" -- perhaps whatever academic research, experiments, published articles, etc. is happening -- it seems that this activity is in many ways different from Greek "science" in terms not of its core (inquiry) but of technology (e.g., microscopes) and sheer scale (many more people engaged in this activity consciously and cooperatively).

    So I occupy one position really.

    Do we not know what modern science is, and therefore cannot say how it differs from what ancient Greeks were doing?VagabondSpectre

    Yes.

    Or are there obvious differences between what ancient Greeks were doing and modern science?VagabondSpectre

    At the periphery there are differences, sure (microscopes, telescopes, computers, a division of labor in research, and so on). See above.

    But as trying to understand the world using reason, experiment, observation (empirical data), etc. -- no, there's no difference in my view between humans doing this now and humans doing it then. All it takes to understand this is to see that these people weren't sub-human barbarians or "primitive" at all, and to know a little history. (I keep bringing up Aristarchus, for example, for a reason: because you continue to avoid that point; was he "doing" science or not?)

    If so, what are those obvious differences? (hint: predictive power and a focus on experimental methodology).VagabondSpectre

    Not all inquiry is predictive, and sometimes there's little actual experimentation. So while this is a fine rule of thumb, perhaps, it's hardly exhaustive or definitive. It's also way too simple, as I've pointed out before, and kind of reeks of hubris.

    But even if we accept your definition, I've given examples of the Greeks (and Muslims, and Persians, and Babylonians, and Christians, etc) meeting these criteria. Yet you reject that as science because they're "primitive, superstitious" people. Can't have it both ways.

    Either the Greeks were doing science, or they weren't doing science and then neither are we (in which case we need another definition in order to exclude the Greeks).

    If you want to try and get at *the very core of human inquiry and knowledge*, then you have no reason to refer to the problem of induction as irrelevant. The thing we and the ancients share is that we both lived or live in worlds that appear to have causal consistency. We observe things, use those observations to formulate an idea or an action, and then we observe the effects of those ideas and actions. In general, we want our actions to create more desirable observations. The only real signal we have to refine our ideas and actions is the observable results of those actions. The ancients kinda knew this, but they did not seem to realize that instead of focusing on how elegant an idea sounds in and of itself (or how persuasive it may be to the rational mind), we should be forced to reject it if experimental evidence controverts it, and beyond this, that we can never actually test the validity of such speculative ideas unless they can actually generate predictions that can be tested.VagabondSpectre

    Yes, causality is important.

    Your characterization of the "ancients" is simply sophomoric, I'm afraid. And you've repeatedly avoided clear examples that outright refute this caricature.

    With these last two sentences, we have a robust definition of the scope of science (being concerned with observable phenomena and falsifiable models) that does depart from the more full-blown realm of philosophical inquiry that the ancients were engaged in. It's a drastic departure from the focus of those ontic schools that instead presupposed some anthropically biased/pleasing framework.VagabondSpectre

    Again, it's not so robust. But leaving that aside, what "ontic schools" are you talking about? And please don't give me a superficial analysis like "Thales believed the world was made of water" or something like that. I'm hoping you're more familiar with the presocratics than that.

    It's just not so simple -- and who really cares, anyway?
    — Xtrix

    I thought you wanted to comment on modern science via commenting on ancient science. Am I wrong?
    VagabondSpectre

    Yes.

    You're equivocating between the epistemological foundations of modern science (it's the inductive method), and other schools which are less strict.VagabondSpectre

    So you're in the camp of still believing in some special "inductive method." That's fine. I'm very familiar with those in your camp -- there have been plenty since Bacon. But there's no reason to take it so seriously, especially as there is plenty that goes on outside of such a "method" thus making it fairly meaningless (in my view).

    I've given plenty of examples at this point, which you continue to ignore, so I'll take that to mean you have some attachment to belief in such a dogma about a special "method" and will gladly let you go on holding it. Again, this wasn't the purpose of this thread anyway.

    I suggest Newton's Apple and Other Myths about Science for a longer, more thorough disquisition about this from a historian of science.

    Gaining knowledge using predictive power as a confidence signal IS induction.VagabondSpectre

    That's not inductive reasoning. Inductive logic has little to do with "predictive power." You're just confused here, I'm afraid.

    But regardless, I'm glad you admit that reason and logic are used in science.

    So when I say "science relies on the inductive method, not mere rationality", I'm actually pointing to the specific form of "logic" (induction) that scientific proofs require as their literal standard for truth and knowledge.VagabondSpectre

    No, you contrasted "rationality" by conflating it with "rationalism" (hence why you mentioned Descartes), which is completely wrong. Inductive reasoning already assumes reason (it's right there in the word), and hence rationality -- ratio is Latin for "reason."

    So yes, dealing with the world "rationally" is a human activity, part of an attempt to understand things using reason. That's philosophy, that's science, that's everything in between when we're not completely under the direction of emotion, whim, instinct, superstition, etc. If you want to go on believing that only the parts of this activity that check off a DSM-like list should be considered "science," all evidence to the contrary, you're welcome to.
  • Mikie
    6.2k
    To repeat: the very fact that Newtonian physics turned out to be "wrong" not in terms of calculation but in the bigger picture led to a remarkable re-evaluation of the history of science. See David Hilbert, et al.
    — Xtrix

    What was the picture being described by the laws of motion?
    VagabondSpectre

    Not one of a 4-dimensional spacetime or of non-Euclidean geometry. Doesn't really make it "wrong," I suppose, but less explanatory than relativity.

    So when Newtonian physics turned out to be "wrong", what you should actually be saying is that we found a more accurate/reliable/robust model which encompasses the Newtonian model.VagabondSpectre

    Correct. See above.

    The claim that "modern science is cardinally focused..." is so far totally unsupported. Says who?
    — Xtrix

    Interesting question, but appealing to authority is not scientifically sound.
    VagabondSpectre

    Sure. But I'm not appealing to an authority -- I'm asking you to, so that he or she will provide evidence. If you can, I'm all ears. But so far you haven't.

    We would have to do a random sampling of active or historical scientific inquiries, and then do quantitative and statistical analysis to determine whether or not they were heavier on evidence gathering and predictive modeling, or heavier on making unfalsifiable hypotheses.

    Once we have gathered and preprocessed the data, we could make a null hypothesis like "we expect to see an even distribution of the predictive model approach vs the untested hypothesis generating approach". Then when we actually crunch the numbers, assuming our sample is sufficiently large, if we see large deviation in one direction or the other, we then have a potentially significant signal that tells which direction to lean regarding the claim "modern science is cardinally focused on understanding the world through empirical evidence and predictive power, not mere "rationality"; (what Descartes did)".

    You might want to say "correlation is not causation" and that would indeed be very astute. We could take our analysis to completion by gathering additional data on factors which we think might impact the cardinal focus of individual scientific inquiries. Using something like multivariate regression analysis, we could potentially generate a model of the relationships between circumstantial factors and the cardinal focus of scientific inquiry in general. We could then use these relationships to create a statistical model that tells us what the most likely cardinal focus of a given scientific inquiry is if we are given the specific factors that we checked in our analysis. If our model generates predictions with very high or useful precision and accuracy then we call it robust.
    VagabondSpectre

    Let me know when you conduct this experiment. I wish you the best of luck, but I won't hold my breath. Personally I think it's a waste of time. But in any case, the point stands: there's no evidence for your claim. So why say it? That's not scientifically sound either.

    Stop trying to demarcate science
    — Xtrix

    Stop trying to couple it with non-science.
    VagabondSpectre

    I haven't once said anything remotely like that, because before we can "couple" non-science with "science," we have to know what "science" is. No one can offer a definition that shows Aristarchus wasn't doing science but Galileo was, for example, so who cares?

    You, on the other hand, have repeatedly tried to demarcate science, ignoring evidence that doesn't fit. Also not scientifically sound. I can make guesses as to why this is, psychologically, but otherwise it's not very interesting to me.

    That's a completely meaningless statement.

    Both are scientific theories. They're not "read off" from nature without any contribution of the thinking mind; there's nothing "backwards" about this.
    — Xtrix

    The experimental evidence is an in-our-face phenomenon...
    VagabondSpectre

    That's not what you said. You said:

    You're looking at it backward actually. QM and GR are "in our face" phenomena that we cannot deny.VagabondSpectre

    So quantum mechanics and general relativity are "experimental evidence" now? That's completely meaningless as well.

    Our thinking minds tend to want to reject these things as spooky and unintuitive nonsense, and it is only the experimental evidence that manages to persuade us in the end.VagabondSpectre

    "Us" being human beings...also with thinking minds.
  • jacksonsprat22
    99
    It turns out that φῠ́σῐς (phusis) is the basis for "physical." So the idea of the physical world and the natural world are ultimately based on Greek and Latin concepts, respectively.Xtrix

    Phusis means growth. So nature is a process of growth--as well as decay.
  • Mikie
    6.2k


    You're exactly right: phusis is much more a matter of "growth" -- as "blooming" for example -- in ancient Greece.

    I don't agree with the second part of your statement, however.
  • jacksonsprat22
    99


    Everything living decays and eventually dies. What do you disagree with?
  • Mikie
    6.2k


    Yes, that's true, but that's not quite the sense that phusis means. It doesn't necessarily mean a literal "development" of a plant, say. It's more the blooming, the emergence, of a thing.
  • jacksonsprat22
    99


    I don't think we are disagreeing. But, again, for Aristotle it does literally mean the growth of a plant.