• onomatomanic
    29
    1. Am I correct about what I said about Newton? Had his measurements for mass and distance been more precise (had more decimal places) than what was available to him, he would've realized that the formula was wrong.TheMadFool
    Unlikely, I'd say.

    What one learns in school about the Scientific Method is that when a new practical result turns out to contradict the old theoretical system, what scientists do is throw away the old system and replace it with a new one.

    What happens in the real world is a lot messier, because there are always a bunch of possible reasons for such discrepancies. Maybe the result was a fluke. Maybe there was a systematic error in how it was obtained. Maybe it doesn't show us a single effect, but how various effects interact, and the old theory works fine for the primary one but doesn't apply to each of the secondary ones, or one of the theories that do apply to the secondary ones is the one that's dodgy, or some of those other theories don't even exist yet because this is the first time this effect has shown up. Or, or, or.

    For an illustration, imagine aliens living on our Moon using a high-precision optical telescope to observe a cannon firing on Earth, and noticing that the cannonball's trajectory doesn't quite match Newtonian predictions. Do they need to invent Relativity? A far likelier explanation is that they've not properly accounted for atmospheric effects like drag, given that their Lunar environment doesn't have much of an atmosphere.

    For an example, have a look at Pioneer anomaly @ wikipedia.

    So that's one good reason not to give up on a theory at the first sign of trouble. Another one is that until there's a new theory, you use the old one, whether or not you know it to be flawed. In the traditional interpretation, in which theories can be true or false, that's a bit distasteful - but in the modern interpretation, in which models can only be better or worse approximations, there's nothing wrong with it.

    With all that in mind, what would Newton have done with those high-precision measurements? It's not like he was in a position to go ahead and come up with Relativity himself: None of the theoretical groundwork that Einstein built on was in place at the time, not least because the bulk of it was ultimately built on Newtonian foundations in turn. Realistically, it would have made little difference, other than to make him suspect that some other effect, like the atmospheric drag in my illustration or the thermal recoil in the Pioneer example, comes into play at some point.

    2. Why can't the output of a formula be more precise than the input?TheMadFool
    Did you not like my earlier explanation?
    The general proof again needs statistical methods, no doubt. For the specific case of a multiplication like F = ma, though, just think of the inputs as the length and width of a rectangle, and the output as its area. If the length is known perfectly, and the width has an uncertainty of 10%, say, then the area will have an uncertainty of 10% as well. Vice versa, if the length has the 10% uncertainty, and the width is known perfectly, same result. So when both the length and the width have a 10% uncertainty, it should be clear that the area now has an uncertainty of more than 10%.onomatomanic
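    The rectangle argument can be sketched numerically. This is a minimal illustration using worst-case interval bounds on a product; the 2-by-3 rectangle and the `product_bounds` helper are made up for the example, not taken from any real measurement.

    ```python
    # Propagate a 10% uncertainty through a product (area = length * width)
    # using simple worst-case interval bounds. All values are illustrative.

    def product_bounds(length, width, rel_err_l, rel_err_w):
        """Return the min and max possible area given relative errors."""
        lo = length * (1 - rel_err_l) * width * (1 - rel_err_w)
        hi = length * (1 + rel_err_l) * width * (1 + rel_err_w)
        return lo, hi

    area = 2.0 * 3.0                                # nominal area, 6.0
    lo, hi = product_bounds(2.0, 3.0, 0.10, 0.0)    # only the length uncertain
    print((area - lo) / area)                       # 10% error, as in the text

    lo, hi = product_bounds(2.0, 3.0, 0.10, 0.10)   # both sides uncertain
    print((hi - area) / area)                       # 21% error: more than 10%
    ```

    With one side uncertain the area inherits exactly that 10%; with both sides uncertain the worst case is 1.1 * 1.1 - 1 = 21%, which is the "more than 10%" claim made above.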

    What is of concern to me is why an entirely new model needs to be built from scratch simply to explain a more precise measurement if that is what's actually going on?TheMadFool
    Part of the problem may be that you're thinking in terms of individual measurements. Think in terms of datasets instead:

    [Image: two datasets - a low-precision one fitted by a straight blue line, a high-precision one fitted by a curved green line]

    The upper dataset is low-precision, and can be "explained" as the blue line, which is straight. The lower dataset is high-precision, and must be explained as the green line, which is curved. The old model was quite good, in the sense that it predicts parameters (offset and slope) for the straight line that put it in the right place. But straight lines is all it can do, so it's not good enough for the higher-precision data. The new model is better, in the sense that it can do what the old model can do, plus predicting curvature parameters. Still, the old model remains better in the sense that it's less cumbersome to work with, so it makes sense to keep using it whenever either the line doesn't curve or the needed precision isn't high. (Hm, that actually worked out even nicer than I anticipated!)
  • TheMadFool
    13.8k


    Thanks. Reality is hardly ever cooperative enough to fit neatly into our equations. There are always some wrinkles that we just have to ignore. Nevertheless, an approximation - something - is better than nothing.

    I'd like you to go over the following:

    Take the Parker Solar Probe. Its speed is 111 km/s.

    1. Newtonian velocity addition: u = u' + v

    If two Parker Solar probes were travelling towards each other, their relative velocity, R1 = 111 + 111 = 222 km/s

    2. Relativistic velocity addition: u = (u' + v) / (1 + u'v/c^2)

    Plugging in the numbers, their relative velocity, R2 = 221.9999696082 km/s
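    A quick way to check the two numbers is to code both addition rules. One detail worth flagging: the quoted value 221.9999696082 is reproduced exactly only if the speed of light is taken as the rounded c = 300,000 km/s, so that is the assumption used here.

    ```python
    # Newtonian vs relativistic velocity addition for two probes
    # approaching each other at 111 km/s each. Uses the rounded value
    # c = 300,000 km/s, which reproduces the digits quoted above.

    C = 3.0e5  # speed of light in km/s (rounded)

    def add_newton(u, v):
        return u + v

    def add_einstein(u, v):
        return (u + v) / (1 + u * v / C**2)

    r1 = add_newton(111.0, 111.0)    # 222.0 km/s
    r2 = add_einstein(111.0, 111.0)  # ~221.9999696082 km/s
    print(r1 - r2)                   # ~3e-5 km/s, i.e. a gap of ~30 mm/s
    ```

    The two rules differ by roughly 30 mm/s at these speeds, which is the "ever so minute difference" discussed in the salient points below.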

    Salient points

    (i) The relative velocity calculated in a Newtonian way and that calculated in a relativistic way differ but we could and do say that the ever so minute difference is negligible. That's the reason why Newton is still in the game in this scientific epoch of Einsteinian relativity. I'm sure you'll agree.

    (ii) If significant digits matter, as you say they do, R2 should be rounded to 222 km/s (dropping the "false" precision of 0.9999696082). If we do that, relativistic velocity addition becomes, in a certain sense, meaningless. That, to me, doesn't add up. After all, Einstein's theory completely rests on that additional precision represented by 0.9999696082.

    Conclusion

    Your claim that the output of a physics formula can't be more precise than the inputs doesn't seem to hold water. As seen above, the precision in the output, higher though it may be compared to the inputs, makes a huge difference, requiring an entirely different model/theory.
  • onomatomanic
    29

    Okay, I think I see now what you're grappling with. The point is this one:

    A) Low-precision version of the experiment

    Data
    • v1 ~ 111.110 km/s (speed of the first probe, as measured by a stationary observer)
    • v2 ~ 111.113 km/s (speed of the second probe, ditto)
    • v12 ~ 222.222 km/s (speed of the first probe, as measured by the second probe)
    • v21 ~ 222.219 km/s (ditto, vice versa)

    Theory
    • vo = v1+v2 ~ 222.223 km/s (old model)
    • vn = (v1+v2) / (1 + v1v2/c^2) ~ 222.223 km/s (new model)

    The measurement tools used in this version are precise to a few m/s, which shows up as noise at the level of the 6th sigfig. Using more sigfigs in the computations would be pointless and misleading. The measured values and those derived from the old and new models are all close enough to each other to be considered identical. We've simply confirmed both models, lacking the power to discriminate between them.

    B) High-precision version of the experiment

    Data
    • v1 ~ 111.111114 km/s
    • v2 ~ 111.111112 km/s
    • v12 ~ 222.222198 km/s
    • v21 ~ 222.222194 km/s

    Theory
    • vo ~ 222.222226 km/s
    • vn ~ 222.222196 km/s

    Now we're using tools precise to a few mm/s, and so increase our working precision to 9 sigfigs. This extra precision is what allows us to say that there is a non-negligible difference (~30 mm/s) between the predictions made by the old and new models, and to meaningfully compare the experimental data with either one. The data disagrees with the old and agrees with the new model, which is strong confirmation of the latter.
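    The contrast between version A and version B can be sketched by computing the same two model predictions and rounding them to each toolset's working precision. This uses the post's illustrative speeds and the standard c = 299792.458 km/s; the function names are just labels for this sketch.

    ```python
    # The same two model predictions at the two working precisions.
    # Speeds are the illustrative values from the post, in km/s.

    C = 299792.458  # speed of light in km/s

    def v_old(v1, v2):
        return v1 + v2                           # Newtonian sum

    def v_new(v1, v2):
        return (v1 + v2) / (1 + v1 * v2 / C**2)  # relativistic sum

    # A) low precision: tools good to a few m/s, so 6 sigfigs (3 decimals)
    print(round(v_old(111.110, 111.113), 3))     # 222.223
    print(round(v_new(111.110, 111.113), 3))     # 222.223 - indistinguishable

    # B) high precision: tools good to a few mm/s, 9 sigfigs (6 decimals)
    diff = v_old(111.111114, 111.111112) - v_new(111.111114, 111.111112)
    print(diff * 1e6)                            # ~30 mm/s gap between models
    ```

    At 6 sigfigs the two models round to the same prediction, so the low-precision experiment cannot discriminate between them; at mm/s precision the ~30 mm/s gap becomes visible, exactly as described above.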

    If you're still not quite comfortable with sigfigs, remember that they're merely a shorthand for how much error there is in a value. Maybe the readout of the low-precision tool uses 9 figures, and gave us v1 as "111,109.876 m/s". There's nothing wrong with reporting that as "111.109876 km/s, with a margin of error of 3 m/s", say. It's just more verbose and "not the done thing" in this context.

    Happy? :)
  • TheMadFool
    13.8k
    I don't think it's got anything to do with experimental (read instrumental) precision. The difference between Newton and Einstein, their theories to be "precise", manifests as differences in the precision of the outputs of the respective formulae of Newtonian velocity addition and relativistic velocity addition. You'd miss it completely if you maintain that significant digits preclude higher precision in the output than in the inputs.

    I wonder what Newton and Einstein have to do with happiness, my happiness to be precise. Curious but definitely worth exploring. Thanks.
  • onomatomanic
    29
    The difference between Newton and Einstein, their theories to be "precise", manifests as differences in the precision of the outputs of the respective formulae of Newtonian velocity addition and relativistic velocity addition.TheMadFool
    Agreed, but with reservations. We can "parametrise" the speed summation equation like this in general:

    v = gamma * (v1+v2)

    According to Newton, gamma = 1. According to Einstein, gamma = 1 / (1 + v1v2/c^2). It's instructive to consider how Einstein's expression behaves as v1 and v2 approach 0 on the one hand - gamma approaches the Newtonian limit of 1 - and the speed of light on the other hand - gamma approaches 1/2, which then keeps v from ever exceeding c.
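    Those two limits are easy to verify directly. A minimal sketch of the parametrised sum, with the function names chosen for this example:

    ```python
    # Behaviour of the parametrisation gamma = 1 / (1 + v1*v2/c^2)
    # at its two extremes.

    C = 299792.458  # speed of light in km/s

    def gamma(v1, v2):
        return 1 / (1 + v1 * v2 / (C * C))

    def v_sum(v1, v2):
        return gamma(v1, v2) * (v1 + v2)

    print(gamma(0.0, 0.0))  # 1.0 - the Newtonian limit
    print(gamma(C, C))      # 0.5 - the other extreme
    print(v_sum(C, C))      # c itself: the summed speed never exceeds c
    ```

    At everyday speeds gamma is indistinguishable from 1, which is why Newton's constant works so well; at the speed of light the factor of 1/2 exactly cancels the doubling, pinning the sum at c.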

    And if one thinks of the Newtonian, constant value as an approximation, either of the Relativistic expression or of reality, then this introduces an imprecision into the output of the equation that is disconnected from the imprecision of the inputs of the equation.

    This, I believe, is not how physicists typically think about it, though. One reason is that plenty of physical models are explicitly constructed like that, whereas in this case it would be more of a retcon. More importantly, to be considered sound, those models must themselves supply a means of estimating the magnitude of the imprecision they contain. For Newton, you have to step outside the model to come up with such an estimate.

    You'd miss it completely if you maintain that significant digits preclude higher precision in the output than in the inputs.TheMadFool
    Precisely. In F = m*a, the imprecision in F is the combined imprecision in m and a, both of which need to be measured. In v = gamma * (v1+v2), the imprecision in v is the combined imprecision from taking gamma to be a constant and from the straight summation of v1 and v2, which again need to be measured. The only way not to "miss it completely" is for the parametric contribution to be the dominant one, which in practice means either Relativistically high speeds, or high precision in measuring those speeds, or ideally both.
  • TheMadFool
    13.8k
    All I'm saying is that the difference between relativistic velocity addition and Newtonian velocity addition manifests as a matter of precision, unless it doesn't, in which case what's going on, may I ask?
  • onomatomanic
    29
    Re-reading the recent posts, I think any remaining confusion comes down to theory versus application, more than anything else. The concept of "precision" comes into it on both those levels, and it means fundamentally the same thing on both of them - but what it means specifically depends on the specific context.

    To illustrate, let's consider everyone's favourite thought experiment, flipping a coin.

    Theory: The simplest model, let's label it "Alpha", says that there are only two outcomes, heads and tails, and that they have the same probability, Ph = Pt = 50%. Well, actually, there is a third outcome, in which the coin balances on its rim. So in model "Bravo", we treat the coin as a cylinder with radius R and thickness T, and say that the probability for that third outcome depends on those new inputs, Pr = f(R, T), and that the two original outcomes remain equally likely, Ph = Pt = (100% - Pr)/2. But actually, a cylinder has at least two further equilibrium positions, in which it balances on a point along one of the lines at which the rim and the faces meet. So in model "Charlie"...

    Application: Flip a coin, repeat N times, count how often each outcome occurs. The ratio Nh/N measures the probability Ph for heads, et cetera.

    Now, which model is more precise, Alpha or Bravo? A case can be made either way. Alpha predicts Ph to be 50%, which is perfectly precise in the sense that no source of imprecision is included in this model. It's not 0.5 precise to 1 sigfig, or 0.500 precise to 3 sigfigs, but 1/2, the ratio of two integers.

    Bravo, by contrast, expresses the probabilities in terms of physical properties that have to be measured. Those measurements are necessarily imprecise, and because imprecise inputs yield imprecise outputs, this model's numerical predictions cannot be perfectly precise. Bravo is a less precise model than Alpha, in this sense.

    However, treating the coin as a three-dimensional cylinder with thickness T is closer to reality than treating it as a two-dimensional disk with thickness zero. So Bravo can be thought of as approximating reality, and Alpha can be thought of as approximating Bravo, for a typical coin. Being only approximations, neither prediction should be considered precise, but it's reasonable to expect Bravo to be less imprecise than Alpha, in that sense.

    On the applied side, how precise are those measured probabilities? For one thing, a ratio like Nh/N isn't quite the same as that 1/2 above, because the numerator and denominator aren't integers in quite the same sense. As N gets large, miscounting becomes inevitable, so a result like 12345/23456 shouldn't be thought of as perfectly precise any longer. If we estimate the uncertainty to be on the order of 100, say, we can employ scientific notation to write that as (1.23*10^4)/(2.35*10^4) to make that point.

    For another thing, by design, this is about chance, and so there's always a chance the measured probabilities won't agree with the theoretical predictions regardless of whether the model is good or bad. For N=2, there're four simple outcomes - heads then heads again, heads then tails, ... - and half of them are best explained by a model that says "the coin keeps doing the same thing". Fortunately, such flukes get less likely as N gets large - unfortunately, that means that measurements can't avoid both types of imprecision at once.
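    The applied side described above - flip N times, count heads, and watch the statistical imprecision shrink as N grows - can be sketched with a seeded simulation. The helper name and the seed are arbitrary choices for this example.

    ```python
    import random

    # Measuring Ph as Nh/N for a fair simulated coin, per the
    # "Application" paragraph above. The statistical imprecision
    # shrinks roughly as 1/sqrt(N): small N risks flukes, and very
    # large N (in the real world) risks miscounts.

    def measure_heads(n, rng):
        """Flip a fair simulated coin n times and return Nh/N."""
        heads = sum(rng.random() < 0.5 for _ in range(n))
        return heads / n

    rng = random.Random(42)  # fixed seed, so the run is repeatable
    for n in (10, 1000, 100000):
        p = measure_heads(n, rng)
        print(n, p, abs(p - 0.5))  # deviation from the model's 50%
    ```

    Note that this simulates only model Alpha's two-outcome coin; the rim-balancing outcomes of Bravo and Charlie would need the measured R and T that make those models imprecise in the other sense.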

    TLDR, lots of stuff may be thought of as imprecision, and doing so may provide little insight.
  • TheMadFool
    13.8k


    I'm unable to tell whether the extra digits in relativistic velocity addition compared to Newtonian velocity addition are a question of accuracy or precision.

    The calculated velocity has to be measured for confirmation of either theory (Newton's & Einstein's). In other words, the deciding factor is a speedometer's precision and accuracy.

    Suppose the actual velocity is 2.0189 m/s

    The speedometer is both accurate and precise.

    It measures 2.0189 m/s

    Newtonian velocity addition says the velocity should be ?

    Relativistic velocity addition says the velocity should be ?
  • onomatomanic
    29
    The speedometer is both accurate and precise.TheMadFool

    In a thought experiment, you can have such a thing as a perfect speedometer, and use it to perfectly determine relative speeds, and use those to test models against each other, as long as their predictions differ at all.

    In the real world, a speedometer can't be perfect, only better or worse than another speedometer. To be able to test models against each other, their predictions need to differ by enough to overcome those imperfections.

    Suppose the actual velocity is [...]TheMadFool
    In the real world, there's no point in supposing such a thing, because the only way we can find out is to measure it. In a thought experiment, there may be a point - but thought experiments can't confirm theories, only falsify hypotheses that are internally inconsistent.
