My objection was that probability is not the same thing as epistemic uncertainty. — aletheist
Out of curiosity, how do you deal with ontic uncertainty? Do you treat vagueness and propensity as elements of reality? Would you go as far as extending the principle of indifference to nature itself?
The problem here, as I see it, is that logic and probability as used in this thread depend on strict counterfactuality - the validity of the law of the excluded middle (LEM). So either side of the argument still presumes that it deals with a world that is crisp and particular, not one that is vague and therefore also capable of being truly general.
In our mathematical models of probability - like coin flips, roulette wheels, packs of cards and other "games of chance" - the world is ontically determinate. Or at least we attempt to create mechanical situations that are as constrained, and therefore as determinate, as we care to make them. And in constraining nature to that degree, we then grant ourselves the privilege of maximising our own epistemic uncertainty. We can make it completely a matter of our own indifference that we don't know what the outcome of the next flip, spin, or shuffle is going to be.
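To make that concrete, here is a minimal Python sketch (the seed and labels are my own, purely illustrative). The "coin" is a seeded pseudo-random generator, so as a mechanism it is fully determinate - yet we still model it with a 50/50 assignment, because the uncertainty is supplied entirely by our indifference, not by the device:

```python
import random

# An ontically determinate "coin": given the seed, every flip is already
# fixed by the mechanism. Nothing about the device itself is uncertain.
rng = random.Random(42)
flips = [rng.choice(["heads", "tails"]) for _ in range(10)]

# Our epistemic model ignores that determinism. By the principle of
# indifference we assign P(heads) = P(tails) = 0.5, because we lack
# (or deliberately renounce) any knowledge of the mechanism's state.
p_heads = 0.5

print(flips)    # fully reproducible: same seed, same sequence every run
print(p_heads)  # yet modelled as maximal uncertainty
```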
So there is a sly transfer from a real world with actual uncertainty (perhaps) to an ideal world made "ontically determinate" by an act of care, by deliberate design - which then makes it safe to assign all uncertainty to an epistemic cause: our own personal indifference about outcomes, our own lack of control over whether the next flip is heads or tails, the next swan black or not.
So there is then a real danger in taking this rather artificially manufactured state of epistemic uncertainty - one modelled after games of chance - and using it to prove something about ontic reality. It is a similar error to apply standard predicate logic to the real world without regard for the artificiality of the counterfactual determinism that is the LEM-style pivot of its modelling.
I have a black cat. But when it sits in the bright sun, it looks more chocolate brown. If I am reasoning about black cats, or black swans, or the ace of spades, I ignore such quibbles as a matter of indifference - for the sake of modelling. And yet back in the "real world", it could always be a Gettier-style issue whether some black swan is really black (their feathers too look chocolate brown in bright sun), or really a swan (maybe the next example to cross our path is a plastic toy, or some visiting alien).
The OP, as I understand it, is concerned with how to model the world. So it talks about inductive evidence and the bolstering of states of belief (or epistemic certainty). And that in itself is best modelled, I would say, by Bayesian reasoning. So it is not really paradoxical that green apples might count as evidence in some inductively strengthening sense - even if at a huge remove. Instead it seems quite sensible that if As can be consistently B (apples keep turning up green), then by generalisation, it is more plausible that other As have their own consistent Bs (swans are black, fires are hot, cats have claws).
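For what it's worth, the Bayesian point can be made quantitative with a toy calculation (every number below is my own illustrative assumption, nothing more). Take H to be "all swans are black", then sample a random non-black object and find that it is an apple, not a swan. The observation nudges the posterior above the prior - but only just, which is why the apple "counts" at such a huge remove:

```python
# H: "all swans are black". We sample a random NON-BLACK object and it
# turns out to be an apple, i.e. a non-swan. Numbers are illustrative.

n_nonblack = 1_000_000    # non-black objects in our toy world
k_nonblack_swans = 10     # non-black swans there would be if H were false

p_H = 0.5                 # prior credence in H
p_obs_given_H = 1.0       # under H, no non-black object can be a swan
p_obs_given_notH = (n_nonblack - k_nonblack_swans) / n_nonblack

# Bayes' theorem: P(H | obs) = P(obs | H) P(H) / P(obs)
posterior = (p_obs_given_H * p_H) / (
    p_obs_given_H * p_H + p_obs_given_notH * (1 - p_H)
)

print(f"prior     P(H) = {p_H}")
print(f"posterior P(H) = {posterior:.8f}")  # barely nudged above 0.5
```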
But predicate logic can't prove inductive beliefs; it can only sharpen their test by deductively isolating the putatively counterfactual. Swans either have blackness as a universal property ... or they don't. The problem then is that reality itself isn't so black and white. Instead - we might have good reason to believe - it is ontically vague and therefore only rises to the state of having certain well-formed propensities. Black swans are highly likely to be always black (given a certain shared history of genetic constraints). But also - as propensities express goals or purposes - at some level there will also emerge a degree of indifference. Blackness might be a matter of degree (some swans might be more chocolatey than others - and evolution "doesn't care").
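To show what dropping that LEM-style crispness could look like formally, here is a rough sketch using fuzzy-set membership as one possible stand-in for ontic vagueness (the reflectance thresholds are entirely my own invention):

```python
# Blackness as a matter of degree rather than a crisp predicate. A fuzzy
# membership function maps a measured reflectance (0 = pitch black,
# 1 = white) to a degree of "blackness" in [0, 1], so the LEM question
# "black or not-black?" gets no forced answer in the penumbra.

def blackness(reflectance: float) -> float:
    """Degree to which a surface counts as black (thresholds illustrative)."""
    if reflectance <= 0.1:
        return 1.0                       # definitely black
    if reflectance >= 0.4:
        return 0.0                       # definitely not black
    return (0.4 - reflectance) / 0.3     # the vague zone in between

# My black cat in shade vs. in bright sun (chocolate-brown highlights):
print(blackness(0.05))   # 1.0 -> crisply black
print(blackness(0.25))   # 0.5 -> neither black nor not-black
```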
So green apples don't relate to black swans in any direct deductive logical fashion. Only in an inductive one. But deduction itself is founded on the un-reality of black and white counterfactuality. It is "pure model" that by design cuts the umbilical cord to the world it models (that being not a bug but a feature: the formal disconnect of the LEM is why it is so semiotically powerful a move).
And our standard models of probability - games of chance - do the same trick. They are ontically unreal in that they are manufactured situations where it is the absolute determinism of a sign (the suit of a card, the heads or tails of a coin, the number of a roulette slot) that underwrites a completely epistemic state of indifference (as to which sign we might next read off a device as "an unpredicted state of the world").
So we have an elaborate machinery of thought - one that by design excludes the very possibilities of ontic vagueness and ontic propensity. Both predicate logic and probability theory depend on this exclusion for their epistemic robustness. We can know the world to "be that way" because that is how we have constructed the formal acts of measurement that become all we know of the world. We reduce messy existence to some internalised play of marks - the numbers, or colours, or other values we read off the world as "facts".
But realising that this is the semiotic game being played re-opens the question for metaphysics of what is really the case for ontic existence itself. If we could see past the very instruments of perception we have constructed for ourselves - these rational counterfactual modelling tricks - what would be the reality we then see?
Which is where we have to start constructing a better model - like an understanding of probability that is expanded by notions of ontic vagueness and ontic propensity (which of course is where Peirce comes in as a pioneer).
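As a very rough gesture at what such an expanded model might look like (a sketch under my own assumptions, using interval-valued "imprecise" probabilities as one possible stand-in for propensities that are constrained only up to a residual indifference):

```python
from dataclasses import dataclass

@dataclass
class Propensity:
    """A chance constrained only to an interval: the interval's width is
    the residue of ontic vagueness that a point probability papers over."""
    lower: float
    upper: float

    def is_settled(self) -> bool:
        # A crisp, game-of-chance probability is the degenerate case
        # where the interval has collapsed to a point.
        return self.lower == self.upper

# Swan blackness as a well-formed but not fully determinate propensity:
swan_black = Propensity(lower=0.95, upper=1.0)   # numbers illustrative
coin_heads = Propensity(lower=0.5, upper=0.5)    # the manufactured case

print(swan_black.is_settled())   # False: evolution "doesn't care" exactly
print(coin_heads.is_settled())   # True: determinism by design
```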