• Pfhorrest
    I am reading along, but so far no questions, or corrections, it's all clear and accurate as far as I can see. :-)
  • fdrake
    So we've got the ability to talk about natural numbers and addition, and identify expressions containing them with each other (like 1+2 = 3 = 2+1). The natural numbers are 0,1,2,3,... and so on, and it would be nice to have a formal theory of subtraction to go along with it.

    There's a pretty obvious problem with defining subtraction in the system to work in precisely the same way we're taught in school, like 1-2 = ... what? -1, says school. But -1 isn't a natural number at all. What this means is that for any sense of subtraction which works exactly like the one we have in school, the natural numbers don't let you do it in the general case.

    You can define subtraction between two natural numbers a,b whenever a is greater than or equal to b, and it works exactly as expected in that case. But being able to subtract any number from any other and obtain something we've already defined is extremely desirable. In other words, whatever structure we define subtraction on, it has to be closed under subtraction to work as expected in every case.

    An observation that's extremely useful in setting up subtraction in the above way is the following list:

    1-1 = 0
    2-2 = 0
    3-3 = 0
    ...

    Which demonstrates the, probably obvious, property of subtraction that a number minus itself is always zero. And recall that any number plus zero is itself.

    Imagine, now, that you're not looking at familiar numbers, and instead you're looking at symbols. There is some operation such that for any x (say 2), there exists some y (say 2) that when you combine them together with the operation, you get 0. What would be nice would be to be able to take the original operation we defined, addition, and define additional elements into the structure of natural numbers so that for any number x we can guarantee the existence of some y so that we can have:

    x + y = 0
    We want to ape the ability of subtraction to take any x and ensure that we can subtract x from it and get 0, but make it a property of addition instead. This property is called the existence of an additive inverse for every element of the structure, or more generally the existence of an inverse for every element under an operation.

    Predictably, the additive inverse of 2 is (-2), the additive inverse of 1 is (-1)... the additive inverse of x is (-x).

    So the first list can be rewritten as:

    1+(-1) = 0
    2+(-2) = 0
    3+(-3) = 0
    ...
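    The construction can be sketched in code. A toy version in Python (the encoding of an integer as a sign paired with a natural number is my own choice, not something from the discussion above): every element x gets an inverse neg(x), and addition is defined by cases so that x + neg(x) really comes out to 0.

    ```python
    # Toy integers: a pair (sign, n) with n a natural number.
    # This is just a sketch of "copy the naturals and mirror them".

    def neg(x):
        sign, n = x
        # the additive inverse flips the sign; -0 is still 0
        return (sign if n == 0 else ('-' if sign == '+' else '+'), n)

    def add(x, y):
        (sx, nx), (sy, ny) = x, y
        if sx == sy:                  # same sign: add the magnitudes
            return (sx, nx + ny)
        if nx >= ny:                  # different signs: subtract the smaller
            return (sx, nx - ny)      # magnitude from the larger (natural
        return (sy, ny - nx)          # subtraction, always defined here)

    two = ('+', 2)
    assert add(two, neg(two)) == ('+', 0)   # 2 + (-2) = 0
    ```

    The point of the case analysis is that natural-number subtraction is only ever used where it's defined (larger minus smaller), which is exactly the restricted subtraction mentioned earlier.
    
    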
  • fdrake
    The same problem happens with multiplication.

    2 * x = 1

    has no solutions when x is a natural number. That is, there are no multiplicative inverses in the natural numbers (or the integers, apart from 1 and -1). But there are more conceptual issues to work out here.

    For addition, it made a lot of sense to copy all the natural numbers, append - to them, then simply define -x as the additive inverse of x. Geometrically, this can be thought of as mirroring the positive number line into the negative one, inducing a copy.

    I'm going to call the natural numbers with additive inverses "the integers".

    [number line figure: the positive numbers mirrored across 0 into the negative numbers]
    But division is a bit more tricky; we need to define a new structure to somehow "mix" natural numbers with each other to produce parts of natural numbers which are not natural numbers.

    As always, the use of fractions preceded their axiomatization. So we can go from the use and try to guess axioms.

    When we write a fraction, it's something like 1/2, or 22/7, or 22+3/5, or 1 - 1/2 (ignoring decimals because they mean the same thing). Cases like 22+3/5 reduce to cases like 1/2; 22+3/5 = 113/5.

    What we have to work with then are pairs of numbers. We could write 113/5 as (113,5) to signify this. So we will. So all fractions can be written as a pair of integers like that. Now we've got a way of writing all the fractions, we can define operations on them.

    Multiplication and division of fractions are actually easier to define in this way than addition and subtraction.

    We want 1/2 * 4/7 to be 4/14 (that is, 2/7), so if we have two fractions (a,b) and (c,d), (a,b)*(c,d) = (ac,bd), where ac is a times c as natural numbers and bd is b times d as natural numbers.

    Division follows with a minor tweak. (a,b)/(c,d) = (ad,bc); dividing by a fraction is the same as multiplying by 1/ that fraction.
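    These two rules are short enough to write out directly. A minimal Python sketch, with pairs (a, b) standing for a/b (the function names mul and div are mine):

    ```python
    # Fractions as pairs: (a, b) stands for a/b.

    def mul(x, y):
        (a, b), (c, d) = x, y
        return (a * c, b * d)      # (a,b)*(c,d) = (ac, bd)

    def div(x, y):
        (a, b), (c, d) = x, y
        return (a * d, b * c)      # dividing by (c,d) = multiplying by (d,c)

    assert mul((1, 2), (4, 7)) == (4, 14)   # 4/14, the same fraction as 2/7
    assert div((1, 2), (4, 7)) == (7, 8)    # (1/2)/(4/7) = 7/8
    ```

    Note that mul gives (4, 14) rather than (2, 7); those are different pairs representing the same fraction, which is exactly the equivalence issue dealt with below.
    
    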

    Addition is a bit harder.

    (a,b)+(c,d) = ?

    Go by example.

    We have 2/7 + 3/5; as in school, the first thing to do is write them over a common denominator. One way of getting that is just to multiply the bottom numbers together, then multiply each top number by the bottom number of the other fraction. This gives us two fractions numerically equal to the starting ones, so the sum will be the same.

    In symbols:

    2/7 + 3/5 = (5*2)/(5*7) + (7*3)/(7*5) = 10/35 + 21/35

    Then the tops can be added together as they're over a common bottom.

    2/7 + 3/5 = (5*2)/(5*7) + (7*3)/(7*5) = 10/35 + 21/35 = (10+21)/35 = 31/35

    If we replace 2,7 by a,b and 3,5 by c,d:

    (a,b) + (c,d) = (d*a,d*b)+(b*c,b*d) = (ad+bc,bd)
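    The formula can be checked against the worked example. A small Python sketch (the function name add is mine):

    ```python
    # Fraction addition on pairs: (a,b) + (c,d) = (ad + bc, bd).

    def add(x, y):
        (a, b), (c, d) = x, y
        return (a * d + b * c, b * d)

    assert add((2, 7), (3, 5)) == (31, 35)   # 2/7 + 3/5 = 31/35
    ```
    
    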

    Careful reading shows that we've not introduced an axiom for combining the pairs through addition rigorously, especially since it requires the common denominator trick.

    One way of making this rigorous is to take as an axiom: for every nonzero integer a, (a,a) = 1 (with the assumption that 1 behaves like it should, i.e. 1 times anything is itself). With that, you can simply define addition as what results when the fractions are expressed in terms of a common denominator. Why does this work?

    (a,b) + (c,d) = by definition = (ad+cb,bd)

    Now imagine that we have two representations of the same thing; checking that the result doesn't depend on which representation we picked is called a well-definedness proof.

    Let's set (e,f)=(a,b) and (g,h) = (c,d). The question is, if we compute (e,f)+(g,h), do we end up at the same result as computing (a,b) + (c,d) ? How can we turn e's into a's, as it were?

    Well, we could try and say that two fractions are equal only when their parts match: if (a,b) = (c,d), then a=c and b=d... Again, there's a but:

    Note that 1 has lots of representations: (1,1), (2,2), (-2,-2), and so on. They all equal 1 when written as fractions, but 1 isn't equal to 2. What gives? They're clearly "morally equal", but the strict equality of the last paragraph is too restrictive to get the job done.

    We were in a similar situation before; with expressions like 1+3 and 2+2, we want them to be equal, but they contain different things. We want to take collections of representations of things and glue them together.

    We actually have enough here to do the work. If we define two fractions (a,b) and (c,d) as equivalent when (c,d) = (e*a,e*b) for some nonzero integer e... Does this do the job?

    Instead of stipulating:
    if (a,b) = (c,d), then a=c and b=d
    we instead stipulate
    (a,b) = (c,d) when c=e*a and d=e*b for some nonzero integer e.

    Now we can try and complete the well-definedness proof; if we start off with two sums of fractions, will their results necessarily match? I'm just going to write = here to mean equivalent. Two fractions will be equal (equivalent) when one is a representation of 1 (like (2,2)) times the other.

    (a,b) = (e,f), (c,d) = (g,h)
    Question: (a,b)+(c,d) = (equivalent) = (e,f)+(g,h)?

    From the equivalences, we have:

    e = k*a, f = k*b for some nonzero integer k
    g = l*c, h = l*d for some nonzero integer l

    Then, using these and the definition of addition above:

    (e,f)+(g,h) = (eh+gf,fh) = ( [ka][ld] + [lc][kb], [kb][ld] )
    You can rearrange the square brackets inside using the rules of natural number (and integer) arithmetic:
    (e,f)+(g,h) = (eh+gf,fh) = ( [ka][ld] + [lc][kb], [kb][ld] ) = ( [kl]ad+[kl]cb, [kl]bd)

    That last one, there's a common factor on top and bottom:
    ( [kl]ad+[kl]cb, [kl]bd) = ( [kl](ad+cb), [kl]bd )

    The common factor is the same on top and bottom, so we can pull it out (by the definition of multiplication of fractions) as

    ( [kl](ad+cb), [kl]bd )
    = (kl,kl)*(ad+cb,bd)

    So by: two fractions will be equal (equivalent) when one is a representation of 1 (like (2,2)) times the other, the two representations come out the same, completing the proof.
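    The argument can also be checked numerically. A Python sketch (my own helper names; I test equivalence by cross-multiplication, a*d == b*c, which agrees with the scaling definition above whenever the denominators are nonzero): add the same two fractions via two different pairs of representatives and confirm the results are equivalent.

    ```python
    # Well-definedness check: the sum doesn't depend on representatives.

    def add(x, y):
        (a, b), (c, d) = x, y
        return (a * d + b * c, b * d)

    def equivalent(x, y):
        (a, b), (c, d) = x, y
        return a * d == b * c    # cross-multiplication test

    # (1,2) and (3,6) represent the same fraction; so do (2,7) and (6,21).
    s1 = add((1, 2), (2, 7))     # (1*7 + 2*2, 2*7) = (11, 14)
    s2 = add((3, 6), (6, 21))    # (3*21 + 6*6, 6*21) = (99, 126)
    assert equivalent(s1, s2)    # 11*126 == 14*99, both 1386
    ```

    The two sums come out as different pairs, but they differ by exactly the common factor the proof pulls out, so they're equivalent.
    
    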
  • fdrake
    Something absent from the discussion so far is the idea of an ordering. Loosely, an ordering is a way of defining when one object is bigger or smaller than another. Collections of sets, the natural numbers, and the integers all have orderings on them.

    Collections of sets have an ordering on them, the subset relationship. If we have the collection of sets:

    {1,2,3}, {1}, {2}, {3}, {1,2}, {1,3}, {2,3}

    You can call one set X smaller than another Y if X is a subset of Y. So {1} would be less than {1,3}, since the first is a subset of the second. When there are orderings, there comes an associated idea of biggest and smallest with respect to the ordering. In the above list, {1,2,3} is the biggest element, since every element is a subset of it.

    But the sets {2} and {3} are not comparable, since neither is a subset of the other.

    Contrast natural numbers. Consider the set of natural numbers {1,2,3}. The number 3 is the biggest there; 3 is bigger than 2, 2 is bigger than 1. Every natural number is comparable to every other natural number with the usual idea of how to order them by size, but not every set is comparable to every other set when comparing them by the subset relationship.
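    The incomparability is easy to see concretely. A quick illustration in Python (my own, using the built-in frozenset, whose <= operator is exactly the subset relation):

    ```python
    # The subset ordering on the collection of sets above.
    sets = [frozenset(s) for s in
            [{1, 2, 3}, {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}]]

    a, b = frozenset({2}), frozenset({3})
    assert not (a <= b) and not (b <= a)    # {2} and {3}: incomparable

    top = frozenset({1, 2, 3})
    assert all(s <= top for s in sets)      # {1,2,3} is the biggest element
    ```
    
    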

    Setting out the ordering on the natural numbers, you can call one natural number x greater than some natural number y if and only if (definitionally) x = S^n ( y ) for some n ≥ 1. IE, x is bigger than y if you can apply the successor function to y some positive number of times and get x. This immediately ensures that all natural numbers are comparable, since every natural number is obtained from 0 by repeated succession.
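    This successor-based definition translates directly into code. A sketch in Python (the function names are mine; the loop is just a bounded search for the right number of successor applications):

    ```python
    # x > y exactly when some positive number of successor steps takes y to x.

    def S(n):
        return n + 1   # stand-in for the successor function

    def greater(x, y):
        # repeatedly apply S to y; if we ever land on x, then x > y
        while y < x:   # successors only increase, so the search is bounded
            y = S(y)
            if y == x:
                return True
        return False

    assert greater(5, 2)         # 5 = S(S(S(2)))
    assert not greater(2, 5)
    assert not greater(3, 3)     # n must be at least 1, so x is not > itself
    ```
    
    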

    It's pretty clear that the way we compare fraction sizes resembles the way we compare natural numbers in some ways; for any two fractions, you can tell which is bigger or if they're the same. The way we compare fraction sizes does not resemble the way we compare sets by the subset relationship; there aren't pairs of fractions which are incomparable like {2} and {3} are.

    But they have differences too: any pair of distinct fractions has a fraction in between them, whereas natural numbers do not. Natural numbers come in lumps of 1, the smallest positive difference between two natural numbers. Fractions have no smallest positive difference at all.
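    The density claim can be made concrete: between any two distinct fractions (a,b) and (c,d) sits their average, (ad + bc, 2bd). A small Python check (the helper name is mine; I use the standard-library Fraction type just to verify the ordering):

    ```python
    from fractions import Fraction

    def between(x, y):
        # the average of a/b and c/d is (ad + bc) / (2bd)
        (a, b), (c, d) = x, y
        return (a * d + c * b, 2 * b * d)

    x, y = (1, 3), (1, 2)            # 1/3 and 1/2
    m = between(x, y)                # (1*2 + 1*3, 2*3*2) = (5, 12)
    fx, fm, fy = (Fraction(*p) for p in (x, m, y))
    assert fx < fm < fy              # 1/3 < 5/12 < 1/2
    ```

    Applying between again to x and m gives a fraction strictly closer to x, and so on forever, which is why there's no smallest positive difference.
    
    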

    It's also pretty clear that there's more than one way of ordering things; the kind of ordering sets have by the subset relation is not the kind of ordering natural numbers (or fractions) have by the usual way we compare them.

    What this suggests is that it's important to make axioms for the different types of ordering relationships as suits our interest. One intuitive property the ordering on the natural numbers has but the subset one lacks is that the ordering on the natural numbers appears to represent something about magnitudinal quantities; the mathematical objects numbers /are/ are sizes, and those sizes can be compared.

    Whereas the subset relation breaks this intuition; it doesn't represent comparison of a magnitudinal property, it represents something like nesting or containedness. The UK contains Scotland, England and Northern Ireland, but Scotland doesn't contain England or Northern Ireland, England doesn't contain Scotland or Northern Ireland, and Northern Ireland doesn't contain Scotland or England; Scotland, England and Northern Ireland can only be compared to the UK in that manner, and not to each other. Another example: Eukaryotes contain Animals and Fungi, but Animals don't contain Fungi and Fungi don't contain Animals.

    What properties then characterise the ordering of naturals and fractions, and distinguish them from the ordering of province and country or the ordering of classifications of biological kinds?
  • Pfhorrest
    Interesting, I feel like we're about to get to something I might not already be familiar with.

    Also of interest, the next essay in my Codex Quarentis series of threads will be closely related to this thread, since the writing of it was the inspiration for creating this thread. I'd love your input on it when I post it early next week.