• Pfhorrest
    4.6k
    I am reading along, but so far no questions, or corrections, it's all clear and accurate as far as I can see. :-)
  • fdrake
    6.7k
    So we've got the ability to talk about natural numbers and addition, and identify expressions containing them with each other (like 1+2 = 3 = 2+1). The natural numbers are 0,1,2,3,... and so on, and it would be nice to have a formal theory of subtraction to go along with it.

    There's a pretty obvious problem with defining subtraction in the system to work in precisely the same way we're taught in school, like 1-2 = ... what? -1, says school. But -1 isn't a natural number at all. What this means is that for any sense of subtraction which works exactly like the one we have in school, the natural numbers don't let you do it in the general case.

    You can define subtraction between two natural numbers a,b whenever a is greater than or equal to b and have it work exactly as expected in that case, but being able to subtract any number from any other and obtain something that we've already defined is extremely desirable. In other words, whatever structure we define subtraction in, it has to be closed under subtraction to work as expected in every case.

    An observation that's extremely useful in setting up subtraction in the above way is the following list:

    1-1 = 0
    2-2=0
    3-3=0
    4-4=0
    ...
    1+0=1
    2+0=2
    3+0=3
    4+0=4
    ...

    This demonstrates the (probably obvious) property of subtraction that a number minus itself is always zero, and that any number plus zero is itself.

    Imagine, now, that you're not looking at familiar numbers, and instead you're looking at symbols. There is some operation such that for any x (say 2), there exists some y (say 2) that when you combine them together with the operation, you get 0. What would be nice would be to be able to take the original operation we defined, addition, and define additional elements into the structure of natural numbers so that for any number x we can guarantee the existence of some y so that we can have:

    x+y=0=y+x

    We want to ape the ability of subtraction to take any x and ensure that we can subtract x from it and get 0, but make it a property of addition instead. This property is called the existence of an additive inverse for every element of the structure, or more generally the existence of an inverse for every element under an operation.

    Predictably, the additive inverse of 2 is (-2), the additive inverse of 1 is (-1)... the additive inverse of x is (-x).

    So the first list can be rewritten as:

    1+(-1) = 0
    2+(-2)=0
    3+(-3)=0
    4+(-4)=0
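
    The extended structure can be sketched in Python. This is just an illustration, and the sign-tag representation and function names are my own invention: an extended number is a tagged pair ('pos', n) or ('neg', n) with n a natural, and addition is defined case by case so that every element has an additive inverse.

```python
def inv(x):
    """Additive inverse: swap the sign tag (0 is its own inverse)."""
    sign, n = x
    if n == 0:
        return ('pos', 0)
    return ('neg', n) if sign == 'pos' else ('pos', n)

def add(x, y):
    """Addition on the extended structure, defined case by case."""
    (sx, nx), (sy, ny) = x, y
    if sx == sy:                      # same sign: add magnitudes
        return (sx, nx + ny)
    if nx >= ny:                      # opposite signs: subtract the smaller
        return (sx, nx - ny) if nx > ny else ('pos', 0)
    return (sy, ny - nx)

for n in range(5):
    x = ('pos', n)
    assert add(x, inv(x)) == ('pos', 0)  # x + (-x) = 0
    assert add(inv(x), x) == ('pos', 0)  # (-x) + x = 0
```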
  • fdrake
    6.7k
    The same problem happens with multiplication.

    2 * x = 1

    Has no solutions when x is a natural number. That is, there are no multiplicative inverses in the natural numbers (or the integers, apart from 1 and -1). But there are more conceptual issues to work out here.

    For addition, it made a lot of sense to copy all the natural numbers, append - to them, then simply define -x as the additive inverse of x. Geometrically, this can be thought of as mirroring the positive number line into the negative one, inducing a copy.

    I'm going to call the natural numbers with additive inverses "the integers".

    Positive numbers
    0------->

    Negative numbers
    <------0

    Integers
    <-------0------>

    But division is a bit more tricky; we need to define a new structure to somehow "mix" natural numbers with each other to produce parts of natural numbers which are not natural numbers.

    As always, the use of fractions preceded their axiomatization. So we can go from the use and try to guess axioms.

    When we write a fraction, it's something like 1/2, or 22/7, or 22+3/5, or 1 - 1/2 (ignoring decimals because they mean the same thing). Cases like 22+3/5 reduce to cases like 1/2; 22+3/5 = 113/5.

    What we have to work with then are pairs of numbers. We could write 113/5 as (113,5) to signify this. So we will. So all fractions can be written as a pair of integers like that. Now we've got a way of writing all the fractions, we can define operations on them.

    Multiplication and division of fractions are actually easier to define in this way than addition and subtraction.

    We want 1/2 * 4/7 to be 2/7 (equivalently, 4/14), so if we have two fractions (a,b) and (c,d), (a,b)*(c,d) = (ac,bd), where ac is a times c as natural numbers and bd is b times d as natural numbers.

    Division follows with a minor tweak. (a,b)/(c,d) = (ad,bc); dividing by a fraction is the same as multiplying by 1/ that fraction.
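
    These two rules are easy to sketch in Python; the function names are mine, and pairs are ordinary tuples.

```python
def mul(f, g):
    """(a,b)*(c,d) = (ac, bd): multiply the tops together and the bottoms together."""
    (a, b), (c, d) = f, g
    return (a * c, b * d)

def div(f, g):
    """(a,b)/(c,d) = (ad, bc): dividing by (c,d) is multiplying by (d,c)."""
    (a, b), (c, d) = f, g
    return (a * d, b * c)

assert mul((1, 2), (4, 7)) == (4, 14)   # 1/2 * 4/7 = 4/14, the same fraction as 2/7
assert div((1, 2), (4, 7)) == (7, 8)    # 1/2 divided by 4/7 = 7/8
```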

    Addition is a bit harder.

    (a,b)+(c,d) = ?

    Go by example.

    We have 2/7 + 3/5; as in school, the first thing to do is write them over a common denominator. One way of getting that is just to multiply the bottom numbers together, then times each top number by the bottom number of the other fraction. This gives us two fractions numerically equal to the starting ones, so the sum will be the same.

    In symbols:

    2/7 + 3/5 = (5*2)/(5*7) + (7*3)/(7*5) = 10/35 + 21/35

    Then the tops can be added together as they're over a common bottom.

    2/7 + 3/5 = (5*2)/(5*7) + (7*3)/(7*5) = 10/35 + 21/35 = (10+21)/35 = 31/35

    If we replace 2,7 by a,b and 3,5 by c,d:

    (a,b) + (c,d) = (d*a,d*b)+(b*c,b*d) = (ad+bc,bd)
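
    As a quick Python sketch of this rule (the function name is mine):

```python
def add(f, g):
    """(a,b) + (c,d) = (ad+bc, bd): the common-denominator trick as one formula."""
    (a, b), (c, d) = f, g
    return (a * d + b * c, b * d)

assert add((2, 7), (3, 5)) == (31, 35)  # 2/7 + 3/5 = 31/35
assert add((1, 2), (1, 2)) == (4, 4)    # 1/2 + 1/2 comes out as a representation of 1
```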

    Careful reading shows that we've not introduced an axiom for combining the pairs through addition rigorously. Especially since it requires the common denominator trick.

    One way of making this rigorous is to make the axiom: for every nonzero integer a, (a,a) = 1 (with the assumption that 1 behaves like it should, 1 times anything is itself). Now with that, you can simply define addition as what would result only when the fractions are expressed in terms of a common denominator. Why does this work?

    (a,b) + (c,d) = by definition = (ad+cb,bd)

    Now imagine that we have two representations of the same thing - this is called a well definition proof.

    Let's set (e,f)=(a,b) and (g,h) = (c,d). The question is, if we compute (e,f)+(g,h), do we end up at the same result as computing (a,b) + (c,d) ? How can we turn e's into a's, as it were?

    Well, we could try to say when two fractions are equal: if (a,b) = (c,d), then a=c and b=d... Again, there's a but:

    Note that 1 has lots of representations: (1,1), (2,2), (-2,-2), and so on. They all equal 1 when written as fractions, yet under the strict equality above (1,1) and (2,2) are different pairs, since 1 isn't equal to 2. What gives? They're clearly "morally equal", but strict equality in the last paragraph is too restrictive to get the job done.

    We were in a similar situation before; with expressions like 1+3 and 2+2, we want them to be equal, but they contain different things. We want to take collections of representations of things and glue them together.

    We actually have enough here to do the work. If we define two fractions (a,b) and (c,d) as equivalent when (c,d) = (e*a,e*b) for some nonzero integer e... Does this do the job?

    Instead of stipulating:
    (a,b) = (c,d), then a=c and b=d
    we instead stipulate
    (a,b) = (c,d), then c=e*a and d=e*b for some nonzero integer e.

    Now we can try and complete the well definition proof; if we start off with two sums of fractions, will their results necessarily match? I'm just going to write = here to mean equivalent. Two fractions will be equal (equivalent) when one is a representation of 1 (like (2,2)) times the other.

    (a,b) = (e,f), (c,d) = (g,h)
    Question: (a,b)+(c,d) = (equivalent) = (e,f)+(g,h)?

    From the equivalences, we have:
    e=ka
    f=kb
    g=lc
    h=ld

    Then using these and the definition of addition above

    (e,f)+(g,h) = (eh+gf,fh) = ( [ka][ld] + [lc][kb], [kb][ld] )
    You can rearrange the square brackets inside using the rules of natural number (and integer) arithmetic:
    (e,f)+(g,h) = (eh+gf,fh) = ( [ka][ld] + [lc][kb], [kb][ld] ) = ( [kl]ad+[kl]cb, [kl]bd)

    That last one, there's a common factor on top and bottom:
    ( [kl]ad+[kl]cb, [kl]bd) = ( [kl](ad+cb), [kl]bd )

    The common factor is the same. So we can pull it out (by the definition of multiplication of fractions) as

    ( [kl](ad+cb), [kl]bd )
    = (kl,kl)*(ad+cb,bd)

    So by: two fractions will be equal (equivalent) when one is a representation of 1 (like (2,2)) times the other, the two representations come out the same, completing the proof.
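
    The whole argument can be checked numerically in Python. Here I use the cross-multiplication test a*d == b*c for equivalence, which agrees with the "one is a representation of 1 times the other" definition above; the function names are mine.

```python
def add(f, g):
    """(a,b) + (c,d) = (ad+bc, bd), as defined in the thread."""
    (a, b), (c, d) = f, g
    return (a * d + b * c, b * d)

def equivalent(f, g):
    """(a,b) ~ (c,d) when the cross-products match: a*d == b*c."""
    (a, b), (c, d) = f, g
    return a * d == b * c

# Two representations of the same two fractions: (1,2)~(3,6) and (2,3)~(10,15).
s1 = add((1, 2), (2, 3))     # (1*3 + 2*2, 2*3)    = (7, 6)
s2 = add((3, 6), (10, 15))   # (3*15 + 6*10, 6*15) = (105, 90)
assert equivalent(s1, s2)    # 7*90 == 6*105 == 630: the sums agree
```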
  • fdrake
    6.7k
    Something absent from the discussion so far is the idea of an ordering. Loosely, an ordering is a way of defining when one object is bigger or smaller than another. Sets themselves, natural numbers and integers all have orderings on them.

    Collections of sets have an ordering on them, the subset relationship. If we have the collection of sets:

    {1,2,3}, {1}, {2}, {3}, {1,2}, {1,3}, {2,3}

    You can call one set X smaller than another Y if X is a subset of Y. So {1} would be less than {1,3}, since the first is a subset of the second. When there are orderings, there comes an associated idea of biggest and smallest with respect to the ordering. In the above list, {1,2,3} is the biggest element, since every element is a subset of it.

    But the sets {2} and {3} are not comparable, since neither is a subset of the other.

    Contrast natural numbers. Consider the set of natural numbers {1,2,3}. The number 3 is the biggest there; 3 is bigger than 2, 2 is bigger than 1. Every natural number is comparable to every other natural number with the usual idea of how to order them by size, but not every set is comparable to every other set when comparing them by the subset relationship.
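
    Python's set operators implement exactly this subset ordering, so the contrast can be demonstrated directly (a small illustration):

```python
sets = [{1, 2, 3}, {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}]

# Subset ordering: X is smaller than Y when X is a proper subset of Y.
assert {1} < {1, 3}                         # comparable: {1} is "smaller"
assert not ({2} < {3}) and not ({3} < {2})  # an incomparable pair

# {1,2,3} is the biggest element: every set in the list is a subset of it.
assert all(s <= {1, 2, 3} for s in sets)

# Natural numbers, by contrast, are totally ordered: any two compare.
assert all(x < y or y < x or x == y for x in range(4) for y in range(4))
```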

    Setting out the ordering on the natural numbers, you can call one natural number x greater than some natural number y if and only if (definitionally) x = S^n ( y ) for some n. IE, x is bigger than y if you can apply the successor function to y a few times and get x. This immediately ensures that all natural numbers are comparable, since every nonzero natural number is obtained from 0 by repeatedly applying the successor function.
    (equivalently, if one natural number x represented as a collection of sets is a subset of another y then x<y !)
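
    A sketch of this definition in Python, with naturals as plain ints and a search bound added just to keep the loop finite (both my own devices):

```python
def successor(n):
    """S(n), the successor function from earlier in the thread."""
    return n + 1

def greater_than(x, y, limit=1000):
    """x > y iff applying S to y some positive number of times reaches x.
    The limit just keeps this sketch's search finite."""
    current = y
    for _ in range(limit):
        current = successor(current)
        if current == x:
            return True
    return False

assert greater_than(5, 2)        # 5 = S(S(S(2)))
assert not greater_than(2, 5)    # no number of successors takes 5 down to 2
assert not greater_than(3, 3)    # strict: x is not greater than itself
```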


    It's pretty clear that the way we compare fraction sizes resembles the way we compare natural numbers in some ways; for any two fractions, you can tell which is bigger or if they're the same. The way we compare fraction sizes does not resemble the way we compare sets by the subset relationship; there aren't pairs of fractions which are incomparable like {2} and {3} are.

    But they have differences too: any pair of distinct fractions has a fraction in between them, whereas natural numbers do not. Natural numbers come in lumps of 1, the smallest positive difference between two natural numbers. Fractions don't come in smallest differences at all.
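
    The "no smallest gap" point is easy to illustrate with Python's exact Fraction type: averaging two distinct fractions always lands strictly between them, and the step can be repeated forever.

```python
from fractions import Fraction

def between(p, q):
    """The average of two fractions is a fraction strictly between them."""
    return (p + q) / 2

a, b = Fraction(1, 3), Fraction(1, 2)
m = between(a, b)                 # 5/12
assert a < m < b                  # a fraction strictly in between
# Repeating the step never bottoms out: there is no smallest positive gap.
assert a < between(a, m) < m
```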

    It's also pretty clear that there's more than one way of ordering things; the kind of ordering sets have by the subset relation is not the kind of ordering natural numbers (or fractions) have by the usual way we compare them.

    What this suggests is that it's important to make axioms for the different types of ordering relationships as suits our interest. One intuitive property the ordering on the natural numbers has but the subset one lacks is that the ordering on the natural numbers appears to represent something about magnitudinal quantities; the mathematical objects numbers /are/ are sizes, and those sizes can be compared.

    Whereas, the subset relation breaks this intuition; it doesn't represent comparison of a magnitudinal property, it represents something like nesting or containedness. The UK contains Scotland and England and Northern Ireland, but Scotland doesn't contain England or Northern Ireland, England doesn't contain Scotland or Northern Ireland, and Northern Ireland doesn't contain Scotland or England; Scotland, England and Northern Ireland can only be compared to the UK in that manner, and not to each other. Another example is that Eukaryotes contain Animals and Fungi, but Animals don't contain Fungi and Fungi don't contain Animals.

    What properties then characterise the ordering of naturals and fractions, and distinguish them from the ordering of province and country or the ordering of classifications of biological kinds?
  • Pfhorrest
    4.6k
    Interesting, I feel like we're about to get to something I might not already be familiar with.

    Also of interest, the next essay in my Codex Quarentis series of threads will be closely related to this thread, since the writing of it was the inspiration for creating this thread. I'd love your input on it when I post it early next week.
  • Pfhorrest
    4.6k
    Resurrecting this thread to get feedback on something I've written since, basically my own summary of what I hoped this thread would produce, and I'd love some feedback on the correctness of it from people better-versed than me in mathematics or physics:

    Mathematics is essentially just the application of pure logic: a mathematical object is defined by fiat as whatever obeys some specified rules, and then the logical implications of that definition, and the relations of those kinds of objects to each other, are explored in the working practice of mathematics. Numbers are just one such kind of object, and there are many others, but in contemporary mathematics, all of those structures have since been grounded in sets.

    The natural numbers, for instance, meaning the counting numbers {0, 1, 2, 3, ...}, are easily defined in terms of sets. First we define a series of sets, starting with the empty set, and then a set that only contains that one empty set, and then a set that only contains those two preceding sets, and then a set that contains only those three preceding sets, and so on, at each step of the series defining the next set as the union of the previous set and a set containing only that previous set. We can then define some set operations (which I won't detail here) that relate those sets in that series to each other in the same way that the arithmetic operations of addition and multiplication relate natural numbers to each other.
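
    This construction is concrete enough to run. A sketch in Python, using frozenset so that sets can be elements of sets (the function name is mine):

```python
def von_neumann(n):
    """Build the nth set of the series: each next set is the union of the
    previous set and a set containing only that previous set."""
    current = frozenset()              # "zero" is the empty set
    for _ in range(n):
        current = current | frozenset([current])
    return current

zero, one, two, three = (von_neumann(n) for n in range(4))
assert zero == frozenset()
assert one == frozenset([zero])             # {emptyset}
assert two == frozenset([zero, one])        # {emptyset, {emptyset}}
assert three == frozenset([zero, one, two])
assert len(three) == 3                      # "three" has exactly three elements
```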

    We could name those sets and those operations however we like, but if we name the series of sets "zero", "one", "two", "three", and so on, and name those operations "addition" and "multiplication", then when we talk about those operations on that series of sets, there is no way to tell if we are just talking about some made-up operations on a made-up series of sets, or if we were talking about actual addition and multiplication on actual natural numbers: all of the same things would be necessarily true in both cases, e.g. doing the set operation we called "addition" on the set we called "two" and another copy of that set called "two" creates the set that we called "four". Because these sets and these operations on them are fundamentally indistinguishable from addition and multiplication on numbers, they are functionally identical: those operations on those sets just are the same thing as addition and multiplication on the natural numbers.

    All kinds of mathematical structures, by which I don't just mean a whole lot of different mathematical structures but literally every mathematical structure studied in mathematics today, can be built up out of sets this way. The integers, or whole numbers, can be built out of the natural numbers (which are built out of sets) as equivalence classes (a kind of set) of ordered pairs (a kind of set) of natural numbers, meaning in short that each integer is identical to some set of equivalent ordered pairs of natural numbers, two such pairs counting as equivalent when subtracting the second number from the first leaves the same result: the integers are all the things you can get by subtracting one natural number from another. Similarly, the rational numbers can be defined as equivalence classes of ordered pairs of integers in a way that means that the rationals are the things you can get by dividing one integer by another.
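
    For the integer case, the equivalence on ordered pairs can be stated entirely with addition of naturals, since a - b = c - d rearranges to a + d = b + c. A small Python sketch (names mine):

```python
def int_equiv(p, q):
    """(a,b) represents a-b; (a,b) ~ (c,d) iff a+d == b+c
    (rearranged from a-b == c-d so only addition of naturals is needed)."""
    (a, b), (c, d) = p, q
    return a + d == b + c

# All of these pairs are representations of the integer -1:
assert int_equiv((0, 1), (2, 3))
assert int_equiv((2, 3), (10, 11))
# But (1, 0), i.e. +1, is not equivalent to (0, 1), i.e. -1:
assert not int_equiv((1, 0), (0, 1))
```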

    The real numbers, including irrational numbers like pi and the square root of 2, can be constructed out of sets of rational numbers in a process too complicated to detail here (something called a Dedekind-complete ordered field, where a field is itself a kind of set). The complex numbers, including things like the square root of negative one, can be constructed out of ordered pairs of real numbers; and further hypercomplex numbers, including things called quaternions and octonions, can be built out of larger ordered sets of real numbers, which are built out of complicated sets of rational numbers, which are built out of sets of integers, which are built out of sets of natural numbers, which are built out of sets built out of sets of just the empty set. So from nothing but the empty set, we can build up to all complicated manner of fancy numbers.

    But it is not just numbers that can be built out of sets. For example, all manner of geometric objects are also built out of sets as well. All abstract geometric objects can be reduced to sets of abstract geometric points, and a kind of function called a coordinate system maps such sets of points onto sets of numbers in a one-to-one manner, which is hence reversible: a coordinate system can be seen as turning sets of numbers into sets of points as well. For example, the set of real numbers can be mapped onto the usual kind of straight, continuous line considered in elementary geometry, and so the real numbers can be considered to form such a line; similarly, the complex numbers can be considered to form a flat, continuous plane. Different coordinate systems can map different numbers to different points without changing any features of the resulting geometric object, so the points, of which all geometric objects are built, can be considered the equivalence classes (a kind of set) of all the numbers (also made of sets) that any possible coordinate system could map to them. Things like lines and planes are examples of the more general type of object called a space.

    Spaces can be very different in nature depending on exactly how they are constructed, but a space that locally resembles the usual kind of straight and flat spaces we intuitively speak of (called Euclidean spaces) is an object called a manifold, and such a space that, like the real number line and the complex number plane, is continuous in the way required to do calculus on it, is called a differentiable manifold. Such a differentiable manifold is basically just a slight generalization of the usual kind of flat, continuous space we intuitively think of space as being, and it, as shown, can be built entirely out of sets of sets of ultimately empty sets.

    Meanwhile, a special type of set defined such that any two elements in it can be combined through some operation to produce a third element of it, in a way obeying a few rules that I won't detail here, constitutes a mathematical object called a group. A differentiable manifold, being a set, can also be a group, if it follows the rules that define a group, and when it does, that is called a Lie group. Also meanwhile, another special kind of set whose members can be sorted into a two-dimensional array constitutes a mathematical object called a matrix, which can be treated in many ways like a fancy kind of number that can be added, multiplied, etc.

    A square matrix (one with its dimensions being of equal length) of complex numbers that obeys some other rules that I once again won't detail here is called a unitary matrix. Matrices can be the "numbers" that make up a geometric space, including a differentiable manifold, including a Lie group, and when a Lie group is made of unitary matrices, it constitutes a unitary group. And lastly, a unitary group that obeys another rule I won't bother detailing here is called a special unitary group. This makes a special unitary group essentially a space of the kind we would intuitively expect a space to be like — locally flat-ish, smooth and continuous, etc — but where every point in that space is a particular kind of square matrix of complex numbers, that all obey certain rules under certain operations on them, with different kinds of special unitary groups being made of matrices of different sizes.
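
    As a small check of the definitions rather than the physics: any 2x2 complex matrix of the form [[α, -β̄], [β, ᾱ]] with |α|² + |β|² = 1 is unitary with determinant 1, i.e. an element of SU(2). A Python sketch (all helper names are mine):

```python
def su2(alpha, beta):
    """The 2x2 matrix [[alpha, -conj(beta)], [beta, conj(alpha)]];
    it lies in SU(2) when |alpha|^2 + |beta|^2 = 1."""
    return [[alpha, -beta.conjugate()],
            [beta,  alpha.conjugate()]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    """Conjugate transpose."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

U = su2(0.6 + 0j, 0.8j)               # |0.6|^2 + |0.8i|^2 = 1
P = matmul(U, dagger(U))
assert abs(P[0][0] - 1) < 1e-9 and abs(P[1][1] - 1) < 1e-9   # U U† = I
assert abs(P[0][1]) < 1e-9 and abs(P[1][0]) < 1e-9
assert abs(det(U) - 1) < 1e-9                                # det U = 1
```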

    I have hastily recounted here the construction of this specific and complicated mathematical object, the special unitary group, out of bare, empty sets, because that special unitary group is considered by contemporary theories of physics to be the fundamental kind of thing that the most elementary physical objects, quantum fields, are literally made of.
  • jgill
    3.9k
    Different coordinate systems can map different numbers to different points without changing any features of the resulting geometric object
    Pfhorrest

    You probably need to qualify this. Take the circle x^2+y^2=1 in the standard Euclidean plane and lengthen the scale on the x-axis, so that the circle becomes an ellipse. That's a "different coordinate system".

    Amazing how the physicists can use those groups. :cool:
  • Kenosha Kid
    3.2k
    I wish I had this site when I was at school, because I suspect that, with the right wording, you could make @fdrake do a lot of your homework.

    Interesting thread. Ambitious too.
  • Pfhorrest
    4.6k
    You probably need to qualify this. Take the circle x^2+y^2=1 in the standard Euclidean plane and lengthen the scale on the x-axis, so that the circle becomes an ellipse. That's a "different coordinate system".
    jgill

    I'm not sure I understand what you mean here. You can describe the same circle with different coordinates in a distorted coordinate system like that; or you can describe a different ellipse with the same coordinates in that different coordinate system. I'm not sure which of those scenarios you're referring to.

    I'm talking about the first scenario: the circle is unchanged, but different numbers map to its points in the different coordinate system, and the points can each be considered the equivalence classes of all different sets of numbers that can map to them from all coordinate systems.

    I wish I had this site when I was at school, because I suspect that, with the right wording, you could make fdrake do a lot of your homework.
    Kenosha Kid

    Yeah @fdrake is awesome and I would love to see him continue what he was doing in this thread; or for someone else to take over where he left off.
  • jgill
    3.9k
    GL(2,C) is a Lie group corresponding to the group of linear fractional transformations (LFTs). When you matrix-multiply two such 2×2 creatures it's the same as composing one transformation with the other. Each such transformation maps circles to circles in C (a line is construed to be an infinite circle), so these transformations preserve those geometric objects in the complex plane. Otherwise, in the complex plane simple translations and rotations preserve geometric figures. I've done lots of exploring of the properties of LFTs, but all of it as analytic or meromorphic functions from an analytic rather than an algebraic perspective. Very old fashioned of me.

    @fdrake is far more up to date and knowledgeable.
  • Kenosha Kid
    3.2k
    Yeah fdrake is awesome and I would love to see him continue what he was doing in this thread; or for someone else to take over where he left off.
    Pfhorrest

    I can do some more of the basic axiomatic maths, but I've been cheating and looking ahead at axiomatic QFT and decided that I really need to study more mathematics. I don't know C*-algebra from a 32C-wonderbra
  • fdrake
    6.7k
    I wish I had this site when I was at school, because I suspect that, with the right wording, you could make fdrake do a lot of your homework.
    Kenosha Kid

    Lots of people asked me to help them with their homework when I was at school and uni! Never could say outright no to it. Unless someone wanted me to do it for them rather than get help learning it. Have no sympathy for the first motivation.
  • fdrake
    6.7k
    Interesting thread. Ambitious too.
    Kenosha Kid

    Yeah fdrake is awesome and I would love to see him continue what he was doing in this thread; or for someone else to take over where he left off.
    Pfhorrest

    Thank you!

    I've been thinking about going back to it for some time now. I probably will. It would be nice to have one way of telling the story in one place.
  • fdrake
    6.7k
    I can do some more of the basic axiomatic maths, but I've been cheating and looking ahead at axiomatic QFT and decided that I really need to study more mathematics. I don't know C*-algebra from a 32C-wonderbra
    Kenosha Kid

    I only really know the story up until "this is a differential equation with real variables". Complex analysis stuff and actual physics is mostly above my pay grade. It'd be pretty cool if @Kenosha Kid, @jgill and I could actually get up to something like the Schrodinger equation rather than f=ma, which is what I planned to stop at (but stopped just short of the ordering of the reals).
  • Kenosha Kid
    3.2k
    I think we can limit complex analysis to that which is needed to prove the relativistic Hohenberg-Kohn theorems and maybe get from the real observables of axiomatic QFT to the real densities of density functional theory with only complex conjugates. From then on, any complex wavefunction can just be replaced by the time-dependent charge and current densities in principle. Would that be sufficient?
  • fdrake
    6.7k


    Would that be sufficient?
    Kenosha Kid

    There should be a sound effect for a field flying over another mathematician's head.

    I have no idea!
  • dussias
    52


    I'm a huge fan of discrete maths; I know your proof by heart!

    Have you thought about machine learning?

    The models can predict our behavior, meaning that their inner logic, which is always based on math, tries to accomplish what you suggest.

    The problem is that they're not explicit/comprehensible by humans. They're sort of black boxes, like functions.

    What do you think about this?
  • Kenosha Kid
    3.2k
    I meant, would it be easier if we only needed conjugation from complex analysis, then I remembered that pretty much everything in QFT is commutators. Still... that's not too difficult either.

    I know how to get from simple fields to wavefunctions and from densities to wavefunctions uniquely -- that's simple enough.
  • fdrake
    6.7k
    I know how to get from simple fields to wavefunctions and from densities to wavefunctions uniquely -- that's simple enough.
    Kenosha Kid

    Are you talking physics field or mathematics field? Field as a mapping from, say, the plane to vectors in the plane (physics field), or field as a commutative ring with multiplicative inverses (mathematics field)?
  • Kenosha Kid
    3.2k
    Sorry, specifically a quantum field.
  • Pfhorrest
    4.6k
    Machine learning is fascinating, but kind of orthogonal to what I’m hoping to accomplish in this thread. We have a black box example of all of this stuff available already: the actual universe. I want to sketch out a rough version of what’s going on in that black box, stepping up through sets, numbers, spaces, differentiable manifolds, Lie groups, quantum fields, particles, chemicals, cells, organisms, etc.
  • Pfhorrest
    4.6k
    @fdrake @Kenosha Kid @jgill glad to see you all talking about collaborating on this! I feel like I have a very very superficial understanding of most of the steps along the way, and you all have such deeper knowledge in your respective ways that I would love to see spelled out here.

    I’m especially curious about the steps between SU groups, and excitations of quantum fields into particles. Like, I have a vague understanding that an SU group is a kind of space with a particular sort of matrix of complex numbers at every point in it, and I imagine that that space is identified with a quantum field and so what’s happening in those matrices at each point is what’s happening with the quantum field at that point, but I have no clue what the actual relationship between the quantum field states and the complex matrices is.
  • dussias
    52
    What’s going on in that black box, stepping up through sets, numbers, spaces, differentiable manifolds, Lie groups, quantum fields, particles, chemicals, cells, organisms, etc.
    Pfhorrest

    You've said it yourself. Math is our ever-evolving language for modeling anything.

    If you really want to get down to detail, I suggest focusing on particular areas.

    For example, language is an interesting one.


    • A language 'L' is defined as a set of symbols (elements) and their possible combinations.
    • A grammar 'G' is a subset of a language's combinations.

    And you build from there.

    Chomsky worked hard on this theory, but I'm no expert. What I can attest to is art and creation, but I'm not ready at this moment to lay out a fitting mathematical approach.

    I wish you the best of fates!
  • fdrake
    6.7k


    So you'd need linear operators on vector spaces, differentiation+integration, complex numbers... If I gave you definitions of those things, would you be able to do what you needed to do with them? Can you do the thing where you go from linear operators on vector spaces to linear operators on modules if required?

    The vector space construction needs mathematical fields (commutative rings with multiplicative inverses), which needs groups.
  • fdrake
    6.7k
    And you build from there.
    dussias

    That's roughly where I started - gesturing in the direction of formal languages (production rules on a collection of strings).
  • jgill
    3.9k
    I'm picking up tiny tidbits of math here that relate to topics I am familiar with, but it's all pretty fuzzy. For instance, the matrix multiplication in SU(2) corresponds to compositions of bilinear transforms - as I mentioned before. To elaborate, the SU(2) matrix

    [ α  -β̄ ]
    [ β   ᾱ ]

    corresponds to the bilinear transform

    z ↦ (αz - β̄)/(βz + ᾱ)

    and multiplication of two such matrices corresponds to the composition of their transforms.
    edit: But the matrix represents a quaternion, primarily. I.e., extending the complex numbers to a higher dimension. In the above, alpha=a+bi and beta=-c+di to give the quaternion a+bi+cj+dk.
    SU(2) seems to associate with spin. The multiplication of two such 2×2 matrices may give the Hamilton product, or maybe not. Too much work involved.
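
    On the "maybe not": it can be checked directly. With alpha = a+bi and beta = -c+di as above, [[alpha, -conj(beta)], [beta, conj(alpha)]] is one standard embedding of the quaternion a+bi+cj+dk (conventions vary), and matrix multiplication does reproduce the Hamilton product. A Python sketch; the function names are mine:

```python
def quat_to_matrix(a, b, c, d):
    """Embed a+bi+cj+dk as [[a+bi, c+di], [-c+di, a-bi]], i.e.
    [[alpha, -conj(beta)], [beta, conj(alpha)]] with alpha=a+bi, beta=-c+di."""
    return [[complex(a, b), complex(c, d)],
            [complex(-c, d), complex(a, -b)]]

def hamilton(p, q):
    """Hamilton product of quaternions given as (a, b, c, d) tuples."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Matrix multiplication of the embeddings matches the Hamilton product:
p, q = (1, 2, 3, 4), (5, 6, 7, 8)
assert matmul(quat_to_matrix(*p), quat_to_matrix(*q)) == quat_to_matrix(*hamilton(p, q))
assert hamilton((0, 1, 0, 0), (0, 0, 1, 0)) == (0, 0, 0, 1)   # i * j = k
```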
  • Kenosha Kid
    3.2k
    So you'd need linear operators on vector spaces, differentiation+integration, complex numbers... If I gave you definitions of those things, would you be able to to do what you needed to do with them? Can you do the thing where you go from linear operators on vector spaces to linear operators on modules if required?

    The vector space construction needs mathematical fields (commutative rings with multiplicative inverses), which needs groups.
    fdrake

    Yes to all of the above. I'm still catching up, I can do complex numbers if need be, I've done it before. The other thing we need is category theory. I don't think we need to do any integrals but we need to know about them. I guess a good aim atm is continuum mathematics.
  • fdrake
    6.7k
    I guess a good aim atm is continuum mathematics.
    Kenosha Kid

    I'll be in quarantine for a couple of weeks soon. I shall try and get the field of real numbers with its order defined in that time.
