## Attempt at an intuitive explanation (ELI12) for the weirdest logic theorem ever (Gödel-Carnap)

• 702
Therefore we cannot prove that a statement which cannot be proven in this axiomatic system is semantically true, and, to the extent of our logical method, this new axiomatic system can never be proven incomplete.

I think that there are two remarks that are possibly relevant in this context.

First. These "true but unprovable" statements will appear to us as simply undecidable. Indeed, how are we even going to decide that they are "true" if we can prove neither the statement nor its negation? Hence, the practical qualification will not be "true but unprovable" but "undecidable".

Second. Number theory (by default: Dedekind-Peano) is a sub-theory of set theory (by default: ZFC), meaning that every number-theoretical theorem can be proven in set theory, but not the other way around. Now, there really are number-theoretical theorems that are fundamentally undecidable in number theory (=DP), but provable in the larger set theory (=ZFC):

Undecidable statements provable in larger systems

These are natural mathematical equivalents of the Gödel "true but undecidable" sentence. They can be proved in a larger system which is generally accepted as a valid form of reasoning, but are undecidable in a more limited system such as Peano Arithmetic.

In 1977, Paris and Harrington proved that the Paris–Harrington principle, a version of the infinite Ramsey theorem, is undecidable in (first-order) Peano arithmetic, but can be proved in the stronger system of second-order arithmetic. Kirby and Paris later showed that Goodstein's theorem, a statement about sequences of natural numbers somewhat simpler than the Paris–Harrington principle, is also undecidable in Peano arithmetic.

Kruskal's tree theorem, which has applications in computer science, is also undecidable from Peano arithmetic but provable in set theory. In fact Kruskal's tree theorem (or its finite form) is undecidable in a much stronger system codifying the principles acceptable based on a philosophy of mathematics called predicativism. The related but more general graph minor theorem (2003) has consequences for computational complexity theory.
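Goodstein's theorem is a convenient example because the sequence itself is entirely computable; only the claim that every such sequence eventually reaches 0 escapes PA. A minimal sketch (Python; the function names are my own, purely illustrative):

```python
def rewrite(n, base, new_base):
    # Rewrite n from hereditary base-`base` notation into base-`new_base`:
    # every digit stays, but the base (including the bases appearing inside
    # exponents) is replaced recursively.
    if n == 0:
        return 0
    total, exponent = 0, 0
    while n > 0:
        digit = n % base
        total += digit * new_base ** rewrite(exponent, base, new_base)
        n //= base
        exponent += 1
    return total

def goodstein(n, max_steps):
    # One Goodstein step: bump the base in hereditary notation, subtract 1.
    seq, base = [n], 2
    for _ in range(max_steps):
        if n == 0:
            break
        n = rewrite(n, base, base + 1) - 1
        base += 1
        seq.append(n)
    return seq

goodstein(3, 10)   # [3, 3, 3, 2, 1, 0] -- reaches 0 after five steps
```

Starting from 4 the sequence already grows to astronomical values before (provably in stronger systems, but not in PA) coming back down to 0.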

Now, we may not have fundamentally stronger theories than set theory (=ZFC). In other words, there may not be a larger theory of which set theory is just a sub-theory. Therefore, when a theorem is undecidable in set theory, it may even be absolutely undecidable. One reason why we may not be able to go higher or further than ZFC is that ZFC is already at the level of Turing completeness: it can compute everything that is computable and can therefore represent all knowledge that can be expressed in language. ZFC is possibly even stronger than that:

What is the relationship between ZFC and a Turing machine?

In the other direction, a universal Turing machine can be encoded in ZFC, indeed in far weaker theories, such as relatively small fragments of number theory.
...
Suppose, for reductio, that a Turing machine could decide whether the sentence with Gödel number n is a theorem of ZFC ... Now, by the diagonalization lemma, there would be a Gödel-style sentence γ such that ... Contradiction is immediate. [meaning: ZFC is stronger than Turing-complete]
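The diagonal step in that argument can be sketched directly. Assume, hypothetically, that some total function halts(prog, arg) decided halting (which a decider for ZFC-theoremhood would give us, since halting is expressible in ZFC); the following construction then refutes it:

```python
def make_diagonal(halts):
    # Given a claimed total halting decider, build the program that
    # does the opposite of whatever the decider predicts about it.
    def diagonal(prog):
        if halts(prog, prog):   # decider says: halts on itself...
            while True:         # ...so loop forever instead
                pass
        return "halted"         # decider says: loops forever... so halt
    return diagonal

# Any concrete candidate decider is refuted on its own diagonal program:
candidate = lambda prog, arg: False      # hypothetical (and wrong) decider
diagonal = make_diagonal(candidate)
diagonal(diagonal)                       # returns "halted", contradicting the verdict
```

Whatever verdict a candidate gives about the diagonal program applied to itself, the program does the opposite, so no correct total decider can exist.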

On the other hand, Turing Completeness is already a seriously constraining limit:

All known laws of physics have consequences that are computable by a series of approximations on a digital computer. A hypothesis called digital physics states that this is no accident because the universe itself is computable on a universal Turing machine. This would imply that no computer more powerful than a universal Turing machine can be built physically.

This does not mean that there are no attempts at increasing the strength of ZFC by adding new axioms. The web is plastered with such attempts.

Still, successfully arguing that a new axiom is independent of the existing axioms of ZFC is a non-trivial job, given the strength of ZFC. After that, one still needs to prove, from this extended system, a theorem that is provably undecidable in ZFC, such as the Continuum Hypothesis. That job is certainly not easy ...
• 78

Thanks for the response. I'm sorry I couldn't respond earlier.

These "true but unprovable" statements will appear to us as simply undecidable.
I couldn't express myself properly, but this is what I meant. I was attempting to construct an axiomatic system whose undecidable sentences are not amenable to further evaluation by means of metalogic. The idea was to start with an incomplete axiomatic system and to import proofs of undecidable statements by continuously adding meta-axioms. There were problems with this approach.

Number theory (by default: Dedekind-Peano) is a sub-theory of set theory (by default: ZFC), meaning that every number-theoretical theorem can be proven in set theory, but not the other way around. Now, there really are number-theoretical theorems that are fundamentally undecidable in number theory (=DP), but provable in the larger set theory (=ZFC):
If I understand you correctly, there are axiomatic systems which preserve the truth values of all statements of Peano arithmetic, but make previously undecidable statements decidable? It is a curious fact. So, we don't actually have an axiomatic system yet, in which all statements of Peano arithmetic are decidable?

Now, we may not have fundamentally stronger theories than set theory (=ZFC). In other words, there may not be a larger theory of which set theory is just a sub-theory. Therefore, when a theorem is undecidable in set theory, it may even be absolutely undecidable.
Just thinking out loud, I wonder whether we would know if ZFC is semantically complete or not, if it were deductively incomplete? And what would be our course of action if we can't know? And on a related note, I wonder whether a theorem with the same quality as Chaitin's constant for Turing machines can be discovered for ZFC - having definite truth value, but no proof in logic or meta-logic.

One reason why we may not be able to go higher or further than ZFC is that ZFC is already at the level of Turing completeness: it can compute everything that is computable and can therefore represent all knowledge that can be expressed in language. ZFC is possibly even stronger than that
Yes, I think that it is also intuitive that ZFC cannot be weaker than a universal Turing machine. I also thought that it cannot be stronger, since non-deterministic Turing machines are computationally equivalent to their deterministic counterparts and should be able to simulate any deductive system. I mean - that was my intuition anyhow.

On the other hand, Turing Completeness is already a seriously constraining limit:

All known laws of physics have consequences that are computable by a series of approximations on a digital computer. A hypothesis called digital physics states that this is no accident because the universe itself is computable on a universal Turing machine. This would imply that no computer more powerful than a universal Turing machine can be built physically.
I have wondered about this. I am not intimately familiar with the hypothesis, but it becomes ever more obvious that the material and abstract sciences are intertwined. There are some implicit assumptions here, like discrete finite state and determinism. If I recall correctly, non-determinism cannot produce more computability. But it can produce more outcomes, and I cannot tell whether this can contribute to nature somehow. Analog or infinite state may be unfeasible, as per QM. The hypothesis does depend on a number of assumptions, but may be true nonetheless.

Thanks again for the response. It is thought provoking.
• 702
If I understand you correctly, there are axiomatic systems which preserve the truth values of all statements of Peano arithmetic, but make previously undecidable statements decidable?

By the way, there is a problem with the terminology. Decidability is about computability:

Decidability should not be confused with completeness. For example, the theory of algebraically closed fields is decidable but incomplete, whereas the set of all true first-order statements about nonnegative integers in the language with + and × is complete but undecidable. Unfortunately, as a terminological ambiguity, the term "undecidable statement" is sometimes used as a synonym for independent statement.

In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is proved to be impossible to construct an algorithm that always leads to a correct yes-or-no answer.

In mathematical logic, independence is the unprovability of a sentence from other sentences. A sentence σ is independent of a given first-order theory T if T neither proves nor refutes σ; that is, it is impossible to prove σ from T, and it is also impossible to prove from T that σ is false. Sometimes, σ is said (synonymously) to be undecidable from T; this is not the same meaning of "decidability" as in a decision problem.

There seems to be a preference to use the term "independent" (mathematics) instead of "undecidable" (computer science) for the "unprovability" of a statement or its negation. The problem is caused by the fact that these terms emerged from two different domains but are closely related. Given the Curry-Howard Correspondence, undecidability and independence overlap much more strongly than these commentators suggest:

In programming language theory and proof theory, the Curry–Howard correspondence (also known as the Curry–Howard isomorphism or equivalence, or the proofs-as-programs and propositions- or formulae-as-types interpretation) is the direct relationship between computer programs and mathematical proofs.

The Curry–Howard correspondence is the observation that two families of seemingly unrelated formalisms—namely, the proof systems on one hand, and the models of computation on the other—are in fact the same kind of mathematical objects. If one abstracts on the peculiarities of either formalism, the following generalization arises: a proof is a program, and the formula it proves is the type for the program.

The difference between "undecidable" and "independent" seems to depend mostly "on the peculiarities of either formalism" rather than on truly fundamental issues.
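A concrete glimpse of the correspondence, using Python type hints (the names are mine, purely illustrative): a term of type (A → B) × A → B is precisely a proof of the proposition ((A ⇒ B) ∧ A) ⇒ B.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Modus ponens: from (A implies B) and A, conclude B.
# The function *is* the proof; its type is the proposition it proves.
def modus_ponens(f: Callable[[A], B], a: A) -> B:
    return f(a)

# Transitivity of implication is simply function composition.
def syllogism(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))
```

In this reading, a proposition with no proof corresponds to a type with no closed term inhabiting it.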

The canonical example of a theorem not provable in Peano arithmetic but in ZFC is:

Kruskal's tree theorem, which has applications in computer science, is also undecidable from Peano arithmetic but provable in set theory.

In fact, the commentators below argue that this does not mean that ZFC is "stronger" than PA. The fact that ZFC accepts the theorem while PA doesn't is rather a trust issue.

So, we don't actually have an axiomatic system yet, in which all statements of Peano arithmetic are decidable?

I have found the following (formal) thought exercise on exactly this matter. The author calls such encompassing axiomatic system a "provability oracle for Peano arithmetic":

So having a provability oracle for PA or any other consistent formal system that proves some valid arithmetic truths (like ZFC) is equivalent to having a halting oracle, and thus leads to a provability oracle for any other formal system. In other words, if you knew all about the logical implications of PA, then you would also know all about the logical implications of ZFC and all other formal systems. Hee hee.

PA can see everybody else's theorems as Gödel numbers. So, it actually "knows" everything that these other systems know.
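The mechanism is plain arithmetic: a Gödel numbering packs any finite string of symbols into a single natural number, so statements about formulas become statements about numbers. A toy version (the symbol codes are arbitrary, illustrative choices):

```python
def primes():
    # Generate 2, 3, 5, 7, ... by trial division.
    n = 2
    while True:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def goedel_number(formula, code):
    # Classic encoding: the i-th symbol s contributes p_i ** code[s],
    # where p_i is the i-th prime; unique factorization makes the
    # encoding invertible.
    g = 1
    for p, symbol in zip(primes(), formula):
        g *= p ** code[symbol]
    return g

code = {"0": 1, "=": 2, "S": 3}   # toy codes for a tiny alphabet
goedel_number("0=0", code)        # 2**1 * 3**2 * 5**1 = 90
```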

Then, I found the following intriguing remark as a comment to the article on the provability oracle for PA:

More powerful formal systems are not more powerful because they know more stuff. They're more powerful because they have more confidence. PA proves that PA+Con(PA) proves Con(PA) (the proof, presumably, is not very long), but PA does not trust PA+Con(PA) to be accurate. ZFC is even more confident - has even fewer models - and so is even stronger. (Maybe it even has no models!)

ZFC is not more powerful than PA because it knows more stuff. In fact, that would not be possible. Such a theorem (as well as its proof) would still be a Gödel number, which PA can also reach. Therefore, PA actually "knows" all possible theorems (and their proofs) that can be expressed in language. Therefore, it also knows all the axioms and all theorems (including their proofs) of all the other systems. The problem is that PA does not "trust" them.

Therefore, ZFC does not "know" more than PA. It just trusts more. In some ways, that is not really "stronger" or "more powerful". It could just be that ZFC is more gullible.
• 78
By the way, there is a problem with the terminology. Decidability is about computability:
...
...
The difference between "undecidable" and "independent" seems to depend mostly "on the peculiarities of either formalism" rather than on truly fundamental issues.
I think I understand the nuance. There are undecidable deductively complete theories, so claiming that an undecidable statement makes a theory deductively incomplete sounds strange. Saying that the statement is independent from the axioms, purely as a vocabulary, reduces the confusion.

I will also confess - I realized what a profound misunderstanding of Gödel's semantic incompleteness I had. I believed that the theorem states that a valid (i.e. true in every model) statement can be independent from the axioms, which indeed would be exciting. Instead, it only states that a satisfiable statement can be independent. (Edit: And that such a statement always exists for powerful enough theories.) The Gödel completeness theorem actually states the contrary of my prior understanding: that for first-order theories, validity and provability are the same. What a terrible confusion on my end. :)

Which makes me wonder: since the semantic incompleteness proof holds for the standard model, what is the "intended" interpretation for other theories? Is there a criterion for a model being standard in general? For example, in denotational semantics of programming languages, a minimal interpretation is the least fixed point of a certain interpretation operator. I wonder whether this has an analogy in model theory - such as gradually building the interpretation domain using the compactness theorem, arriving at domain entities via Skolemization or something.

I also couldn't clear up whether ZFC is decidable, undecidable, or not yet established. Wikipedia indicates that there are only decidable sublanguages, while a stackexchange answer indicates that ZFC is recursively enumerable, which, if μ-recursively enumerable, should mean that ZFC is decidable by Turing machines.

Given the Curry-Howard Correspondence, undecidability and independence overlap much more strongly than suggested
I find the Curry-Howard Correspondence a little strange. I'm sure it makes sense, but likening axioms to a pre-execution invariant and theorems to a post-execution invariant appears complicated. It may have something to do with formal verification processes, but for me, the relationship between proofs and computation appears to be about enumeration of proofs by Turing machines in one direction, and the generation of booleans on the Turing tape for the proven theorems after every inference step in the other direction.

ZFC is not more powerful than PA because it knows more stuff. In fact, that would not be possible. Such a theorem (as well as its proof) would still be a Gödel number, which PA can also reach. Therefore, PA actually "knows" all possible theorems (and their proofs) that can be expressed in language. Therefore, it also knows all the axioms and all theorems (including their proofs) of all the other systems. The problem is that PA does not "trust" them.
After clearing up my error in understanding, hopefully, I am not very surprised. Indeed, because of Gödel's completeness theorem, if a theory were able to prove a statement of PA that PA itself couldn't, it would have fewer models. The stronger theory would not simply be deductively stronger, but semantically stronger. I was mistakenly excited that a theory with equivalent semantics (one that is an expansion of PA) could cure PA's incompleteness. I was hoping that because the new axioms are not statements of PA itself, but their interpretations, they might result in some completely different configuration, without restricting the model space. But this is apparently not the case, at least for first-order logic.

Edit: correction about Gödel's completeness theorem - I used the term unsatisfiability, where I wanted to say not being valid or unsatisfiable (corrected now)
• 702
I also couldn't clear up whether ZFC is decidable, undecidable, or not yet established. Wikipedia indicates that there are only decidable sublanguages, while a stackexchange answer indicates that ZFC is recursively enumerable, which, if μ-recursively enumerable, should mean that ZFC is decidable by Turing machines.

ZFC is subject to the diagonal lemma, and therefore is syntactically incomplete. Hence, it must necessarily have undecidable statements that can be expressed in first-order logic.

Since the semantic incompleteness proof holds for the standard model, what is the "intended" interpretation for other theories?

Every legitimate answer that is true has a proof (semantic completeness), but there exist legitimate questions for which an answer cannot be determined (syntactic incompleteness).

Concerning the implications of Gödel's semantic completeness theorem for other systems than Peano arithmetic, we are talking about advanced model theory, which is an enormous subject in itself. I am not sure about the answer, really. I have never looked into it.

I find the Curry-Howard Correspondence a little strange. I'm sure it makes sense, but likening axioms to a pre-execution invariant and theorems to a post-execution invariant appears complicated. It may have something to do with formal verification processes, but for me, the relationship between proofs and computation appears to be about enumeration of proofs by Turing machines in one direction, and the generation of booleans on the Turing tape for the proven theorems after every inference step in the other direction.

In 1958 [Curry] observes that a certain kind of proof system, referred to as Hilbert-style deduction systems, coincides on some fragment to the typed fragment of a standard model of computation known as combinatory logic.[4]

In order to illustrate this understanding, it is clearly a question of producing good examples of Hilbert calculi as well as good examples of combinator calculi, along with an argument that maps an example in one formalism to an example in the other. It is a lot of work, but it should allow one to develop a deeper understanding of the nitty-gritty details of what exactly Curry meant to say.

It is interesting to work on such explanation by examples, but then again, who else would be interested in reading the results? This discussion on the diagonal lemma is probably already more than what many potential readers would want to handle ...
• 78
ZFC is subject to the diagonal lemma, and therefore is syntactically incomplete. Hence, it must necessarily have undecidable statements that can be expressed in first-order logic.
My understanding from the definition was that for a theory to be decidable, it is necessary to have effective enumeration of its theorems, not to have a theorem for every statement or its negation. In that sense, I thought, whether ZFC is decidable is a separate subject matter from its syntactical incompleteness.

Concerning the implications of Gödel's semantic completeness theorem for other systems than Peano arithmetic, we are talking about advanced model theory, which is an enormous subject in itself.
I understand, but just to clarify: I wasn't so much curious about Gödel's semantic completeness directly, as about its application to other axiomatic systems, i.e. examples of models that satisfy a statement that is independent of the axioms. I was wondering whether there exists a definition of some kind of "minimal" class of models for every theory. Because, as it stands, I am under the impression that semantic incompleteness says nothing that the syntactic incompleteness and semantic completeness theorems together don't say. If such models were defined and coincided with our "intuitive" definition of the interpretation of a theory (similarly to what denotational semantics offers for recursive programs), and Gödel's semantic incompleteness were shown to apply to such models (i.e. they always satisfied a statement that cannot be derived from the axioms), it would make the claim stand on its own.

It is interesting to work on such explanation by examples, but then again, who else would be interested in reading the results? This discussion on the diagonal lemma is probably already more than what many potential readers would want to handle ...
Absolutely. I will look into it on my own when I have the time. Logic is a pastime interest for me, so I have gaps in my understanding at the moment. But I am trying to cure them, a piece at a time.

Thanks for the responses.
• 702
My understanding from the definition was that for a theory to be decidable, it is necessary to have effective enumeration of its theorems, not to have a theorem for every statement or its negation.

Effective axiomatization. A formal system is said to be effectively axiomatized (also called effectively generated) if its set of theorems is a recursively enumerable set (Franzén 2005, p. 112). This means that there is a computer program that, in principle, could enumerate all the theorems of the system without listing any statements that are not theorems. Examples of effectively generated theories include Peano arithmetic and Zermelo–Fraenkel set theory (ZFC).

Completeness. A set of axioms is (syntactically, or negation-) complete if, for any statement in the axioms' language, that statement or its negation is provable from the axioms (Smith 2007, p. 24).

The issue of syntactic completeness only arises in theories that have an effective enumeration of their theorems:

The first incompleteness theorem states that no consistent system of axioms whose theorems can be listed by an effective procedure (i.e., an algorithm) is capable of proving all truths about the arithmetic of natural numbers. For any such consistent formal system, there will always be statements about natural numbers that are true, but that are unprovable within the system.

Hence, only theories with effective axiomatization can possibly be incomplete. Other theories are not even being considered in this context.
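The "effective procedure that enumerates all theorems" in the definitions above can be made concrete on a toy formal system, Hofstadter's MIU system (a standard classroom example, obviously not PA or ZFC): a breadth-first search from the axiom generates exactly the theorems.

```python
from collections import deque

def miu_theorems(limit):
    # Enumerate theorems of the MIU system: axiom "MI", rules
    # I: xI -> xIU, II: Mx -> Mxx, III: xIIIy -> xUy, IV: xUUy -> xy.
    # Every theorem eventually appears (recursive enumerability), but
    # this loop gives no way to answer "no" for a non-theorem.
    seen, queue, out = {"MI"}, deque(["MI"]), []
    while queue and len(out) < limit:
        s = queue.popleft()
        out.append(s)
        successors = []
        if s.endswith("I"):
            successors.append(s + "U")                     # rule I
        successors.append(s + s[1:])                       # rule II
        for i in range(len(s) - 2):
            if s[i:i + 3] == "III":
                successors.append(s[:i] + "U" + s[i + 3:]) # rule III
        for i in range(len(s) - 1):
            if s[i:i + 2] == "UU":
                successors.append(s[:i] + s[i + 2:])       # rule IV
        for t in successors:
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return out

miu_theorems(5)   # ['MI', 'MIU', 'MII', 'MIUIU', 'MIIU']
```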
• 78
I am not intimately familiar with the subject, but I think there is some issue with the Wikipedia article.

There may also be linguistic confusion again. Many internet sources speak of the undecidability of PA, talking about sentences of PA's language that cannot be evaluated. The point is - sentences of the underlying language, not theorems of the theory, which implies that particular model semantics are involved. Otherwise, sentences that are not theorems (edit: themselves or their negations) don't have a definite truth value in a first-order theory, as per the completeness theorem. The Wikipedia definition of decidability clearly talks about theorems only:
A theory (set of sentences closed under logical consequence) in a fixed logical system is decidable if there is an effective method for determining whether arbitrary formulas are included in the theory.

Further in the decidability article, it is clearly stated:
Decidability should not be confused with completeness. For example, the theory of algebraically closed fields is decidable but incomplete, whereas the set of all true first-order statements about nonnegative integers in the language with + and × is complete but undecidable.

Thus neither property implies the other. I should mention that I did confuse the issue somewhat, because recursive enumerability (i.e. effective axiomatization, semidecidability) is actually a weaker property, which does not answer negatively for statements not in the theory. Actual decidability requires both negative and positive determination by an effective method. Semidecidability is defined later in the same article. Even so, decidable but incomplete theories are also semidecidable but incomplete ones. Thus, the statement holds that semidecidability does not imply completeness.

Edit: From reading on the web, I see that there is an important caveat. Every recursively axiomatizable theory is recursively enumerable. And every recursively enumerable syntactically complete theory is decidable. A complete theory may be undecidable, but for "sensible" theories (with effective axiomatization), completeness does imply decidability. So, as you stated, if ZFC were complete it would be decidable. But since it is not, does my original question - whether it is decidable or not - still stand? (That is, considering that it is effectively axiomatized, if that matters.)
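The standard argument behind "recursively enumerable + negation-complete ⇒ decidable" is just a search through the enumeration: walk it until either the sentence or its negation appears; completeness guarantees one of them does. A sketch on a made-up toy theory (illustrative only, obviously not PA):

```python
def decide(sentence, negation, theorems):
    # For a consistent, negation-complete, effectively axiomatized theory,
    # this always terminates: one of the two sentences must be enumerated.
    for theorem in theorems():
        if theorem == sentence:
            return True
        if theorem == negation:
            return False

def toy_theorems():
    # Toy "complete theory": its theorems are exactly the true parity
    # facts about 0, 1, 2, ... (hypothetical, for illustration only).
    n = 0
    while True:
        yield f"{n} is even" if n % 2 == 0 else f"{n} is odd"
        n += 1

decide("4 is even", "4 is odd", toy_theorems)   # True
decide("3 is even", "3 is odd", toy_theorems)   # False
```

For an incomplete theory the same loop may run forever on an independent sentence, which is exactly the semidecidability gap.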

Completeness. A set of axioms is (syntactically, or negation-) complete if, for any statement in the axioms' language, that statement or its negation is provable from the axioms (Smith 2007, p. 24).
From the definition you quoted, a complete axiomatic theory has a proof for either the statement or its negation. That does not mean that all such proofs or statements can be enumerated by a single effective procedure simultaneously. (PS. According to the previous edit I made, the answer is yes, for the more conventionally axiomatized theories.)

The summary of Gödel's first incompleteness theorem in the introduction to the Wikipedia article (which you quoted) seems to talk about validity ("capable of proving all truths"), which is not the trademark of the first theorem, but of the second one. And the account differs from the formulation in the body of the article:
First Incompleteness Theorem: "Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F." (Raatikainen 2015)
I reason that, for a statement, lacking a proof implies that it is not among the consequences of the axiomatization - not that it is among the consequences but that those consequences cannot be computed.

Edit: Although, it appears that the theorem was originally formulated for primitively recursively enumerable axiomatizations. (According to this stackexchange remark.) According to the summary of one article I found, it appears to admit extension to non-enumerable theories.

And further in the article:
The first incompleteness theorem shows that the Gödel sentence GF of an appropriate formal theory F is unprovable in F. Because, when interpreted as a statement about arithmetic, this unprovability is exactly what the sentence (indirectly) asserts, the Gödel sentence is, in fact, true (Smoryński 1977 p. 825; also see Franzén 2005 pp. 28–33). For this reason, the sentence GF is often said to be "true but unprovable." (Raatikainen 2015). However, since the Gödel sentence cannot itself formally specify its intended interpretation, the truth of the sentence GF may only be arrived at via a meta-analysis from outside the system.
This contradicts the summary by clearly stating that validity is not the subject of the theorem. And finally, the following paragraph shows that the Gödel sentence is merely satisfied in the standard model, not in every model:
Although the Gödel sentence of a consistent theory is true as a statement about the intended interpretation of arithmetic, the Gödel sentence will be false in some nonstandard models of arithmetic, as a consequence of Gödel's completeness theorem (Franzén 2005, p. 135).

Talking about the relationship between syntactic completeness and decidability, I think that there is some definitional ambiguity again. I think that it is sometimes meant to imply a procedure for evaluating (assigning a truth value to) all sentences of the language of the theory (which requires syntactic completeness), or a procedure that enumerates all statements satisfied by the standard model (which isn't actually semantic completeness, because all first-order theories are semantically complete, but rather completeness with respect to a model).
• 702
So, as you stated, if ZFC were complete it would be decidable. But since it is not, does my original question - whether it is decidable or not - still stand?

I suspect that ZFC is not decidable, but then again, it really depends on the link between completeness and decidability. If there exists a procedure to solve the proof problem, the proof problem is decidable. That means that the theorem is provable from the theory. In that sense, provability is a decidability problem, because a proof is a procedure.

However, since the Gödel sentence cannot itself formally specify its intended interpretation, the truth of the sentence GF may only be arrived at via a meta-analysis from outside the system.

This contradicts the summary by clearly stating that validity is not the subject of the theorem.

I think that it is the semantic completeness theorem that throws a spanner in the works: For a statement deemed true by the system, some proof must exist within the system. Therefore, the result would be contradictory, unless the truth of the Gödel sentence is evaluated outside the system. In my impression, the Gödel sentence is true in the metasystem but not provable in the system itself. One of the difficulties in this matter is that some reasoning takes place within the system, and some about the system, from outside the system.
• 78
I suspect that ZFC is not decidable, but then again, it really depends on the link between completeness and decidability. If there exists a procedure to solve the proof problem, the proof problem is decidable. That means that the theorem is provable from the theory. In that sense, provability is a decidability problem, because a proof is a procedure.
I see. Thinking about it, I imagine semidecidability will carry a lot of practical utility for applications like automated theorem proving, even though it doesn't guarantee termination. Decidability is probably more crucial to theoretical analysis.

I think that it is the semantic completeness theorem that throws a spanner in the works: For a statement deemed true by the system, some proof must exist within the system.
I actually don't know how the syntactic incompleteness was proven (through the same meta-logical argument as its semantic counterpart or not), but it seems that you shouldn't need a lot of semantics in order to demonstrate that some statement is independent. If the statement's derivation involves circularity (like the Gödel sentence), whether this circularity can be eliminated or not should be an entirely deductive property. But I might be wrong.
• 702
I actually don't know how the syntactic incompleteness was proven (through the same meta-logical argument as its semantic counterpart or not), but it seems that you shouldn't need a lot of semantics in order to demonstrate that some statement is independent. If the statement's derivation involves circularity (like the Gödel sentence), whether this circularity can be eliminated or not should be an entirely deductive property. But I might be wrong.

I am not familiar enough with Gödel's semantic completeness theorem. The explanation does not contain elaborate examples to illustrate the details of what they are talking about. It would be quite a job to inject examples in the right places.

For example:

But the ring Z of integers, which is not a field, is also a σf-structure in the same way. In fact, there is no requirement that any of the field axioms hold in a σf-structure.

A ring is "almost" a field -- it just lacks inverses (it is not closed under division) -- but that's the little I understand of why this ring is still a model (=σf-structure) of a field ... I hope that Wikipedia will manage to better elaborate on this subject.
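The quoted point can be made concrete: a signature only fixes symbols and their arities; a structure interprets them over some domain, with no obligation that the interpretations satisfy any field axiom. A toy rendering (the dictionary encoding is my own, purely illustrative):

```python
# The field signature: symbol -> arity.
sigma_f = {"+": 2, "*": 2, "-": 1, "inv": 1, "0": 0, "1": 0}

# The integers as a sigma_f-structure: 'inv' must be interpreted by *some*
# unary function on the domain -- here deliberately NOT a multiplicative
# inverse, which is allowed, since a structure need not satisfy the axioms.
Z = {
    "+":   lambda a, b: a + b,
    "*":   lambda a, b: a * b,
    "-":   lambda a: -a,
    "inv": lambda a: a,    # arbitrary choice; a field would need 1/a
    "0":   0,
    "1":   1,
}

Z["inv"](2) * 2   # 4, not 1: the field axiom inv(x)*x = 1 fails,
                  # yet Z is a perfectly good sigma_f-structure
```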
• 78
I am not familiar enough with Gödel's semantic completeness theorem. The explanation does not contain elaborate examples to illustrate the details of what they are talking about. It would be quite a job to inject examples in the right places.
I am not familiar either, considering that I didn't understand its significance before yesterday. But from what I understand, it essentially states that inference and semantics are equivalent in a first order theory. If the theory knows something, it is true in all models (i.e. true for all interpretations consistent with the axioms), and if something is true in all models, the theory knows it. I don't find this surprising, as it makes intuitive sense. What I find more surprising is that it doesn't hold for higher order logics, as elaborated here:
The completeness theorem is a central property of first-order logic that does not hold for all logics. Second-order logic, for example, does not have a completeness theorem for its standard semantics (but does have the completeness property for Henkin semantics), and the set of logically-valid formulas in second-order logic is not recursively enumerable. The same is true of all higher-order logics. It is possible to produce sound deductive systems for higher-order logics, but no such system can be complete.

A ring is "almost" a field -- it just lacks inverses (it is not closed under division) -- but that's the little I understand of why this ring is still a model (=σf-structure) of a field ... I hope that Wikipedia will manage to better elaborate on this subject.
Actually, to my understanding, the signature is not a model, but the "embodied" part of the alphabet for the language of the theory - the rest (variables, quantifiers, logical symbols) being unspecific or having no particular meaning. What the article states, if I read it correctly, is that the signature need not incorporate symbols which, taken as abstract mathematical objects, have semantics compatible with the models of the theory. This seems a rather vacuous statement, considering that these are just symbols.

Edit: Thinking about it again, I probably overtrivialized the statement the article makes. It talks about the structure, not just the signature, which includes the interpretation and the domain as well. Considering their examples, you are right. And then, indeed, it sounds a little nonsensical. But what they could mean is that an interpretation could assign 0 and 1 to their usual counterparts, and then addition, multiplication and inverses could map to relations that are subsets of Cartesian products of the integer set, but encode rationals using a pairing function. In other words, the article probably states that the function and relation symbols need not be mapped to their standard counterparts for the domain, but merely to functions and relations consistent with the theory.