This is not true at all. It is generally proposed in metaphysics, and supported by evidence, that a whole is greater than the sum of its parts. There is a logical fallacy, the composition fallacy, which results from what you propose.
I was speaking to identity in that post. The phrase "parts and wholes" is misleading there. When we say an object has the trait of being a triangle, or that it instantiates the universal of a triangle, we aren't referring to any one of its angles, right?
The concepts of emergence and the composition fallacy don't apply to bundle theories of identity. A "trait" is not a stand-in for a part of an object. For example, traits aren't parts in the sense that a liver is a part of a human body or a retina is part of an eye.
A trait - that is, a trope (nominalism) or the instantiation of a universal (realism) - applies to the emergent whole of an object. Traits have to do so to serve their purpose in propositions. For example, the emergent triangularity of a triangle is a trait. The slopes of the lines that compose it are not traits; they are parts (they interact with traits only insofar as they affect the traits of the whole). The way I wrote that was misleading, but the context is the identity of indiscernibles.
Traits are what allow propositions like "the bus is red" or "the ball is round" to have truth values. The sum total of an object's traits is not the sum total of its parts. It is the sum of all the predicates that can be attached to it. So an object that is "complex" but which is composed of "simple" parts still has the trait of being complex.
So, to rephrase it better, the question is: "Is a thing defined by the sum of all the true propositions that can be made about it, or does it have an essential 'thisness' of being that is unique to it?"
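For concreteness, the bundle-theory horn of that question is the one usually tied to the identity of indiscernibles, which in its standard second-order formulation (the textbook version, not anyone's exact wording in this thread) reads:

\forall x\, \forall y\, [\, \forall F\, (Fx \leftrightarrow Fy) \rightarrow x = y \,]

The "thisness" answer denies that sharing every F settles identity; the haecceity is supposed to do the individuating work that the quantifier over properties can't.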
And this is nothing but nonsense. What could a "substratum of 'thisness'" possibly refer to?
Yes, that is the common rebuttal I mentioned. It sounds like absurd gobbledygook. Now, its supporters claim that all ontologies assert ontologically basic brute facts, and so this assertion is no different, but it seems mighty ad hoc to me. That this theory still has legs is more evidence of the problems its competitors face than of its explicit merits.
You attempt to make the description, or the model, into the thing itself. But then all the various problems with the description, or model, where the model has inadequacies, are seen as issues within the thing itself, rather than issues with the description.
This sort of "map versus territory" question-begging accusation is incredibly common on this forum. It's ironic because in the context it is normally delivered, re: mental models of real noumena versus the noumena in themselves, it is itself begging the question by assuming realism.
As a realist, I still take the objection seriously, but I'm not totally sure how it applies here.
This is not really true. Time is a constraint in thermodynamics, but thermodynamics is clearly not the ground for time, because time is an unknown feature. We cannot even adequately determine whether time is variable or constant. I think it's important to understand that the principles of thermodynamics are applicable to systems, and systems are human constructs. Attempts to apply thermodynamic principles to assumed natural systems are fraught with problems involving the definition of "system", along with attributes like "open", "closed", etc.
The "thermodynamic arrow of time" refers to entropy vis-à-vis the universe as a whole. Wouldn't this be a non-arbitrary system?
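Just so we're pointing at the same statement: the arrow I have in mind is the usual textbook reading of the second law, applied to the universe treated as the one maximal isolated system,

\Delta S_{\text{universe}} \geq 0

which is why it strikes me as a non-arbitrary choice of system rather than a purely human construct.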
I agree with the point on systems otherwise. I don't think I understand what "time is an unknown feature" means here. Is this like the "unknown features" of machine learning?
This is a mistaken notion which I commonly see on this forum. Definition really does not require difference. Definition is a form of description, and description is based in similarity; difference is not a requirement but a detriment, because it puts uncertainty into the comparison. So claiming that definition requires difference only reinforces my argument that this is proceeding in the wrong direction, putting emphasis on the uncertainty of difference rather than the certainty of sameness. A definition which is based solely in opposition (difference), the way negative is opposed to positive for example, would be completely inapplicable without qualification. But then the qualification is what is really defining the thing that the definition is being applied to.
Difference and similarity are two sides of the same coin. I start with difference only because Hegel did and that's where my thinking was going.
If you start with the idea of absolute, undifferentiated being, then difference is the key to definition. If you start with the idea of pure indefinite being, a chaotic pleroma of difference, then yes, similarity is the key principle.
Hegel used both. In the illustration from sense certainty, we face a chaotic avalanche of difference in sensations. The present is marching ever forward, so that any sensation connected to the "now" of the present is gone before it can be analyzed. This pure, unanalyzable difference is meaningless. The similarities between the specific moments of consciousness help give birth to the analyzable world, a world of schemas, categories, and traits. However, these universals (in the generic, not the realist sense) in turn shape our perception (something you see borne out in neuroscience). So the process of meaning is a circular process between specifics and universals, difference and similarity.
You cannot make definitions if all you have access to is absolute difference or absolute similarity. Similarity alone cannot make up a definition. As Saussure said, "a one-word language is impossible." If one term applies to everything equally, with no distinction, it carries no meaning. In the framework of Shannon entropy, this would be a channel carrying nothing but an infinite string of ones, or an infinite string of zeros. There is zero surprise in the message.
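To spell the Shannon point out (this is just the standard definition, nothing exotic): the entropy of a source is

H(X) = -\sum_{x} p(x) \log_2 p(x)

and a channel that only ever emits one symbol has p(x) = 1 for that symbol, so H(X) = -(1 \cdot \log_2 1) = 0 bits. Every message is certain in advance, so nothing is communicated.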
For instance, you can define green things for a child by pointing to green things because they also see all sorts of things that aren't green. If, in some sort of insane experiment, you implant a green filter in their eyes so that all things appear to them only in shades of green, they aren't going to have a good understanding of what green is. Green for them has become synonymous with light; it has lost definition due to lack of differentiation.
The interesting thing is that this doesn't just show up in thought experiments. Denying developing mammals access to certain types of stimuli (diagonal lines for instance) will profoundly retard their ability to discriminate between basic stimuli when they are removed from the controlled environment in adulthood.
Can you explain how these folks get around the issues mentioned above though? The ones I am familiar with in this list have extremely varied views on the subject.
Nietzsche's anti-platonist passages are spread out, and I'm not sure they represent a completed system, but he would appear to fall under the more austere versions of nominalism I talked about. Like I said, these avoid the problem of the identity of indiscernibles, but at the cost of potentially jettisoning truth values for propositions.
Rorty is a prime example of the linguistic theories I mentioned. The complaint here is again about propositions. Analytic philosophers don't want propositions to be just about the truth values of verbal statements. Plus, many non-analytic philosophers still buy into realism, at least at the level of mathematics (the Quine–Putnam indispensability argument, re: abstract mathematical entities having ontic status).
On a side note, I honestly find it puzzling that eliminativists tend to like Rorty. Sure, it helps their claims about how lost we are in assuming consciousness has the depth we "think" we experience, but it also makes their work just statements about linguistic statements.
I am familiar with Deleuze and to a lesser extent Heidegger on this subject. I have never seen how moving from ontological identity to ontological difference independent of a concept of identity fixes the problem of the identity of indiscernibles. It seems to me that it "solves" the problem by denying it exists.
However, it does so in a way that makes me suspicious of begging the question. Sure, difference being ontologically more primitive than identity gets you out of the jam mentioned above by allowing you to point to the numerical difference of identical objects as ontologically basic, but it's always been unclear to me how this doesn't make propositions about the traits of an object into mere brute facts. So in this sense, it's similar to the austere nominalism I was talking about before.
Now I think Putnam might have something very different to say here vis-à-vis the multiple realizability of mental states, and what this says about the traits of objects as experienced versus their ontological properties, but this doesn't really answer the proposition problem one way or the other. It does seem, though, like it might deliver a similar blow to propositions as the linguistic models (e.g., Rorty). Propositions' truth values are now about people's experiences, which I suppose is still a step up from being just about people's words or fictions, and indeed should be fine for an idealist.