• Possibility
    1.9k
    To point in the direction of the mop and say 'it is not the case that there is a Muppet in the mop cupboard' sounds like an example of the problem of counterfactual conditionals. People who are anxious about the metaphysical aspects of realism will argue that there are no negative facts and thus correspondence breaks down. This proposition about the mop cupboard doesn't seem to have any corresponding relation to objects and relations in the world. Or something like that.Tom Storm

    It is when we exclude negative facts from realism that we limit the perception of truth in which we operate. That’s fine, as long as we recognise this when we refer to truth. Counterfactual conditionals are only a problem if we fail to recognise this limited perception of truth.

    The proposition ‘it is not the case that there is a Muppet in the mop cupboard’ is made from a six-year-old's perception of truth, the limitations of which have been isolated from the proposition. A six-year-old would make a proposition in order to test conceptual knowledge, not to propose truth. A more accurate counterfactual conditional here (pointing in the direction of the mop) would be: ‘if it were not the case that there is a Muppet in the mop cupboard, then that would be a Muppet’. This clarifies the limited perception of truth in which the proposition operates, with correspondence intact.
  • Possibility
    1.9k
    The Problem Of The Criterion has, at its core, the belief that,

    1. To define we must have particular instances (to abstract the essence of that which is being defined)

    2. To identify particular instances we must have a definition

    The Problem Of The Criterion assumes that definitions and particular instances are caught in a vicious circle of the kind we've all encountered - the experience paradox - in which to get a job, we first need experience, but to get experience, we first need a job. Since neither can be acquired before the other, it's impossible to get both.

    For The Problem Of The Criterion to mean anything, the relationship between definitions and particular instances must be such that each is a precondition for the other, thus closing the circle and trapping us in it.

    However, upon analysis, this needn't be the case. We can define arbitrarily (methodism) as much as non-arbitrarily (particularism) - there's no hard and fast rule that these two make sense only in relation to each other, as The Problem Of The Criterion assumes. I can be a methodist in certain situations or a particularist in others; there's absolutely nothing wrong in either case.
    TheMadFool

    The way I see it, the Problem of the Criterion is not just about defining concepts or identifying instances, but about our accuracy in the relation between definition and identification. The problem is that knowledge is not an acquisition, but a process of refining this accuracy, which relies on identifying insufficient structures in both aspects of the process.

    To use your analogy, the process of refining the relation between getting a job and getting experience relies on each aspect addressing insufficiencies in the other. To solve the problem of circularity, it is necessary to acknowledge this overall insufficiency, and to simply start: either by seeking experience without getting a job (ie. volunteer work, internship, etc) or by seeking a job that requires no experience (entry-level position or unskilled labour).

    In terms of identification and definition, it is necessary to recognise the overall insufficiency in our knowledge, and either start with an arbitrary definition to which we can relate instances in one way or another, or by identifying instances that will inform a definition - knowing that the initial step is far from accurate, but begins a relational process towards accuracy.
  • simeonz
    257
    This makes sense to me. Much of what you have written is difficult for me to follow, but I get the sense that we’re roughly on the same page here...?Possibility
    This reminds me of a Blackadder response - "Yes.. And no."

    I’m pointing out a distinction between the linguistic definition of a concept - which is an essentialist and reductionist methodology of naming consolidated features - and an identification of that concept in how one interacts with the world - which is about recognising patterns in qualitative relational structures.Possibility
    I think that according to your above statement, the technical definition of a class does not correlate to immediate sense experience, nor to the conception from direct encounters between the subject and the object, nor to the recognition practices of objects in routine life. If that is the claim, I contend that technically exhaustive definitions are just elaborated contours of the same classes, but with a level of detail that differs, because that detail is necessary for professionals who operate with indirect observations of the object. Say, as a software engineer, I think of computers in a certain way, such that I could recognize features of their architecture in some unlabeled schematic. A schematic is not immediate sense experience, but my concept does not apply to just appearances but to logical organization, so the context in which the full extent of my definition will become meaningful is not the perceptual one. For crude recognition of devices by appearances in my daily routine, I match them to the idea using a rough distilled approximation from my concept, drawing on the superficial elements in it and removing the abstract aspects, which remain underutilized.

    If you are referring just to the process of identification, you are right that if I see an empty computer case, I will at first assume that it is the complete assembly and classify it as a computer. There is no ambiguity as to what a computer is in my mind, even in this scenario, but the evaluation of the particular object is based on insufficient information, and it is made with assumed risk. The unsuccessful application of weighted guesses to fill in the missing features turns into an error in judgement. So, this is fuzziness of the concept-matching process, stemming from the lack of awareness, even though the definition is inapplicable given the object's complete description.

    Another situation is that if I am given a primitive device with some very basic cheap electronics in it, I might question whether it is a computer. Here the fuzziness is not curable with more data about the features of the object, because it results from the borderline evaluation of the object by my classifier. Hence, I should recognize that classes are nuances that gradually transition into one another.

    A different case arises when there is disagreement over definitions. If I see a washing machine, I would speculate that it hosts a computer inside (in the sense of electronics having the capacity for universal computation, if nothing else), but an older person or a child might not be used to the idea of embedded electronics and would recognize the object as mechanical. That is, I will see computers in more places, because I have a wider definition. The disparity here is linguistic and conceptual, because the child or elderly person makes a crude first association based on appearances, and then the resulting identification is not as good a predictor of the quality of the object they perceive. We don't talk the same language, and our underlying concepts differ.

    In the latter case, my definition increases the anticipated range of tools supported by electronics, and my view on the subject of computing is a more inclusive classifier. The classification outcome predicts the structure and properties of the object, such as less friction and less noise. We ultimately classify the elements of the environment with the same goal in mind, discernment between distinct categories of objects and anticipation of their properties, but the boundaries depend on how much experience we have and how crudely we intend to group the objects.

    So, to summarize: I agree that sometimes the concept is indecisive due to edge cases, but sometimes the fuzziness is in its application due to incomplete information. This does not change the fact that the academic definition is usually the most clearly ascribed. There is also the issue of linguistic association with concepts. I think that people can develop notions and concepts independently of language and communication, just by observing the correlations between features in their environment, but there are variables there that can sway the process in multiple directions and affect the predictive value of the concept map.
  • creativesoul
    9.7k
    This makes sense to me. Much of what you have written is difficult for me to follow, but I get the sense that we’re roughly on the same page here...?
    — Possibility
    This reminds me of a Blackadder response - "Yes.. And no."
    simeonz

    :smile:

    Get used to it with Possibility.
  • Kaiser Basileus
    38
    1. Which propositions are true/knowledge? [Instances of truth/knowledge]

    2. How can we tell which propositions are true/knowledge? [Definition of truth/knowledge]

    Knowledge is justified belief. What evidence counts as sufficient justification depends first upon the desired intent. Truth is an individual perspective on reality (consensus experience). This understanding is necessary and sufficient for all related epistemological questions and problems.

    universal taxonomy - evidence by certainty
    0 ignorance (certainty that you don't know)
    1 found anecdote (assumed motive)
    2 adversarial anecdote (presumes inaccurate communication motive)
    3 collaborative anecdote (presumes accurate communication motive)
    4 experience of (possible illusion or delusion)
    5 ground truth (consensus Reality)
    6 occupational reality (verified pragmatism)
    7 professional consensus (context specific expertise, "best practice")
    8 science (rigorous replication)
    -=empirical probability / logical necessity=-
    9 math, logic, Spiritual Math (semantic, absolute)
    10 experience qua experience (you are definitely sensing this)
  • Possibility
    1.9k
    So, to summarize: I agree that sometimes the concept is indecisive due to edge cases, but sometimes the fuzziness is in its application due to incomplete information. This does not change the fact that the academic definition is usually the most clearly ascribed. There is also the issue of linguistic association with concepts. I think that people can develop notions and concepts independently of language and communication, just by observing the correlations between features in their environment, but there are variables there that can sway the process in multiple directions and affect the predictive value of the concept map.simeonz

    You seem to be arguing for definition of a concept as more important than identification of its instances, but this only reveals a subjective preference for certainty. There are variables that affect the predictive value of the concept map regardless of whether you start with a definition or identified instances. Language and communication is important to periodically consolidate and share the concept map across differentiated conceptual structures - but so that variables in the set of instances are acknowledged and integrated into the concept map.
  • Shawn
    10.8k
    Logical simples solve the issue. Just can't find 'em.
  • simeonz
    257
    You seem to be arguing for definition of a concept as more important than identification of its instances, but this only reveals a subjective preference for certainty. There are variables that affect the predictive value of the concept map regardless of whether you start with a definition or identified instances.Possibility
    That is true. I rather cockily answered "yes and no". I do partly agree with you. There are many layers to the phenomenon.

    I want to be clear that I don't think that a dog is defined conceptually by the anatomy of the dog, as if it were inherently necessary to define objects by their structure. I don't even think that a dog can be defined conceptually, exhaustively, from knowing all the dogs in the world. It is, rather, contrasted with all the cats (very roughly speaking). But eventually, there is some convergence, when the sample collection is so large that we can tell enough about the concept (in contrast to other neighboring concepts) that we don't need to continue its refinement. And that is when we arrive at some quasi-stable technical definition.

    There are many nuances here. Not all people have practical use for the technical definition, since their life's occupation does not demand it and they have no personal interest in it. But I was contending that those who do use the fully articulated concept will actually stay mentally committed to its full detail, even when they use it crudely in routine actions. Or at least for the most part. They could make intentional exceptions to accommodate conversations. They just won't involve the full extent of their knowledge at the moment. Further, people can disagree on concepts, because of the extrapolations that could be made from them or the expressive power that certain theoretical conceptions offer relative to others.

    I was originally proposing how the process of categorical conception takes place by direct interactions of children or individuals, without the passing of definitions second-hand, or from the overall anthropological point of view. I think it is compatible with your proposal. Let's assume that people inhabited a zero-dimensional universe and only experienced different quantities over time. Let's take the numbers 1 and 2. Since only two numbers exist, there is no need to classify them. If we experience 5, we could decide that our mental model is poor, and classify 1 and 2 as class A, and 5 as class B. This now becomes the vocabulary of our mental process, albeit with little difference to our predictive capability. (This case would be more interesting if we had multiple features to correlate.) If we further experience 3, we have two sensible choices that we could make. We could either decide that all numbers are in the same class, making our perspective of all experience non-discerning, or decide that 1, 2, and 3 are in one class, contrasted with the class of 5. The distinction is that if all numbers are in the same class, considering the lack of continuity, we could speculate that 4 exists. Thus, there is a predictive difference.
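    The grouping choice in this toy example can be sketched as one-dimensional clustering with a simple gap threshold. This is only an illustrative sketch of the idea; the `cluster_1d` helper and the threshold values are my own invention, not anything proposed in the thread:

```python
def cluster_1d(values, gap):
    """Group sorted 1-D values: a new cluster starts wherever the
    jump from the previous value exceeds the gap threshold."""
    values = sorted(values)
    clusters = [[values[0]]]
    for prev, cur in zip(values, values[1:]):
        if cur - prev > gap:
            clusters.append([cur])    # too far from the last value: new class
        else:
            clusters[-1].append(cur)  # close enough: same class
    return clusters

# Experiences 1, 2, 5: with jumps above 2 treated as breaks, 5 stands apart.
print(cluster_1d([1, 2, 5], gap=2))     # [[1, 2], [5]]

# After experiencing 3, the same rule bridges everything into one class -
# the choice that makes 4 an anticipated (interpolated) value.
print(cluster_1d([1, 2, 3, 5], gap=2))  # [[1, 2, 3, 5]]

# A stricter threshold keeps 5 distinct, so 4 remains unanticipated.
print(cluster_1d([1, 2, 3, 5], gap=1))  # [[1, 2, 3], [5]]
```

    Whether 3 extends class A or merges A and B comes down to the choice of threshold, which is exactly the predictive difference described above.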

    In reality, we are dealing with vast multi-dimensional data sets, but we are applying a similar process of grouping experience together, extrapolating the data within the groups, assigning objects to their most fitting group, and predicting their properties based on the anticipated features of the class space at their location.

    P.S.: I agree with your notion for the process of concept refinement, I think.
  • Possibility
    1.9k
    Not all people have practical use for the technical definition, since their life's occupation does not demand it and they have no personal interest in it. But I was contending that those who do use the fully articulated concept will actually stay mentally committed to its full detail, even when they use it crudely in routine actions. Or at least for the most part. They could make intentional exceptions to accommodate conversations. They just won't involve the full extent of their knowledge at the moment. Further, people can disagree on concepts, because of the extrapolations that could be made from them or the expressive power that certain theoretical conceptions offer relative to others.simeonz

    I think I see this. A fully articulated concept is rarely (if at all) stated in its full detail - definitions are constructed from a cascade of conceptual structures: technical terms each with their own technical definitions constructed from more technical terms, and so on. For the purpose of conversations (and to use a visual arts analogy), da Vinci might draw the Vitruvian Man or a stick figure - it depends on the details that need to be transferred, the amount of shared conceptual knowledge we can rely on between us, and how much attention and effort each can spare in the time available.

    I spend a great deal of time looking up and researching terms, concepts and ideas I come across in forum discussions here, because I’ve found that my own routine or common-language interpretations aren’t detailed enough to understand the application. I have that luxury here - I imagine I would struggle to keep up in a face-to-face discussion of philosophy, but I think I am gradually developing the conceptual structures to begin to hold my own.

    Disagreement on concepts here is often the result of both narrow and misaligned qualitative structures or patterns of instances and their extrapolations - posters here such as Tim Wood encourage proposing definitions, so that this variability can be addressed early in the discussion. It’s not always approached as a process of concept refinement, but can be quite effective when it is.

    I will address the rest of your post when I have more time...
  • simeonz
    257
    For the purpose of conversations (and to use a visual arts analogy), da Vinci might draw the Vitruvian Man or a stick figure - it depends on the details that need to be transferred, the amount of shared conceptual knowledge we can rely on between us, and how much attention and effort each can spare in the time available.Possibility
    Yes, the underlying concept doesn't change, but just its expression or application. Although not just in relation to communication, but also in its personal use. Concepts can be applied narrowly by the individual for recognizing objects by their superficial features, but then they are still entrenched in full detail in their mind. The concept is subject to change, as you described, because it is gradually refined by the individual and by society. The two, the popularly or professionally ratified one and the personal one, need not agree, and individuals may not always agree on their concepts. Not just superficially, by how they apply the concepts in a given context, but by how those concepts are explained in their mind. However, with enough experience, the collectively accepted technically precise definition is usually the best, because even if sparingly applied in professional context, it is the most detailed one and can be reduced to a distilled form, by virtue of its apparent consequences, for everyday use if necessary.

    The example I gave, with the zero-dimensional inhabitant, was a little bloated and dumb, but it aimed to illustrate that concepts correspond to partitionings of experience. This means that they are both not completely random, because they are anchored in experience, direct or indirect, and a little arbitrary too, because there are multiple ways to partition the same set. I may elaborate the example at a later time, if you deem it necessary.
  • Possibility
    1.9k
    The concept is subject to change, as you described, because it is gradually refined by the individual and by society. The two, the popularly or professionally ratified one and the personal one, need not agree, and individuals may not always agree on their concepts. Not just superficially, by how they apply the concepts in a given context, but by how those concepts are explained in their mind. However, with enough experience, the collectively accepted technically precise definition is usually the best, because even if sparingly applied in professional context, it is the most detailed one and can be reduced to a distilled form, by virtue of its apparent consequences, for everyday use if necessary.simeonz

    The best definition being the broadest and most inclusive in relation to instances. So long as we keep in mind that the technical definition is neither precise nor stable - only relatively so. Awareness of, connection to and collaboration with the qualitative variability in even the most precise definition is all part of this process of concept refinement.

    The example I gave, with the zero-dimensional inhabitant, was a little bloated and dumb, but it aimed to illustrate that concepts correspond to partitionings of experience. This means that they are both not completely random, because they are anchored in experience, direct or indirect, and a little arbitrary too, because there are multiple ways to partition the same set. I may elaborate the example at a later time, if you deem it necessary.simeonz

    I’m glad you added this. I have some issues with your example - not the least of which is its ‘zero-dimensional’ or quantitative description, which assumes invariability of perspective and ignores the temporal aspect. You did refer to multiple inhabitants, after all, as well as the experience of different quantities ‘over time’, suggesting a three-dimensional universe, not zero. It is the mental process of a particular perspective that begins with a set of quantities - but even without partitioning the set, qualitative relation exists between these experienced quantities to differentiate 1 from 2. A set of differentiated quantities is at least one-dimensional, in my book.
  • simeonz
    257
    I’m glad you added this. I have some issues with your example - not the least of which is its ‘zero-dimensional’ or quantitative description, which assumes invariability of perspective and ignores the temporal aspect.Possibility
    Actually, there are multiple kinds of dimensions here. The features that determine the instant of experience are indeed in one dimension. What I meant is that the universe of the denizen is trivial. The spatial aspect is zero-dimensional; the spatio-temporal aspect is one-dimensional. The quantities are the measurements (think electromagnetic field, photon frequencies/momenta) over this zero-dimensional (one-dimensional with the time axis included) domain. Multiple inhabitants are difficult to articulate, but such a defect from the simplification of the subject is to be expected. You can imagine complex communication would require more than a single point, but that breaks my intended simplicity.

    The idea was this - the child denizen is presented with number 1. The second experience, during puberty, is number 2. The third experience, during adolescence, is number 5. And the final experience, during adulthood, is number 3. The child denizen considers that 1 is the only possibility. Then, after puberty, it realizes that both 1 and 2 can happen. Depending on what faculties for reason we presume here, it might extrapolate, but let's assume only interpolation for the time being. The adolescent denizen encounters 5 and decides to group experiences into category A for 1 and 2 and category B for 5. This facilitates its thinking, but also means that it doesn't have strong anticipation for 3 and 4, because A and B are considered distinct. Then, as an adult, it encounters 3 and starts to contemplate whether 1, 2, 3, and 5 are the same variety of phenomenon, with 4 missing yet but anticipated in the future, or whether 1, 2, 3 are one group that inherits semantically from A (by extending it) and 5 remains distinct. This is a choice that changes the predictions it makes for the future. If two denizens were present in this world, they could contend on the issue.

    This resembles a problem called "cluster analysis". I proposed that this is how our development of new concepts takes place. We are trying to contrast some things we have encountered with others and to create boundaries to our interpolation. In reality, we are not measuring individual quanta. We are receiving multi-dimensional data that heavily aggregates measurements, we perform feature extraction/dimensionality reduction and then correlate multiple dimensions. This also allows us to predict missing features during observation, by exploiting our knowledge of the prior correlations.
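    That last point, predicting missing features by exploiting prior correlations within a group, can be sketched minimally as follows. The two-feature data, the cluster labels, and the `predict_second_feature` helper are all invented for illustration; this assumes clusters are already formed and uses a crude nearest-centroid assignment:

```python
# Toy feature prediction via cluster membership: objects have two
# correlated features; a new object observed on one feature alone is
# assigned to the nearest cluster, whose mean fills in the missing one.

observed = [(1.0, 10.0), (1.2, 11.0), (0.9, 9.5),   # cluster "small"
            (5.0, 50.0), (5.3, 52.0), (4.8, 49.0)]  # cluster "large"

clusters = {"small": observed[:3], "large": observed[3:]}

def mean(xs):
    return sum(xs) / len(xs)

def predict_second_feature(first_value):
    # Assign to the cluster whose mean first-feature is closest...
    label = min(clusters,
                key=lambda c: abs(mean([a for a, _ in clusters[c]]) - first_value))
    # ...and predict the missing feature as that cluster's mean.
    return label, mean([b for _, b in clusters[label]])

label, guess = predict_second_feature(1.1)
print(label, round(guess, 2))  # small 10.17
```

    The prediction is only as good as the partitioning: draw the cluster boundaries differently and the anticipated missing feature changes, which is the sense in which the taxonomy carries predictive weight.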
  • Possibility
    1.9k
    Ok, I think I can follow you here. The three-dimensional universe of the denizen is assumed trivial, and collapsed to zero. I want to make sure this is clear, so that we don’t invite idealist interpretations that descend into solipsism. I think this capacity (and indeed tendency) to collapse dimensions or to ignore, isolate and exclude information is an important part of the mental process in developing, qualifying and refining concepts.

    Because it isn’t necessarily just the time axis that renders the universe one-dimensional, but the qualitative difference between 1 and 2 as experienced and consolidated quantities. It is this qualitative relation that presents 2 as not-1, 5 as not-3, and 3 as closer to 2 than to 5. Our grasp of numerical value structure leads us to take this qualitative relation for granted.

    I realise that it seems like I’m being pedantic here. It’s important to me that we don’t lose sight of these quantities as qualitative relational structures in themselves. We can conceptualise but struggle to define more than three dimensions, and so we construct reductionist methodologies (including science, logic, morality) and complex societal structures (including culture, politics, religion, etc) as scaffolding that enables us to navigate, test and refine our conceptualisation of what currently appears (in my understanding) to be six dimensions of relational structure.

    How we define a concept relies heavily on these social and methodological structures to avoid prediction error as we interact across all six dimensions, but they are notoriously subjective, unstable and incomplete. When we keep in mind the limited three-dimensional scope of our concept definitions (like a map that fails to account for terrain or topology) and the subjective uncertainty of our scaffolding, then I think we gain a more accurate sense of our conceptual systems as heuristic in relation to reality.
  • Possibility
    1.9k
    In relation to the OP...

    From IEP: “So, the issue at the heart of the Problem of the Criterion is how to start our epistemological theorizing in the correct way, not how to discover a theory of the nature of truth.“

    The way I see it, the correct way to start our epistemological theorising is to acknowledge the contradiction at the heart of the Problem of the Criterion. We can never be certain which propositions are true, nor can we be certain of the accuracy in our methodologies to determine the truth of propositions. Epistemology in relation to truth is a process of incremental advancement towards this paradox - by interaction between identification of instances in experience and definition of the concept via reductionist methodologies.

    If an instance doesn’t refine either the definition or the methodology, then it isn’t contributing to knowledge. To focus our mental energies on calculating probability, for instance, only consolidates existing knowledge - it doesn’t advance our understanding, choosing to ignore the Problem and exclude qualitative variability instead of facing the inevitable uncertainty in our relation to truth. That’s fine, as long as we recognise this ignorance, isolation or exclusion of qualitative aspects of reality as part of the mental process. Because when we apply this knowledge as defined, our application must take into account the limitations of the methodology and resulting definition in relation to our capacity to accurately identify instances. In other words, we need to add the qualitative variability back in, or else we limit our practical understanding of reality - which is arguably more important. So, by the same token, if a revised definition or reductionist methodology doesn’t improve our experience of instances, thereby reducing prediction error, then it isn’t contributing to knowledge.
  • simeonz
    257

    Edit: Sorry for not replying, but I am in a sort of a flux. I apologize, but I expect that I may tarry awhile between replies even in the future.

    This is too vast a landscape to be dealt with properly in a forum format. I know this is sort-of a bail out from me, but really, it is a serious subject. I wouldn't be the right person to deal with it, because I don't have the proper qualification.

    The oversimplification I made was multi-fold. First, I didn't address hereditary and collective experience. It involved the ability to discern the quantities, a problem about which you inquired, and which I would have explained as genetically inclined. How genetics influence conceptualization, and the presence of motivation for natural selection that fosters basic awareness of endemic world features, need to be explained. Second, I reduced the feature space to a single dimension, an abstract integer, which avoided the question of making correlations and having to pick the most discerning descriptors, i.e. dimensionality reduction. I also compressed the object space to a single point, which dispensed with a myriad of issues, such as identifying objects in their environment, during stasis or in motion, anticipation of features obscured from view, assessment of orientation, and assessment of distance.

    The idea of this oversimplification was merely to illustrate how concepts correspond to classes in taxonomies of experience. And in particular, that there is no real circularity. There was ambiguity stemming from the lack of a unique ascription of classes to a given collection of previously observed instances. As in the case of 3, there is an inherent inability to decide whether it falls into the group of 1 and 2, or bridges 1 and 2 with 5. However, assigning 1 and 3 to one class, and 2 and 5 to a different class, would be solving the problem counter-productively. Therefore, the taxonomy isn't formed in an arbitrary personal fashion. It follows the objective of best discernment without excessive distinction.

    No matter what process actually attains plausible correspondence, what procedure is actually used to create the taxonomy, no matter the kind of features that are used to determine the relative disposition of new objects/samples to previous objects/samples and how the relative locations of each are judged, what I hoped to illustrate was that concepts are designed not so much according to their ability to describe the common structure of some collection of objects, but according to their ability to discriminate objects from each other in the bulk of our experience. This problem can be solved even statically, albeit with enormous computational expense.

    What I hoped to illustrate is that concepts can be both fluid and stable. New objects/impressions can appear in previously unpopulated locations of our experience, or unevenly saturate locations to the extent that new classes form from the division of old ones, or fill the gaps between old classes, creating continuity between them and merging them together. In that sense, the structure of our concept map is flexible. Hence, our extrapolations, our predictions, which depend on how we partition our experience into categories with symmetric properties, change in the process. Concepts can converge because experience, in general, accumulates and can itself converge. The concepts, in theory, should gradually reach some maximally informed model.

    Again, to me, all this corresponds to the "cluster analysis" and "dimensionality reduction" problems.

    You are correct that I did presuppose quantity discernment and distance measurement (or, in 1-D, difference computation). The denizen knows how to deal with so-called "affine spaces". I didn't want to go there. That opens an entirely new discussion.

    Just to scratch the surface with a few broad strokes here. We know we inherit a lot genetically, environmentally, culturally. Our perception system, for example, utilizes more than five senses that we manage to somehow correlate. The auditory and olfactory senses are probably the least detailed, being merely in stereo. But the visual system starts with about six million bright-illumination photoreceptor cells and many more low-illumination photoreceptor cells, unevenly distributed on the retina. Those are processed by a cascade of neural networks, eventually ending in the visual cortex and visual association cortex. In between, people merge the monochromatic information from the photoreceptors into color spectrum information, ascertain depth, increase the visual acuity of the image by superimposing visual input from many saccadic eye movements (sharp eye fidgeting), discern contours, detect objects in motion, etc. I am no expert here, but I want to emphasize that we have inherited a lot of mental structure in the form of hierarchical neural processing. Considering that the feature space of our raw senses is in the millions of bits, having perceptual structure as heritage plays a crucial role in our ability to further conceptualize our complex environment, by reinforcement, by trial and error.

    Another type of heritage is proposed by Noam Chomsky. He describes - and there is apparently evidence for this - that people are not merely linguistic by nature, but endowed with inclinations to easily develop linguistic articulations of specific categories of experience in the right environment. Not just basic perception-related concepts, but abstract tokens of thought. This may explain why we are so easily attuned to logic, quantities, social constructs of order, and pro-social behaviors, like ethical behaviors, affective empathy (i.e. love), etc. I am suggesting that we use classification to develop concepts from individual experience. This should happen inside the neural network of our brain, somewhere after our perception system and before decision making. I am only addressing part of the issue. I think that nature also genetically programs classifiers into the species' behavior, by incorporating certain awareness of experience categories into their innate responses. There is also the question of social Darwinism. Because natural selection applies to the collective, the individuals are not necessarily compelled to identical conceptualization. Some conceptual inclinations are conflicting, to keep the vitality of the community.