Well, because he has to live with those memories. Even if we perceive him as a different person, he is still held accountable by his own psychology: his punishment is the resultant guilt and regret.

So will the man who is now discontented, who has changed over those 30 years, be the same person who exhibited those traits during the previous 30 years?
The interesting part is whether that 30-year transition will change the man entirely, such that perhaps he should not be held responsible for his past traits. If he is content with his inhumane actions and immoral traits, while the people who criticize him are not, is that ethically moral? Is that good?
↪aporiap Interesting. I suppose the etiquette that I would advocate is being honest. If someone expresses an offensive view, then the correct response is to refute it rather than try to censor it. Generally the gross stereotyping of which you speak is wrong and can be refuted. However artificial blindness is not the solution. Artificial blindness and bigotry feed into one another. A man notices a social phenomenon and proposes a typically false explanation; the academic says "this is a stereotype" or "this is a generalization," the man looks at it again and says that it is definitely going on, so he decides that the academic is full of crap and goes on with his bigoted explanation. The real way to deal with this is to realize that, if something exists at a rate greater than chance, then there must be a reason for it, although it is usually not the reason that you expect. So the generalizations and stereotypes should be used as grounds for further research to find an actual explanation. — Ilya B Shambat
I provided the alternatives to demonstrate that negative utilitarianism is itself just one of many theories, and that the antinatalist position depends on it. If bad does not necessarily equal suffering, then you cannot simply make the claim that we are obligated to prevent suffering. What makes us obligated here is the fact that suffering is considered bad. The implicit premises are that (1) we prevent something because it is bad and (2) suffering is bad. If suffering is not itself intrinsically bad, there's no obligation.

All of these schemas you mentioned are not needed if people were not born. These are after-the-fact positions. A non-existent entity doesn't need to manage passions or self-actualize if not born. To be born in order to do these things would be using someone for this agenda, which seems odd to me. It is like an inevitable journey forced on someone who never in fact had to be put on that journey.
Firstly, as I've said before, I think you're discounting that negative hedonic utilitarianism [the basis for the whole antinatalist position] is itself a cultural construct. You'd be committing a naturalistic fallacy if you think that just because suffering is uncomfortable it is outright bad, and thus that an unborn person is better off in that state because it prevents him from suffering.

Again, this doesn't make sense in light of the fact that no one has to exist to experience anything in the first place. This is all after the fact of already being procreated and then trying to find cultural values to buy into to make do. First the schema needs to be agreed to be right by the individual, and then it is carried forward. Of course, various individual personalities and temperaments may find these schemas not for them and switch to other ones. Or the person simply falls into modern default mode, cobbling together the various cultural environs and values immediately at hand (pragmatic hedonism, if you will, the modern "default mode" of most).
I can take the 'obligation to prevent suffering' to absurd lengths as well. Why do anything at all, knowing that moving from my comfortable bed now will inevitably lead to discomfort [suffering]? Why walk down 5th Avenue or drive a car when you are both putting yourself in a less relaxed state and putting yourself at risk of being hurt in an accident or hit by a meteor? Sure, these things can lead to pleasures, but that isn't necessary, and we are nevertheless obligated to proactively prevent suffering whenever possible, so in fact we really shouldn't even leave the house.

Also, this projected feeling of "missing out" for the not-yet-existing person can be taken to absurd lengths too. Taken to the logical extreme, we can say the billions and trillions of yet-to-be-born people are missing out. But that is silly. Even more absurd would be the claim that it is people's duty to those billions of non-existent people to keep having more people, to reduce those non-existent people's "pain" of not existing and missing out. Obviously that makes no sense.
I think my main problem with the argument is that bad/good ascriptions are not necessarily applicable to suffering or pleasure in themselves. Badness or goodness are separable from hedonic states. They should be defined in reference to some goal or [in the general human sense] with respect to whether something leads one closer to 'well-being' or away from it. That makes intuitive sense from the utilitarian position [the good is a goal toward which we reach; things are good if they result in the good], even in the case of a hedonic utilitarianism [which I assume is Benatar's and your position] where what's good is anything that minimizes suffering [your goal]. But that's just one utilitarian theory. Benatar's argument would fail if you take anything else as 'the good', which many people do [Spinoza's good is attaining freedom by managing passions; Maslow's self-actualization; societal stability; etc.]. And even from the hedonic position, I simply disagree with his contention that there's an asymmetry. I actually think many people do think the lack of an ability to experience pleasure [hell, even to experience at all] is a wrong - it's what motivates my friend to get on my ass about not putting myself outside my comfort zone - because apparently I'm missing out. He [and other friends] feel obligated to push and challenge me; I'm sure you've had friends do the same. They are clearly operating under a utilitarian assumption - that I'm not experiencing as much pleasure as I could because I'm limiting myself... a potential human would be limited in just the same way. Would you not say they intuitively feel missing out is a wrong in itself? If so, then how is intuition alone enough to justify the asymmetry?

His argument takes the negative utilitarian idea extremely seriously. That is to say, harm is what matters, not pleasure. To restate this in a normative structure: potential parents are not obligated to bring someone who experiences joy/pleasure/positive value into the world. However, potential parents are obligated to prevent inevitable harms from occurring. One of his arguments comes from intuition. We don't usually feel pangs of compassionate sadness for the aliens not born to experience pleasure on a faraway barren planet. We would most likely feel compassionate sadness, on the other hand, if we learned that aliens on a faraway planet were born and were suffering. Suffering seems to matter more than bringing about pleasure in the realm of ethical decision-making. When prevention of all suffering is guaranteed and no actual person loses out on pleasure, this seems a win/win scenario.
Well, I think you mean uselessly or needlessly suffer here. I do not think people would agree with the bold if that suffering resulted in a net positive. If you restrict it to needless suffering, then you would not get to an antinatalist position unless you're in a situation where you can guarantee your child will uselessly suffer [you're pregnant in a concentration camp with no foreseeable chance of escape].

The main point is that in the procreational decision there is an asymmetry, given the absence of an actual person, between the absence of suffering and the absence of pleasure. It is always good that someone did not suffer, even if there is no actual person around to know this or enjoy the not-suffering. It is not bad (or good) if someone does not experience pleasure, unless there was an actual person around to be deprived.
The way you're framing it makes it sound wrong. Nobody gives birth in order to force someone to experience adversity; this is different from the [inevitable] fact that they will face adversity. And having the knowledge that your child will face adversity should be placed on equal value-ground with having the knowledge that your child, in existing, will experience pleasure. Else there's a double standard.

Also, just in general, forcing someone else into existence to experience some form of adversity to get stronger is still wrong. It's like forcing someone into an obstacle course they did not ask for and can never leave without killing themselves. Well, I guess it's okay to stay and try to do one's best, but it was not necessarily good to impose that obstacle course in the first place. No one needs to do anything prior to birth, given that, as you pointed out, there is no actual person before birth who needed to go through life in the first place, good, bad, or ugly. By not having the person, it is no harm, no foul.
What makes you think it is not necessary to go through? I mean, fundamentally, the sort of satisfaction and enjoyment you get from enduring a struggle is made what it is by the suffering. If you were given a Nobel Prize for doing nothing at all, you would be missing out on something that Einstein wouldn't have - the satisfaction becomes not just enhanced but partly made up of feelings of self-validation [i.e., that you really were able to do it], self-satisfaction, and accomplishment. And these feelings don't simply get forgotten; they're embedded in the entire experience, which is impressed in memory and accessible in mind.

That's one of the reasons I'm an antinatalist, though. Why put someone through the struggle (even for some sort of enlightenment) if they didn't have to go through it in the first place?
Well, conferring new value via re-purposing is something different from intrinsic purpose/teleology. Are you implying here that the ends of things [e.g., the end of an enzyme is to catalyze a reaction, the end of a seed is to become a plant] are human-designated?

I don't think that the idea that agents act for ends requires that they act for only one end.
Also, I think part of being a free agent is our ability to confer new value by re-purposing objects and capabilities. It is part of what Aquinas calls our participation in Divine Providence by reason. That is why I object to a narrow natural law ethics that does not allow for the legitimate creation of new ends.
I don't understand this, since we are speaking about objects here and not people. I also think, if anything, a teleological framework would necessarily be limiting compared to a teleologically blank humanity, since it rigidly identifies some set of ends as natural to an object/person. Humans wouldn't have the freedom not to self-realize if their nature were to self-realize, for example.

Of course. That is one reason free will is possible. There are multiple paths to human self-realization.
I'm unsure what free will has to do with teleology. Secondly, this is a human-specific thing; free will doesn't have anything to do with physical systems, which cannot choose actions because they lack brains.

This has to do with physical determinism vs. intentional freedom. If no free agent is involved, physical systems have only a single immanent line of action and so act deterministically. If there are agents able to conceive alternative lines of action, then multiple lines of action are immanent in the agents, and so we need not have deterministic time development.
Well, my point in that excerpt was just to highlight that ends are not intrinsic to objects alone. A gene, for example, can NOT give rise to a protein all by itself, despite the function [or end] of a gene being to give rise to a protein. It's the gene plus the cellular machinery that gives rise to a protein.

Are you thinking that the existence of ends entails determinism? I don't.
I read through the OP again just to clarify why this is an issue [finding some way of characterizing everything], and it seems like the reason you focus on monism here is essentially the plainly clear plurality we experience. But I also think it's plainly clear that commonalities underpin pluralities, and these form the basis for the search for unifying principles and substances.

My critique - but it's too shoddy to be dignified by that title - is a 'critique' of any philosophical operation that tries to find some way of characterizing 'everything.' I think that any feature of the world is capable of having an ontological status applied to it.
I think you're right - and that anyone would agree that both were composed of legos. Both the bridge and the man are decomposable, in the sense you mentioned. But what is the face-value distinction - the 'looking different' - composed of? And is it decomposable? What is the 'ephemeral'?
I don't think their different arrangement should matter that much. Spatial and bonding relations, which serve as the basis for difference between objects, are not fundamental or substantial; they are just ephemeral states. E.g., you have a bunch of Lego blocks and build a bridge and a man out of them. Sure, they look different - on face value - and that's because of how you've arranged the blocks, but I don't think anyone would say the man or the bridge are their own separate substances; no, the substance is the thing which is invariant and composes them.

Here's the philosophical 'gotcha'. But I think it's legitimate.
Trees and bananas are, obviously, composed of physical particles, differently. No 'face-value' differentiation required. So how do you decompose the difference between the face-value difference of trees and bananas, and trees and bananas with no face-value difference?
You seem savvy, so I'm sure you anticipate this kind of thing. Nevertheless - how?
I am unsure how it's just a conceptual operation. To say reality is composed of one substance means something. You assert that all objects are decomposable or analyzable into fundamental units which, at a certain level, are indistinguishable and interchangeable with one another. Take a carbon atom from Joe and a carbon atom from the tree outside your window, and, if you look at them, you will find no fundamental difference, no individual footprint. Take the electron from carbon atom 1 and you can replace it with the electron from carbon atom 2.

I agree with this. The 'fact of the matter' is surely independent of conceptualization. But to determine whether our metaphysical ideas correspond with reality (if such a thing is possible), we have to figure out what we're saying. What does 'reality is composed of one substance' mean?
What I'm suggesting is that, if you break down the concept, you see that it's not about the world at all. It's a conceptual operation that has overstepped its bounds. My interest in this topic corresponds exactly to my feeling that 'the fact of the matter' shouldn't depend on conceptualization.
(We could also say the fact of the matter about whether reality is large shouldn't depend on conceptualization. That's true, I think. But it's not really clear what it would mean. It seems to be mixing something up.)
Fundamental means not decomposable. A quark or boson; more fundamentally, the properties which define 'quark' or 'boson'. These properties [charge, mass, etc.] would be fundamental.

What does 'fundamental' mean?
Is face-value difference fundamental?
Why or why not?
I jumped the gun by saying they are empirically supported. But as you can see, I didn't conjure them up! The more accurate thing to say is that there are neurons in higher-level brain regions which fire selectively to seemingly abstract stimuli. Whether that indicates they fire in response to a given 'concept', or in response to a given feature shared between all those stimuli [e.g., the presence of 'almond-shaped' eyes], or some other feature coincidentally related to the 'category' of the stimuli presented, is not known.

No, they are hypothetical. Your Wikipedia reference says "The grandmother cell, sometimes called the "Jennifer Aniston neuron," is a hypothetical neuron that represents a complex but specific concept or object."
The support cited in the article is behavioral (which is to say physical), with no intentional component. I am happy to agree that behavioral events require the firing of specific neural complexes. The problem is, a concept is not a behavior, but the awareness of well-defined notes of intelligibility. The article offers no evidence of awareness of the requisite sort.
So, something like Aristotelian realism about universals? Well, that would make them more than a mere insignificant mental abstraction; it's a real thing in the world on your take, albeit inextricably linked to the particular. I'm not familiar with terms like 'notes of comprehension' or 'essential notes'. You say that logical distinction is predicated on the fact that intentional objects like concepts differ from materiality not ontologically but by virtue of not sharing these notes of comprehension. Can you unpack this term?

It is Moderate Realism, which sees universal concepts grounded in the objective character of their actual and potential instances rather than in Platonic Ideas or Neoplatonic Exemplars. Nominalism and conceptualism see universals as categories arbitrarily imposed by individual fiat or social convention.
I mentioned in the post that it poses a problem for programs which require continual looping or continual sampling. In this instance the program would cease being an atmospheric sampler if it lost the capability of iteratively looping, because it would then lose the capability to sample [i.e., it would cease being a sampler]. As soon as the instruction is removed, it ceases being a sampler, and it would suddenly become a sampler again [because it once more has the capacity to sample] once the instruction is re-introduced. Even though it runs through the entire program in the thought experiment, during the period when the instruction is removed the program is in a state where it no longer has the looping/iterative-sampling capacity; hence it is not a sampler during that period.

No. Notice that we run all the original instructions. Any program that simply runs an algorithm runs it completely. So, your 'atmospheric sampler' program does everything needed to complete its computation.
What do you mean they solve mathematical problems only? There are reinforcement learning algorithms out now which can learn your buying and internet-surfing habits and suggest adverts based on those preferences. There are learning algorithms which - from scratch, without hard-coded instruction - can defeat players at high-level strategy games, without using mathematical algorithms.

The problem is, we have no reason to assume that the generation of consciousness is algorithmic. Algorithms solve mathematical problems -- ones that can be presented by measured values or numerically encoded relations. We have no such representation of consciousness. Also, data processing operates on representations of reality; it does not operate on the reality represented. So, even if we had a representation of consciousness, we would not have consciousness.
These choices are not exhaustive. Take an algorithm which plays movies, for instance. Any one iteration of the loop outputs one frame of the movie. The movie, here, is made by viewing the frames in sequential order. It's okay for some of the frames to be skipped, because the viewer can infer the scene from the adjacent frames. In this instance the program is a movie player not because of the mere presence of the instructions, nor because of the output of one or another frame [be it the middle frame or the last frame]. Nor could it result from only some of the instructions running; it requires them all to run properly for at least most [a somewhat arbitrary, viewer-dependent number] of the iterations, so that enough frames are output for the viewer to see some semblance of a movie. So it is not the output of one loop, nor the output of some pre-specified number of sequential iterations, that results in the program being a movie player. Instead it is a combination of a working program and some number of semi-arbitrary, not-necessarily-sequential outputs which result in the program being a movie player. This is not even a far-out example; it's easy to imagine a simple, early American projector which operates by taking a film-strip. Perhaps sections of the film-strip are damaged, which leads to inadequate projection of those frames. Would you say this projector is not a movie player if you took out one of its parts before it reached the step where it's needed, and that it then impossibly becomes a movie player once the part is re-introduced right before it is needed?

In the computational theory of mind, consciousness is supposed to be an emergent phenomenon resulting from sufficiently complex data processing of the right sort. This emergence could be a result of actually running the program, or it could be the result of the mere presence of the code. If it is a result of running the program, it can't be the result of running only a part of the program, for if the part we ran caused consciousness, then it would be a shorter program, contradicting our assumption. So, consciousness can only occur once the program has completed -- but then it is not running, which means that an inoperative program causes consciousness.
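To make the movie-player example above concrete, here is a minimal sketch (hypothetical Python of my own, not from the thread): a playback loop that skips a few damaged frames yet still counts as playing the movie, since most iterations output a frame.

```python
# Hypothetical sketch of the movie-player example: a playback loop that
# tolerates some damaged frames and still presents "a movie".
frames = [f"frame_{i:03d}" for i in range(100)]
damaged = {17, 18, 54}  # assume a few frames are unreadable

def play(frames, damaged):
    shown = 0
    for i, frame in enumerate(frames):
        if i in damaged:
            continue  # skipped: the viewer infers the scene from neighbors
        shown += 1    # stand-in for actually rendering `frame`
    return shown / len(frames)

# Most iterations still output a frame, so the run counts as playing a movie.
print(f"{play(frames, damaged):.0%} of frames shown")
```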
I don't think the multiple-realization argument holds here. It could just be something like a case of convergent evolution, where you have different configurations independently giving rise to the same phenomenon - in this case consciousness. E.g., a cathode-ray-tube TV vs. a digital TV vs. some other TV operate under different mechanisms and yet result in the same output phenomenon - an image on a screen.

We are left with the far less likely scenario in which the mere presence of the code, running or not, causes consciousness. First, the presence of inoperative code is not data processing, but the specification of data processing. Second, because the code can be embodied in any number of ways, the means by which it effects consciousness cannot be physical. But, if it can't be physical, and it's not data processing, what is the supposed cause?
I am not in the field of computer science, but just from this site I can see there are at least three different kinds of abstract computational models. Is it true that physical properties of the machine are necessary for all the other models described? Even if consciousness required certain physical features of hardware, why would that matter for the argument, since your ultimate goal is not to argue for the necessity of certain physical properties for consciousness but instead (1) for consciousness as being fundamentally intentional and (2) for intentionality as being fundamentally distinct from [albeit co-present with] materiality? I actually think my personal view is not that different from yours, but I don't think of intentionality as so distinct as to not be realized by [or be a fundamental property of] the activity of the physical substrate. My view is essentially that of Searle, except I don't think consciousness is limited to biological systems.

No, not at all. It only depends on the theorem that all finite state machines can be represented by Turing machines. If we are dealing with data processing per se, the Turing model is an adequate representation. If we need more than the Turing machine model, we are not dealing with data processing alone, but with some physical property of the machine.
I agree that the brain uses parallel processing, and might not be representable as a finite state machine. Since it is continually "rewiring" itself, its number of states may change over time, and since its processing is not digital, its states may be more continuous than discrete. So, I am not arguing that the brain is a finite state machine. I am arguing against those who so model it in the computational theory of mind.
I don't understand why a neuron not being conscious but a collection of neurons being conscious automatically leads to the hard problem. Searle provides a clear, intuitive solution here, in which consciousness is an emergent property of a physical system in the same way viscosity or surface tension are emergent from lower-level interactions: it's the interactions [electrostatic attraction/repulsion] which summatively result in an emergent phenomenon [surface tension]. In this case it's the relations between the parts which result in the phenomenon, and these cannot be reduced to simply the parts. I'd imagine there's some way you can account for consciousness by the interactions of the component neurons in the system.

This assumes facts not in evidence. David Chalmers calls this the "Hard Problem" because not only do we have no model in which a conglomerate of neurons operates to produce consciousness, but we have made no progress toward such a model. Daniel Dennett argues at length in Consciousness Explained that no naturalistic model of consciousness is possible.
Well, the retinal state is encoded by a different set of cells than the intentional state of 'seeing the cat' - the latter would be encoded by neurons within a higher-level layer of cells [i.e., cells which receive iteratively processed input from lower-level cells], whereas the raw visual information is encoded in the retinal cells and the immediate downstream area of early visual cortex. You could have two different 'intentional states' encoded by different layers of the brain or different sets of interacting cells. The brain processes in parallel and sequentially.

It is also clear that a single physical state can be the basis for more than one intentional state at the same time. For example, the same neural representation encodes both my seeing the cat and the cat modifying my retinal state.
Okay, but you seem to imply in some statements that the intentional is not determined by or realized by the activity of the brain. I think this is the only difference we have. I would say an intentional state can be understood as a phenomenon that is caused by / emerges from a certain kind of activity pattern in the brain.

"Dichotomy" implies a clean cut, an either-or. I am not doing that. I see the mind, and the psychology that describes it, as involving two interacting subsystems: a neurophysical data processing subsystem (the brain) and an intentional subsystem which is informed by, and exerts a degree of control over, it (intellect and will). Both subsystems are fully natural.
There is, however, a polarity between objects and the subjects that are aware of them.
I'm not entirely familiar with the Kantian thesis here, but I think the fact that our physical models [and the entities within them] change with updated evidence, and the fact that fundamental objects seem to hold contradictory properties [wave-particle duality], imply that theoretical entities like the 'atom' etc. are constructs. Of course the measurables are real, and so are their relations, which are characterized in equations; but the actual entities may be merely theoretical.

Please rethink this. Kant was bullheaded in his opposition to Hume's thesis that there is no intrinsic necessity to time-ordered causality. As a result he sent philosophy off on a tangent from which it has yet to fully recover.
The object being known by the subject is identically the subject knowing the object. As a result of this identity there is no room for any "epistemic gap." Phenomena are not separate from noumena. They are the means by which noumena reveal themselves to us.
We have access to reality. If we did not, nothing could affect us. It is just that our access is limited. All human knowledge consists in projections (dimensionally diminished mappings) of reality. We know that the object can do what it is doing to us. We do not know all the other things it can do.
We observe everything by its effects. It is just that some observations are more mediated than others.
I was trying to say that introspection is not the only way to get knowledge of conscious experience. I'm saying it will be possible [one day] to scan someone's brain, decode some of their mental contents, and figure out what they are feeling or thinking.

This is very confused. People have learned about themselves by experiencing their own subjectivity from time immemorial. How do we know we are conscious? Surely not by observations of our physical effects. Rather, we know our subjective powers because we experience ourselves knowing, willing, hoping, believing, and so on.
They're empirically supported; I didn't conjure these things up: https://en.m.wikipedia.org/wiki/Grandmother_cell . It's a set of spatially distributed, sparsely firing neurons which activate when a particular category of object - faces, hands, etc. - is presented, regardless of the form of perception (whether it's the name 'face' that is heard, an image of a face seen, or a face that is felt).

A key position in the Age of Reason was the rejection of "occult" properties, but here you are positing "concept cells" as a cover for abject ignorance of how any purely material structure can explain the referential nature of intentional realities. Where are these concept cells? How do they work? They are as irrational and ungrounded as the assumption of homunculi.
I think my issue stems from not being able to separate 'ontological independence' from logical orthogonality. I mean, to assert that concepts and intentions exist and are distinct from their material instances, and yet then to say these things are somehow still of the same ontological type [i.e., physical] as physical objects, seems difficult to reconcile [what makes them physical if they're not composed of or caused by physical material?]. It just seems like an unsubstantiated assertion that they are ontologically the same.

That is not what I explained that I mean by concepts being orthogonal. I explicitly said, "... logically orthogonal. That is to say, that, though they co-occur and interact, they do not share essential, defining notes." Having non-overlapping sets of defining notes makes concepts orthogonal -- not the consideration of interactions in their instances, which is a contingent matter to be resolved by reflecting on experience.
Intentional states inform physical states, but I mentioned before [and I think this is important] that this is always by virtue of a physical-material mechanism. There is an activity pattern in higher-level areas of the brain which trickles down, via some series of physical communication signals, into a pattern of behavior. The 'seeming' ontological jump from intentional state [not physical] to physical change in muscle activity is what I argue never happens, because there must ultimately be some physical nature to that intentional state in order for it to lead to a physical change.

Concepts are abstractions and do not "interact." All that concepts do (their whole being) is refer to their actual and potential instances. Still, it is clear to all but the most obdurate ideologues that intentionality can inform material states. Whenever we voice a concept, when we speak of our intentions, our speech acts are informed by intentional states. Conversely, in our awareness of sensory contents, material states inform the resulting intentional states. So, the fact that intentional and material abstractions are orthogonal does not prevent material and intentional states from interacting.
Again, I can't think of how this could happen without a physical mechanism. And in fact it is currently made sense of in terms of physical mechanisms [albeit coarse-grained and provisional at present] - as a hypothetical mechanism: some web of 'concept cells' [higher-level cells in a feedforward neural circuit that invariantly fire in response to a very specific stimulus or object class] is activated in conjunction with reward circuitry, and a motor-command sequence is initiated.

This misses the fact that intentional states do inform material states. That we are writing about and discussing intentionality shows that intentional states can modify physical objects (texts, pressure waves, etc.).
Right, but all of this goal-directed decision-making is ultimately mediated by physical processes happening in the brain. It also doesn't need to be determinate to be mediated by physical processes.

If I am committed, I will find other means. I planned on a certain route, encoded in my initial state, but as I turn the corner, I find my way blocked by construction. I find an alternate route to effect my intended end. In all of this, the explanatory invariant (which can be revealed by controlled experiments) is not my initial physical state, but my intended final state. Clearly, intentional states can produce physical events.
Okay, that makes sense. They certainly seem spatially dimensionless -- feelings and sentiments from a first-person perspective, for example, seem to be present without any spatial location. I don't know, biophysically, how these types of things are encoded - in a distributed, non-localized fashion, or in a temporal pattern of activity that has no spatial dimension, etc. - so I couldn't say they are one or the other, but I guess I'd say they could be spatially decomposable.

To say that intentions have "no parts outside of parts" does not mean that they are simple (unanalyzable). It means that they do not have one part here and another part there (outside of "here"). My intention to go to the store is analyzable, say, into a commitment and a target of commitment (what it is about, viz. arriving at the store). But my commitment and the specification of my commitment are not in different places, and so are not parts outside of other parts.
How do you define 'biophysical support'? What in addition to that support would you say is needed for a full explanation?

Of course my intention to go to the store has biophysical support. My claim is that its biophysical support alone is inadequate to fully explain it.
The contexts are different, but again, both [the invariance of the goal and the ball's deterministic behavior] are explainable by physical processes - some neurons are realizing a [physically instantiated] goal, which is influencing, via [probabilistic] physical interactions, some other set of neurons, which are informing behavior via other [probabilistic] physical interactions. The ball is a simple physical system which is directly being impacted by a relatively deterministic process.

First, as explained in the scenario above, the invariance of the intended end in the face of physical obstacles shows that this is not a case covered by the usual paradigm of physical explanation -- one in which an initial state evolves deterministically under the laws of nature. Unlike a cannonball, I do not stop when I encounter an obstacle. I find, or at least search for, other means. What remains constant is not the sum of my potential and kinetic energy, but my intended end.
I am making broad metaphysical assumptions of materialism and emergentism, which implies I take things like 'valence' and 'concepts' to be materially realized in physical systems. My defense of materialism is that there is simply no evidence for any other substance in reality, and that everything - so far - that has seemed to be non-physical or to have no physical basis has been shown to be mediated by physical processes. My defense of emergentism is something like this.

Second, you are assuming, without making a case, that many of the factors you mention are purely biophysical. How is the "valence component," as subjective value, grounded in biophysics - especially when biophysics is solely concerned with objective phenomena? Again, to have a "cognitive attitude" (as opposed to a neural data representation) requires that we actualize the intelligibility latent in the representation. What biophysical process is capable of making what was merely intelligible actually known -- especially given that knowledge is a subject-object relation and biophysics has no <subject> concept in its conceptual space?
Say you want a pizza. 'Pizza' can be thought of as a circuit interaction between 'concept cells' [which, in turn, have activated the relevant visual, tactile, and olfactory circuits that usually come online whenever you come into sensory contact with pizza], particular reward-pathway cells, and cells which encode sets of motor commands. 'Wanting' could be conceived as signals from the motor-command center which bombard decision-making circuits and compete against other motor commands for control over behavior. All of these have an associated experience, which can itself be thought of as a fundamental phenomenon caused by the circuit interaction [e.g., pizza -- whatever is conjured when asked to imagine the concept: smells, visual content, taste; wanting -- 'feeling pulled' toward interacting with the object].

Third, how is a circuit interaction, which is fully specified by the circuit's configuration and dynamics, "about" anything? Since it is not, it cannot be the explanation of an intentional state.
Thanks! Can't believe it's already over.

You are welcome. I thank you for your thoughtful consideration and wish you and yours a joyful Christmas.
Okay, the last sentence is what really clears it up. This sounds like nominalism; is that correct?

The basis for logically distinct concepts need not be separate, or ontologically independent, objects. In looking at a ball, I might abstract <sphere> and <rubber> concepts without spheres existing separately from matter, or matter existing formlessly. Thus, by ontological separation, I mean existing independently or apart. By logical distinction, I mean having different notes of comprehension.
Further, while concepts may have, as their foundation in reality, the instances that can properly evoke them, they are not those instances. The concept <rubber> is not made of the sap of Hevea brasiliensis. Natural rubber typically is. So, generally, in contrasting logical and ontological, I am contrasting concepts with their foundation in reality.
Finally, concepts are not things, but reified activities. <Rubber> is just a subject thinking of rubber.
There are some things I take issue with in the missing-instruction argument:

It is low resolution. My purpose was to convince the reader that we need more than mere "data processing" to explain awareness -- to open minds to the search for further factors.
In my book, I offer the following:
The missing-instruction argument shows that software cannot make a Turing machine conscious. If software-based consciousness is possible, there exist one or more programs complex enough to generate consciousness. Let's take one with the fewest possible instructions, and remove an instruction that will not be used for, say, ten steps. Then the Turing machine will run the same as if the removed instruction were there for the first nine steps.
Start the machine and let it run five steps. Since the program is below minimum complexity, it is not conscious. Then stop the machine, put back the missing instruction, and let it continue. Even though it has not executed the instruction we replaced, the Turing machine is conscious for steps 6-9, because now it is complex enough. So, even though nothing the Turing machine actually does is any different with or without the instruction we removed and replaced, its mere presence makes the machine conscious.
This violates all ideas of causality. How can something that does nothing create consciousness by its mere presence? Not by any natural means – especially since its presence has no defined physical incarnation. The instruction could be on a disk, a punch card, or in semiconductor memory. So, the instruction can’t cause consciousness by a specific physical mechanism. Its presence has to have an immaterial efficacy independent of its physical encoding.
One counterargument might be that the whole program needs to run before there is consciousness. That idea fails. Consciousness is continuous. What kind of consciousness is unaware the entire time contents are being processed, but becomes aware when processing has terminated? None.
Perhaps the program has a loop that has to be run through a certain number of times for consciousness to occur. If that is the case, take the same program and load it with one change – set the machine to the state it will have after the requisite number of iterations. Now we need not run through the loop to get to the conscious state. We then remove an instruction further into the loop just as we did in the original example. Once again, the presence of an inoperative instruction creates consciousness.
— Dennis F. Polis -- God, Science and Mind, p. 196
Thus, we can eliminate data processing, no matter how complex, as a cause of consciousness.
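For concreteness, here is a minimal sketch of the thought experiment's setup (hypothetical Python of my own, a toy interpreter rather than anything from the book): an instruction that is not scheduled until the tenth step can be removed and restored without changing anything the machine does during steps 1-9.

```python
# Toy step-wise interpreter for the missing-instruction thought experiment.
# Each scheduled "instruction" updates the state; instruction 9 is not
# reached until the 10th step, so removing it cannot affect steps 1-9.

def run(program, steps):
    state, trace = 0, []
    for step in range(steps):
        op = program.get(step)  # instruction scheduled for this step, if any
        if op is not None:
            state = op(state)
        trace.append(state)
    return trace

full_program = {i: (lambda s: s + 1) for i in range(10)}
missing_one = dict(full_program)
del missing_one[9]  # remove the instruction used only at the 10th step

# The first nine steps are identical with or without instruction 9:
assert run(full_program, 9) == run(missing_one, 9)
print("steps 1-9 identical:", run(full_program, 9))
```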
Why make the dichotomy between "natural" and "psychological" objects? I think psychological constructs that are well defined and have some clear physiological correlates [e.g., reward system and valence - the limbic system; awareness - the reticular activating system] are fair game for being considered 'natural phenomena'. I don't think there has to be a hard dichotomy. Besides, even in the physical sciences we don't have access to 'things in themselves' anyway; 'electron' and 'proton' are known by virtue of the effect of their intrinsic properties on measurement devices, not by actually physically observing them. I feel this is analogous to the way in which we can't observe 'pleasure' or 'pain'. Of course those constructs are simply less well defined and less concrete, but - in the same way the atomic model was refined after more fine-grained experimentation - I feel the psychological ones can come somewhat closer to that in time. The point is that these fall within the range of natural objects, albeit of a lesser degree of definition, as opposed to something wholly different so as to involve a completely different way of knowing or learning about them.

I have no problem with empiricism. I see the role of philosophy as providing a consistent framework for understanding all human experience. My observation is directed specifically at natural science, which I think is rightly described as focused on physical objects or, if you prefer, physical phenomena.
Aristotle, who I think has made as much progress as anyone toward understanding the nature of consciousness, based his work on experience, but treated our experience as subjects on an equal footing with our experience of physical objects.
I think it can, but it isn't necessary. E.g., your family can forcibly take you to a therapist every week, and you can do the CBT homework, in a case where you don't have the will to do it yourself. This can, over time, lead to a habit of going there. The willpower, which is needed most in the beginning when you need to effectively force yourself out of a habit of self-seclusion and negative self-talk, is not needed as much once it's become a habit to go to the therapist and work on the exercises.

So, can willpower bring oneself out of depression? What's your take on that issue?
You explicitly state in the previous sentence that the separation is [by substance?] mental. How would you categorize 'mental separation' if not as an ontological separation?

I am not a dualist. I hold that human beings are fully natural unities, but that we can, via abstraction, separate various notes of intelligibility found in unified substances. Such separation is mental, not based on ontological separation. As a result, we can maintain a two-subsystem theory of mind without resort to ontological dualism.
Well, I think this is a bit 'low resolution'/unspecific. It's definitely clear that neurophysiological data processing alone isn't sufficient for awareness, but that doesn't mean that a certain kind of neurophysiological processing is not sufficient - that is the bigger argument here.

1. Neurophysiological data processing cannot be the explanatory invariant of our awareness of contents. If A => B, then every case of A entails a case of B. So, if there is any case of neurophysiological data processing which does not result in awareness of the processed data (consciousness), then neurophysiological data processing alone cannot explain awareness. Clearly, we are not aware of all the data we process.
I don't think the first sentence [of the two in bold] leads to the conclusion in the second sentence.

2. All knowledge is a subject-object relation. There is always a knowing subject and a known object. At the beginning of natural science, we abstract the object from the subject -- we choose to attend to physical objects to the exclusion of the mental acts by which the subject knows those objects. In natural science we care about what Ptolemy, Brahe, Galileo, and Hubble saw, not about the act by which the intelligibility of what they saw became actually known. Thus, natural science is, by design, bereft of data and concepts relating to the knowing subject and her acts of awareness. Lacking these data and concepts, it has no way of connecting what it does know of the physical world, including neurophysiology, to the act of awareness. Thus it is logically impossible for natural science, as limited by its Fundamental Abstraction, to explain the act of awareness. Forgetting this is a prime example of Whitehead's Fallacy of Misplaced Concreteness (thinking that what exists only in abstraction is the concrete reality in its fullness).
To be orthogonal is to be completely independent of the other [for one not to be able to directly influence the other]. I gave examples of instances where physical objects result in changes in objects of experience and in awareness itself. The fact that they can influence each other so plainly, I think, gives good credence to the claim that these two things are not orthogonal. And, more importantly, the fact that this is a unidirectional interaction [i.e., that only physical objects can result in changes to mental states, and not the other way around without some sort of physical mediator] gives serious reason to doubt any fundamentality of the mental field - at least to me it's clear it's an emergent phenomenon arising out of fundamental material interactions.

3. The material and intentional aspects of reality are logically orthogonal. That is to say, though they co-occur and interact, they do not share essential, defining notes. Matter is essentially extended and changeable. It is what it is because of intrinsic characteristics. As extended, matter has parts outside of parts, and so is measurable. As changeable, the same matter can take on different forms. As defined by intrinsic characteristics, we need not look beyond a sample to understand its nature.
Intentions do not have these characteristics. They are unextended, having no parts outside of parts. Instead they are indivisible unities. Further, there is no objective means of measuring them. They are not changeable. If you change your intent, you no longer have the same intention, but a different intention. As Franz Brentano noted, an essential characteristic of intentionality is its aboutness, which is to say that intentions involve some target that they are about. We do not just know, will, or hope; we know, will, and hope something. Thus, to fully understand/specify an intention we have to go beyond its intrinsic nature and say what it is about. (To specify a desire, we have to say what is desired.) This is clearly different from what is needed to specify a sample of matter.
4. Intentional realities are information based. What we know, will, desire, etc. is specified by actual, not potential, information. By definition, information is the reduction of (logical) possibility. If a message is transmitted, but not yet fully received, then it is not physical possibility that is reduced in the course of its reception, but logical possibility. As each bit is received, the logical possibility that it could be other than it is, is reduced.
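A worked example may help here (my own numbers, assuming a simple fixed-length binary message, not an example from the thread): each received bit halves the remaining logical possibilities.

```latex
% Logical possibilities remaining after k of n bits have been received:
%   N(k) = 2^{n-k}
% For a 3-bit message: N(0)=8, N(1)=4, N(2)=2, N(3)=1 (fully determined).
\[
  N(k) = 2^{\,n-k}, \qquad n = 3:\; 8 \to 4 \to 2 \to 1.
\]
```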
The explanatory invariant of information is not physical. The same information can be encoded in a panoply of physical forms that have only increased in number with the advance of technology. Thus, information is not physically invariant. So, we have to look beyond physicality to understand information, and so the intentional realities that are essentially dependent on information.
This scenario really highlights a moral paradox. On face value, both deontologists and utilitarians could technically come to this sort of conclusion -- on animal-rights grounds it's important to reduce unnecessary suffering and death; on 'good'-maximization grounds it's important to maximize the longevity/livelihood of animals. But the effects of applying this sort of thing may, in the long run, reduce the fitness of animals by minimizing species diversity. This is, in effect, a circumvention of selection constraints.

Given that humans have acquired the knowledge and technology to genetically modify organisms, which enables us to increase the fitness of a (critically) endangered species, do we have the moral right to do so?
Hmm, I think the way you make sense of willpower is different from mine. I don't think, for example, willpower or motivation is directed at a goal. You can get up 'feeling motivated', for example. In that case the motivation can be described as 'feeling driven' or 'excited/energetic' - like you are determined to get 'things' done; anything that comes in front of you, not necessarily one thing in particular. Willpower, like I said before, seems like a 'capacity' or a general ability to control urges and manage actions - instead of eating a delicious pizza, deciding not to eat it; instead of angrily lashing out at someone, showing restraint. It doesn't seem linked to a goal, whereas intentions are always linked to a specific goal. I can't think of someone saying to themselves, without context or a specific goal, 'I have intention', whereas it makes sense for a person to say 'I have a lot of willpower' or 'I am/feel motivated'.

I'm not quite sure; but there is a distinction I want to draw out between willpower and intentionality. Intentionality stands above willpower in that motivation and willpower are directed at some goal. But here I go on about "depression". When the way out of the bottle is unknown to the fly, then willpower seems like the only thing that intent can resort to. So, yeah: willpower, willpower, willpower.
I'd say yes. The minimum you need for an intention is a decision to complete a goal. What else do you feel you'd need in order for an intention?

Are they?
I'd say willpower.

Then, what is the deciding factor in getting better?
Hmm, I'm unsure what you are linking the directedness of an intention to. I am linking it to the goal of the intention - 'get out of the bottle' in the fly's case, 'get better' in the human case. In that sense all intentions are directed. It doesn't matter what the course of action is or the way in which the goal is realized, only that a person or animal has decided - consciously or unconsciously - to complete a goal.

The intent is still undirected. One doesn't know how the fly gets out of the bottle it is stuck in. Is it a matter of trial and error to try different methods of getting better, or is the outcome of this undirected intentionality, manifest in trial and error, spontaneous? This would make "intent" somewhat synonymous with "willpower". But the two aren't the same.
I think in the depression example the person has decided or intended to get better [to decide and to intend are synonymous] but has not put his decision into action because he lacks the willpower, or has a low capacity to direct his behavior toward his goals -- his ruminative and self-sabotaging habits are too ingrained for him to overcome at the moment.

Yes, but again taking the example of "depression": I have the intent to get better, but it is undirected; otherwise, people would be able to simply will themselves out of that state of mind, which is quite rare. Hence, what do you think about this "undirected" aspect of intent? It seems more like, as you mentioned, an unconscious thing that is in the background and never entirely realized unless some goal is accomplished unknowingly.
The object of an intent is a goal, and to intend to do something is to plan or decide to actively work toward that goal - at least, that's my understanding. I don't see how intent can be passive. It can be unconscious, in the sense that you may not be aware that a part of you intends to do something, but it always involves a decision of action to complete a goal.

What is intent? How much of intent is linked with willpower? Willpower seems like an active process, whereas intent can be passive.
The bold sounds more like an unconscious intention versus willpower. I still don't understand how an intention can be undirected, since it seems by definition to always be directed toward the completion of a goal.

My main question is about undirected intentionality. It seems to be the passive aspect of willpower, like having a goal in the back of one's mind and working toward it.
For example, a deep mood such as depression can mean that someone lacks the willpower to get better. I will stipulate here that this is 'undirected intentionality'.
So, how is intent shaped and formed to become a goal?