Comments

  • Theory of Natural Eternal Consciousness
    Sleep is analogous enough, and I feel it's clear from that analogy that you drift away at some point: you are conscious of your final awake moment and then cease being conscious of anything. It's not like when you wake up you remember being in a state of suspended animation in your last conscious moment before sleep; you just wake up and either have no memory of the night or you remember being in a dream.
  • The rise of ‘long form’ conversational podcasts
    Maybe I should provide some more public examples in case no one is aware. It's looking like this episode in particular is becoming more and more sensationalized. 1500-dollar tickets?!
    https://www.chronicle.com/interactives/20190404-peterson

    Another was a three-part series with Sam, Jordan, and guests including Eric Weinstein and Douglas Murray, hosted by Pangburn Philosophy:
    https://www.youtube.com/watch?v=jey_CzIOfYE
  • Frege on Spinoza's "God"
    This is just pantheism. In a monistic system like Spinoza's, reality would trivially satisfy the criteria for an impersonal God-referent, in that it is self-subsistent and simultaneously the source and creator of things.
  • Killing a Billion
    ^ The problem is, you are forcing the situation in a way that excludes any option of acting other than a prejudicial one. But -in actuality- you always have the option of choosing arbitrarily or without prejudice when in a position of choice.

    Secondly, I agree [and I think few wouldn't] that we're all capable of committing horrific acts -if that's what you're trying to show- the Milgram experiment has demonstrated this. There is a marked difference between reluctantly and begrudgingly doing horrid acts in a case of forced choice for some 'noble good' -as in this scenario- and doing them on the basis of heinous intentions, for no reasonably positive good, or for a reason that unnecessarily and carelessly disregards human value. Whatever reason you give for something like taking a life, it had better be a forced situation with zero alternative, and the reason for the choice had better be something ethically vetted by more than one person of diverse background, and something which respects human value.
  • Killing a Billion
    Well, it’s an inescapably horrid predicament, and I think taking a lottery approach doesn’t do away with that fact. Maybe it even adds issues, in that it distributes the guilt, since no particular person is the one to press the button.
  • Killing a Billion
    Ah, I was taking lottery in the more concrete sense of randomly selecting particular individuals to die, vs. finding ways to randomize the process in general - e.g. by releasing a non-selective agent or some other means. The reason I prefer randomness is that it is not influenced by prejudice, and it respects our ‘lottery of birth’ situation. I just simply think it’s a fairer system. Choosing by criteria can quickly become ethically complicated, I think, particularly after exhausting those whom we all collectively agree are ‘bad’. 1 billion is a lot.
  • Killing a Billion
    Ask for volunteers -there are already plenty of people willing to undergo assisted suicide and euthanasia, or I’m sure others who’ve lived long enough to feel content leaving, especially for the sake of a noble cause like preserving humanity- just aggregate all of these people globally, and if there’s a remainder, blindly release some sort of lethal agent to which everyone, including you as a leader, is susceptible.

    The point is to make the decision explicit, and as impartial, responsibility-diffuse, and random as possible.
  • Which one outweighs the other Ethically?
    So will the man who is in discontentment, who has changed over those 30 years, be the same person who had those traits in the last 30 years?

    The interesting part is whether that transition of 30 years will change that man entirely, such that perhaps he should not be held responsible for his past traits. If he is content with his inhumane actions and immoral traits, while the people who criticize him are not, is that ethically moral? Is that good?
    Well, because he has to live with those memories, even if we perceive him as a different person, he still is held accountable by his own psychology- his punishment is the resultant guilt and regret.

    Regarding the second point, I think it's clear ethics is in part a normative practice. We all have at least some basic, shared implicit interests which are typically protected under a social contract. I'm thoroughly convinced by the Rawlsian approach to this and think you can, in fact, arrive at a convergence in norms by virtue of these shared common interests. If you break one of these norms or live in a way counter to them, even if it doesn't cause you strife or you don't have a problem with it, what you're doing would be an issue and you should change.
  • Which one outweighs the other Ethically?
    I think the question is: will my future self, in 30 years, look back and scream, or will he be content? If your ideal self is not in line with how you actually are, and those people are criticizing you for traits which you don't really think you should have either, then you had better change them.
  • The Foolishness Of Political Correctness
    ↪aporiap Interesting. I suppose the etiquette that I would advocate is being honest. If someone expresses an offensive view, then the correct response is to refute it rather than try to censor it. Generally the gross stereotyping of which you speak is wrong and can be refuted. However artificial blindness is not the solution. Artificial blindness and bigotry feed into one another. A man notices a social phenomenon and proposes a typically false explanation; the academic says "this is a stereotype" or "this is a generalization," the man looks at it again and says that it is definitely going on, so he decides that the academic is full of crap and goes on with his bigoted explanation. The real way to deal with this is to realize that, if something exists at a rate greater than chance, then there must be a reason for it, although it is usually not the reason that you expect. So the generalizations and stereotypes should be used as grounds for further research to find an actual explanation. -- Ilya B Shambat

    I don't think mere honesty is etiquette. Just imagine this parent-child example: you've developed a bad smoking habit which you regret, and you're one day caught by your strict, conservative mother. Scenario 1: she lambasts you, telling you that if you continue you will amount to nothing, calling you names, and beating you. Scenario 2: your mother sits you down, empathizes with you about how understandably enjoyable it is and how difficult it's been for you, and then tells you, in a euphemistic yet stern and clear manner, that this is not good for you. These are clearly two ways of relaying the same piece of honest information.

    What I'm saying is that PC shouldn't just be replaced by honesty delivered in whatever manner a person feels... there should be an etiquette to disagreeability. Too many times I've seen condescension, rigid belief, and irrationality accompany anti-PCism, which, to me, is just unnecessarily divisive and toxic. It's not everyone, but it's a strong enough current for me to notice.
  • The source of suffering is desire?
    All of these schemas you mentioned are not needed if people were not born. These are after-the-fact positions. A non-existent entity doesn't need to manage passions or self-actualize if not born. To be born in order to do these things would be to use someone for this agenda, which seems odd to me. Like a journey that is inevitable for someone who didn't in fact have to be forced onto that journey.
    I provided the alternatives to demonstrate that negative utilitarianism is itself just one of many theories, and that the antinatalist position depends on it. If bad does not necessarily equal suffering, then you cannot simply claim that we are obligated to prevent suffering. What makes us obligated here is the fact that suffering is considered bad. The implicit premises are that (1) we prevent something because it is bad and (2) suffering is bad. If suffering is not itself intrinsically bad, there's no obligation.

    Again, this doesn't make sense in the light that no one inevitably has to exist to experience anything in the first place. This is all after-the-fact of already being procreated and then trying to find cultural values to buy into to make do. First the schema needs to be agreed to be right by the individual, and then it is carried forthwith. Of course various individual personalities and temperaments may find these schemas not for them and switch to other ones. Or, the person simply falls into modern default mode- cobbling together the various cultural environs and values immediately at hand (pragmatic hedonism if you will the modern "default mode" of most).
    Firstly, as I've said before, I think you're discounting that negative hedonic utilitarianism [the basis for the whole antinatalist position] is itself a cultural construct. You'd be committing a naturalistic fallacy if you think that, just because suffering is uncomfortable, it is therefore bad, and thus that an unborn person is better off in that state because it prevents him from suffering.

    Secondly, my point there was to counter the intuition-based argument for the asymmetry of suffering/pleasure. It seems the only basis is the intuition that preventing suffering is an obligation while promoting pleasure is not, but I am pointing out that there are people whose intuition is that pleasure is something you should promote, and who feel a kind of compassion or sympathy for people who aren't in that state.

    Also, this projected feeling of "missing out" for the as yet not existing person, can also be taken to absurd lengths. If taken to the logical extreme then we can say the billions and trillions of yet to be born people are missing out. But that is silly. Even more absurd would be that it is people's duty to those billions of non-existent people to keep having more people to reduce those non-existent people's "pain" of not existing and missing out. Obviously that makes no sense.
    I can take the 'obligation to prevent suffering' to absurd lengths as well. Why do anything at all, knowing that moving from my comfortable bed now will inevitably lead to discomfort [suffering]? Why walk down 5th avenue or drive a car when you are both putting yourself in a less relaxed state and making yourself at risk for being hurt in an accident or hit by a meteor? Sure they can lead to pleasures, but this isn't necessary and we are nevertheless obligated to proactively prevent suffering whenever possible, so in fact we really shouldn't even leave the house.
  • The source of suffering is desire?

    His argument takes the negative utilitarian idea extremely seriously. That is to say, harm is what matters, not pleasure. To restate this in a normative structure: potential parents are not obligated to bring someone who experiences joy/pleasure/positive value into the world. However, potential parents are obligated to prevent inevitable harms from occurring. One of his arguments comes from intuition. We don't usually feel pangs of compassionate sadness for the aliens not born to experience pleasure on a faraway, barren planet. We would most likely feel compassionate sadness, on the other hand, if we learned that aliens on a faraway planet were born and were suffering. Suffering seems to matter more than bringing about pleasure in the realm of ethical decision-making. When prevention of all suffering is a guarantee and no actual person loses out on pleasure, this seems a win/win scenario.
    I think my main problem with the argument is that bad/good ascriptions are not necessarily applicable to suffering or pleasure in themselves. Badness or goodness are separable from hedonic states. They should be defined in reference to some goal, or [in the general human sense] with respect to whether something leads one closer to 'well-being' or leads them away from it. That makes intuitive sense from the utilitarian position [the good is a goal toward which we reach; things are good if they result in the good], even in the case of hedonic utilitarianism [which I assume is Benatar's and your position], where what's good is anything that minimizes suffering [your goal]. But that's just one utilitarian theory. Benatar's argument would fail if you take anything else as 'the good', which many people do [Spinoza's good is attaining freedom by managing the passions; Maslow's is self-actualization; societal stability; etc.]. And even from the hedonic position, I simply disagree with his contention that there's an asymmetry. I actually think many people do consider the lack of an ability to experience pleasure [hell, even to experience at all] a wrong - it's what motivates my friend to get on my ass about not putting myself outside my comfort zone - because apparently I'm missing out. He [and other friends] feel obligated to push and challenge me; I'm sure you've had friends do the same. They are clearly operating under a utilitarian assumption - that I'm not experiencing as much pleasure as I could because I'm limiting myself... a potential human would be limited in just the same way. Would you not say they intuitively feel missing out is a wrong in itself? If so, then how is intuition alone enough to justify the asymmetry?
  • Proof that something can never come from nothing
    I don’t even think there’s an empirical equivalent for nothing. Vacuum space is not empty. Whether there are things like quantum fields or whether they’re just modeling constructs, there’s an intrinsic energetic property to vacuum space which makes it not mere nothing in the philosophical sense. So maybe, for all intents and purposes, this argument is equivalent to an argument negating creation from spaghetti soup.
  • The source of suffering is desire?
    The main point is that in the procreational decision, there is an asymmetry as to the absence of an actual person in regards to an absence of suffering and pleasure. It is always good that someone did not suffer, even if there is no actual person to be around to know this or enjoy the not suffering. It is not bad (or good) if someone does not experience pleasure, unless there was an actual person who was around to be deprived.
    Well, I think you mean uselessly or needlessly suffer here. I do not think people would agree with the bolded claim if that suffering resulted in a net positive. If you restrict it to needless suffering, then you would not arrive at an antinatalist position, unless you're in a situation where you can guarantee your child will uselessly suffer [e.g. you're pregnant in a concentration camp with no foreseeable chance of escape].

    Also, just in general, forcing someone else into existence to experience some form of adversity to get stronger is still wrong. It's like forcing someone into an obstacle course they did not ask for, and can never leave without killing themselves. Well, I guess it's okay to stay and try and do the best, but it was not necessarily good to give that obstacle course in the first place. No one needs to do anything prior to birth, being that, as you pointed out, there is no actual person before birth who needed to go through life in the first place, good, bad, or ugly. By not having the person, it is no harm, no foul.
    The way you're framing it makes it sound wrong. Nobody gives birth to force someone to experience adversity; this is different from the [inevitable] fact that they will face adversity. And having the knowledge that your child will face adversity should be placed on equal value-ground with having the knowledge that your child, in existing, will experience pleasure. Otherwise there's a double standard.

    Regarding the second bolded point, the problem is that antinatalism isn't neutral with respect to having a child; it's defined as a negative stance on having children. This implies one is preferentially focusing on the negative aspects of living, as opposed to weighing the negatives and positives equally. The fact that there are people who choose to live for the sheer enjoyment of it, who would prefer to live even in spite of their suffering, and some of whom even value their suffering, implies you cannot assume a child's stance on the matter, so suffering shouldn't be used as a pretext to prefer not procreating [unless you're in a situation that guarantees they'll undergo useless suffering].
  • The Foolishness Of Political Correctness
    I think you have to be careful with this. You mention in the OP that unchecked rudeness is not what you advocate, but I've seen it all too often, in social circles where PC is devalued, that this enables -at the very least- non-constructive criticism and gross, blatant stereotyping. Sure, you get honesty in those circles, but you do not get respect. Let me be clear that I'm not defending political correctness, but I disagree that you can simply attack it without providing an alternative etiquette for expressing plainly honest beliefs. I think there's a way to promote that, or perhaps enable it after setting the context for potentially stinging statements. Maybe it's enough to reiterate the importance of giving honest opinions, or maybe the rule is PC in certain contexts - the workplace, sensitive social gatherings [funerals] - and non-PC in others. The point is you should offer an alternative.
  • The source of suffering is desire?
    That's one of the reasons I'm an antinatalist though. Why put someone through the struggle (even for some sort of enlightenment) if they didn't have to go through it in the first place?
    What makes you think it is not necessary to go through? I mean, fundamentally, the sort of satisfaction and enjoyment you get from enduring a struggle is made what it is by the suffering. If you were given a Nobel Prize for doing nothing at all, you would be missing out on something that Einstein wouldn't have - the satisfaction becomes not just enhanced but partly made up of feelings of self-validation [i.e. that you really were able to do it], self-satisfaction, and accomplishment. And these feelings don't simply get forgotten; they're embedded in the entire experience, which is impressed in memory and accessible in mind.

    Secondly, I really am struggling to understand the antinatalist premise. An unborn baby does not feel anything. It will never know what it feels like to not have to go through pain. Also, not every individual evaluates suffering and non-suffering like you do... it's not some objective hedonic calculus; every individual makes a determination of the worthiness of living on their own. For some [if not all, barring antinatalists, the severely depressed, and the oppressed] it is even unquantifiably valuable to live in spite of suffering. So the underlying argument for antinatalism seems just based on impossible speculation.

    And I think you are also completely discounting the fact that pleasure and value are separable concepts. Something doesn't need to be pleasurable to be a valuable or meaningful experience. I mean, I find my entire college experience to have been incredibly formative and meaningful... sure, I would change certain things, but I would never choose not to go through school just because it sucked [and I did suffer] to study. I actually, really, would choose the opportunity to go through it again, because it made me who I am.
  • Infinite Regression
    Say what? Nothing has no properties.
    It is an absence.
  • Being and Death
    ^To the second poster: in what way have you experienced anything before being born? The closest analogies during life, coma and sleep, involve no experience. You fall asleep or go unconscious and suddenly you hear your alarm ringing. No sense of time passing, no self- or external awareness. That’s how it is for me. I don’t see any reason to assume consciousness comes on gradually; in every case where it’s lost, it’s clear it simply switches back on.

    To OP, I think you are right. Empirically it makes sense to assume we’d only ever be aware of experiencing, just simply by definition.
  • Teleological Nonsense

    I don't think that the idea that agents act for ends requires that they only act for one end.

    Also, I think part being a free agent is our ability to confer new value by re-purposing objects and capabilities. It is part of what Aquinas calls our participation in Divine Providence by reason. That is why I object to a narrow natural law ethics that does not allow for the legitimate creation of new ends.
    Well, conferring new value via re-purposing is something different from intrinsic purpose/teleology. Are you implying here that the ends of things [e.g. the end of an enzyme, to catalyze a reaction; the end of a seed, to become a plant] are human-designated?


    Of course. That is one reason free will is possible. There are multiple paths to human self-realization.
    I don't understand this, since we are speaking about objects here and not people. I also think, if anything, a teleological framework would necessarily be limiting compared to a teleologically blank humanity, since it rigidly identifies some set of ends as natural to an object/person. Humans wouldn't have the freedom not to self-realize if their nature were to self-realize, for example.

    This has to do with physical determinism vs. intentional freedom. If no free agent is involved, physical systems have only a single immanent line of action and so act deterministically. If there are agents able to conceive alternative lines of action, then multiple lines of action are immanent in the agents, and so we need not have deterministic time development.
    I'm unsure what free will has to do with teleology. Secondly, this is a human-specific thing; free will doesn't have anything to do with physical systems, which cannot choose actions because they lack brains.

    Are you thinking that the existence of ends entails determinism? I don't.
    Well my point in that excerpt was to just highlight that ends are not intrinsic to objects alone. A gene, for example, can NOT give rise to a protein all by itself, despite the function [or end] of a gene being to give rise to a protein. It's the gene plus the cellular machinery which gives rise to a protein.

    But I think teleology definitely entails determinism or at least 'probabilistic determinism' [given initial conditions + context A --> 80% chance of P]. How else would ends be reproducibly met?
  • Teleological Nonsense
    Dfpolis, thanks for this OP. I still owe you a response in the other thread, I will get to it soon.

    I have qualms with the idea of teleology being intrinsic to objects [this is a sub-issue, I suppose, not directly relevant to the OP, though it could be construed as relevant, since you are using teleology in the Aristotelian sense of final cause, and Aristotle makes the final cause intrinsic to objects; either way, I don't have an issue in principle with teleological explanations if they are qualified by context]. For the simple reason that it just seems short-sighted to ascribe one specific goal [or even a set of goals] to a physical object or biological entity [it's something like functional fixedness]. Say there's a blanket... sure, it was made for a particular purpose - to keep a person warm when it's cold. But in the summer, when you're on a picnic, that thing is just as well a mat for food... or if you've gotten dirt on your toe, it can be used to clean the dirt off, or if there's an armed robber in your house, to hide your belongings or yourself.

    Secondly, different objects can perform the same function -- sure, there exists a particular receptor for chemical A, but there's also this other receptor B [which tends to bind chemical B] that can also bind chemical A. When you take out receptor A, receptor B can stand in for it well enough to rescue the deficiency. You can use a heater instead of a blanket to keep you warm. Or a sweater.

    The point is that goals seem separable from objects [1] and objects seem separable from specific goals [2], which I think points to a relational dependency of goals. It's not the object that intrinsically has an end or goal; it's the context, with the object and their relationships, that makes the object repeatedly reach a particular end.

    I think I'd be fine with the idea of ends if they're restricted to a given contextual relationship [given the context: the setting of cold weather, the man who is cold, the blanket in the room -- the blanket will reach end of keeping man warm].
  • Monism
    My critique - but it's too shoddy to be dignified by that title - is a 'critique' of any philosophical operation that tries to find some way of characterizing 'everything.' I think that any feature of the world is capable of having an ontological status applied to it.
    I read through the OP again just to clarify why this is an issue [finding some way of characterizing everything], and it seems the reason you focus on monism here is essentially the plainly clear plurality we experience. But I also think it's plainly clear that commonalities underpin pluralities, and these form the basis for the search for unifying principles and substances.

    Also, I don't think it's an arbitrary process - the feature must be constrained by what it means for something to be an ontological substance, and I think people generally agree [or at least there's a convergence of thought] that fundamentality and invariance are the defining features of an ontological substance. It could be that two things have those properties, or maybe three or four, or just one. I think it's then a matter of empirical study to limit the possibilities.
  • Monism
    I think you're right - and that anyone would agree that both were composed of legos. Both the bridge and the man are decomposable, in the sense you mentioned. But what is the face-value distinction - the 'looking different' - composed of? And is it decomposable? What is the 'ephemeral'?


    They look different because their atoms are arranged differently and are bonded in different ways. I don't think relations [e.g. spatial and bonding relations] are reducible, but I do think they are entirely dependent on and caused by the relata, which must share properties in order to interact. If you don't have two atoms with the property of charge, you cannot have an electrostatic interaction. If you don't have two atoms with spatial locations, you cannot locate them relative to each other or to some external reference. Secondly, relations don't outlive the interactors - it's the interactors [as bundles of properties] which can form relations with other entities. I believe it's fundamental properties which cause relations. This makes relations not fundamental.

    Regarding 'ephemeralness' - one thing I am implicitly assuming in my understanding of substance is the importance of invariance and generality. Aside from being non-decomposable, a substance should presumably be present whenever and wherever there are objects -- it should be time- and location-invariant. Specific relations are neither time- nor location-invariant, and so they cannot be fundamental substances. That isn't to say they don't exist or don't play an important role [they differentiate things, for crying out loud].
  • Monism

    Here's the philosophical 'gotcha'. But I think it's legitimate.

    Trees and bananas are, obviously, composed of physical particles, differently. No 'face-value' differentiation required. So how do you decompose the difference between the face-value difference of trees and bananas - and trees and bananas with no face-value difference?

    You seem savvy, so I'm sure you anticipate this kind of thing. Nevertheless - how?
    I don't think their different arrangement should matter that much. Spatial and bonding relations, which serve as the basis for difference between objects, are not fundamental or substantial; they are just ephemeral states. E.g. you have a bunch of Lego blocks and build a bridge and a man out of them. Sure, they look different -on face value- and that's because of how you've arranged the blocks, but I don't think anyone would say the man or the bridge are their own separate substances; no, the substance is the thing which is invariant and composes them.

    What I’m trying to say is that the difference between saying there’s a difference and saying there is no difference is that, in one case, you’re giving ontological status or significance to relations between parts, and in the other you are not - you only give ontological status to the parts themselves.

    If you say that is a conceptual operation - applying ontological status to a feature of the world - then your critique would apply to any metaphysical claim.
  • Monism

    I agree with this. The 'fact of the matter' is surely independent of conceptualization. But to determine whether our metaphysical ideas correspond with reality (if such a thing is possible), we have to figure out what we're saying. What does 'reality is composed of one substance' mean?

    What I'm suggesting is that, if you break down the concept, you see that it's not about the world at all. It's a conceptual operation that has overstepped its bounds. My interest in this topic corresponds exactly to my feeling that 'the fact of the matter' shouldn't depend on conceptualization.

    (We could also say the fact of the matter about whether reality is large shouldn't depend on conceptualization. That's true, I think. But it's not really clear what it would mean. It seems to be mixing something up. )
    I am unsure how it's just a conceptual operation. To say reality is composed of one substance means something: you assert that all objects are decomposable or analyzable into fundamental units which, at a certain level, are indistinguishable and interchangeable with one another. Take a carbon atom from Joe and a carbon atom from the tree outside your window and, if you look at them, you will find no fundamental difference, no individual footprint. Take an electron from carbon atom 1 and you can replace it with an electron from carbon atom 2.

    In a multi-substance ontology, the parts are not interchangeable; you lose something by taking out one 'non-matter' part [e.g. a 'spirit'] and replacing it with a matter part.

    What does 'fundamental' mean?
    Is face-value difference fundamental?
    Why or why not?
    Fundamental means not decomposable. A quark or boson; more fundamentally, the properties which define 'quark' or 'boson'. These properties [charge, mass, etc.] would be fundamental.

    Face-value difference is not fundamental. On face value, a tree and a banana look completely different - different sizes, shapes, colors. But the components of these objects are fundamentally the same - physical particles.
  • Monism


    I disagree with the OP. That a defense of monism has to start with the reconciliation of some ontic plurality doesn't mean reality itself necessitates plurality, it just means it necessitates non-monistic metaphysical systems which stand in contrast to and give sense to the term 'monism'. But a metaphysical system is not identical to reality, it's just a set of beliefs. The fact of the matter about whether reality is composed of one substance or not shouldn't depend on conceptualizations.

    Also, I am unsure why one can't simply 'fall' for pluralism by mistakenly raising a category or some other distinction to the ontic level - an otherwise non-ontic distinction becomes a distinction between substances through face-value observation of difference. Why can't it simply become apparent, upon analysis, that the face-value difference is not fundamental or ontic?
  • Intentional vs. Material Reality and the Hard Problem

    No, they are hypothetical. Your Wikipedia reference says "The grandmother cell, sometimes called the "Jennifer Aniston neuron," is a hypothetical neuron that represents a complex but specific concept or object."

    The support cited in the article is behavioral (which is to say physical), with no intentional component. I am happy to agree that behavioral events require the firing of specific neural complexes. The problem is, a concept is not a behavior, but the awareness of well-defined notes of intelligibility. The article offers no evidence of awareness of the requisite sort.
    I jumped the gun by saying they are empirically supported. But as you can see I didn't conjure them up! The more accurate thing to say is that there are neurons in higher-level brain regions which fire selectively to seemingly abstract stimuli. Whether that indicates they fire in response to a given 'concept', or in response to a given feature shared between all those stimuli [e.g. the presence of 'almond-shaped' eyes], or some other feature coincidentally related to the 'category' of the stimuli presented, is not known.

    Christof Koch, Quiroga and Fried have shown some interesting findings though.

    Also, who's to say that when these higher-level cells fire, they aren't, by virtue of their connections, contributing to the global 'network' of cells which are active and mediating conscious awareness, such that the entire system becomes 'aware' of the image or face or higher-level feature? That seems to account for the intentional component, no?
  • Intentional vs. Material Reality and the Hard Problem

    It is Moderate Realism, which sees universal concepts grounded in the objective character of their actual and potential instances rather than in Platonic Ideas or Neoplatonic Exemplars. Nominalism and conceptualism see universals as categories arbitrarily imposed by individual fiat or social convention.
    So, something like Aristotelian realism about universals? Well, that would make them more than a mere insignificant mental abstraction; on your take they are real things in the world, albeit inextricably linked to their particulars. I'm not familiar with terms like 'notes of comprehension' or 'essential notes'. You say that the logical distinction is predicated on the fact that intentional objects like concepts differ from materiality not ontologically but by virtue of not sharing these notes of comprehension. Can you unpack this term?

    No. Notice that we run all the original instructions. Any program that simply runs an algorithm runs it completely. So, your 'atmospheric sampler' program does everything needed to complete its computation.
    I mentioned in the post that it poses a problem for programs which require continual looping or continual sampling. In this instance the program would cease being an atmospheric sampler if it lost the capability of iteratively looping, because it would then lose the capability to sample [i.e. it would cease being a sampler]. As soon as the instruction is removed, it ceases being a sampler; once the instruction is re-introduced, it suddenly becomes a sampler again [because it now has the capacity to sample]. Even though it runs through the entire program in the thought experiment, during the period when the instruction is removed the program is in a state where it no longer has the looping/iterative-sampling capacity, hence it is not a sampler during that period.
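    To make the scenario concrete, here is a minimal, hypothetical Python sketch of such a looping sampler (the `read_sensor` function is an invented stand-in for real instrumentation): the `while` loop and its `yield` are exactly the 'instruction' whose removal would stop the program being a sampler.

```python
import random

def make_sampler(read_sensor):
    """Return a generator that 'samples the atmosphere' indefinitely.

    read_sensor is a hypothetical stand-in for real instrumentation;
    here it is just a function returning a number.
    """
    def sampler():
        while True:              # the iterative loop that makes this a sampler
            yield read_sensor()  # the 'sampling instruction'
    return sampler()

# A fake sensor -- purely illustrative.
def sensor():
    return random.uniform(0.0, 1.0)

s = make_sampler(sensor)
readings = [next(s) for _ in range(5)]
print(len(readings))  # five samples taken; without the loop, none would be
```

    Delete the loop body and the program retains every other instruction yet samples nothing; restore it and the sampling capacity returns, which is the point at issue.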

    The problem is, we have no reason to assume that the generation of consciousness is algorithmic. Algorithms solve mathematical problems -- ones that can be presented by measured values or numerically encoded relations. We have no such representation of consciousness. Also, data processing operates on representations of reality, it does not operate on the reality represented. So, even if we had a representation of consciousness, we would not have consciousness.
    What do you mean they solve only mathematical problems? There are reinforcement learning algorithms out now which can learn your buying and internet-surfing habits and suggest adverts based on those preferences. There are learning algorithms which, from scratch and without hard-coded instruction, can defeat players at high-level strategy games, without being handed an explicitly mathematical problem.

    Also I don't get the point about why operating on reality representations somehow makes data-processing unable to be itself conscious. The kind of data-processing going on in the brain is identical to the consciousness in my account. It's either that or the thing doing the data processing [i.e. the brain] which is [has the property of] consciousness by virtue of the data processing.

    In the computational theory of mind, consciousness is supposed to be an emergent phenomenon resulting from sufficiently complex data processing of the right sort. This emergence could be a result of actually running the program, or it could be the result of the mere presence of the code. If it is a result of running the program, it can't be the result of running only a part of the program, for if the part we ran caused consciousness, then it would be a shorter program, contradicting our assumption. So, consciousness can only occur once the program has completed -- but then it is not running, which means that an inoperative program causes consciousness.
    These choices are not exhaustive. Take an algorithm which plays movies, for instance. Any one iteration of the loop outputs one frame of the movie. The movie, here, is made by viewing the frames in sequential order. It's okay for some of the frames to be skipped, because the viewer can infer the scene from the adjacent frames. In this instance the program is a movie player neither because of the mere presence of the instructions nor because of the output of one or another frame [be it the middle frame or the last frame]. Nor could it result from only some of the instructions running: it requires them all to run properly for at least most [a somewhat arbitrary, viewer-dependent number] of the iterations, so that enough frames are output for the viewer to see some semblance of a movie. So it is not the output of one loop that makes the program a movie player, nor the output of some pre-specified number of sequential iterations; instead it is a combination of a working program and some number of semi-arbitrary, not-necessarily-sequential outputs. This is not even a far-out example; it's easy to imagine a simple, early American projector which operates by taking film-strip. Perhaps sections of the film-strip are damaged, which leads to inadequate projection of those frames. Would you say this projector is not a movie player if you took out one of its parts before it reached the step where it's needed, and that it then impossibly becomes a movie player once the part is re-introduced right before it was needed?
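    The movie-player idea can be sketched in a few lines of hypothetical Python (the frame list and the 'damaged' set are invented for illustration): the program counts as a movie player when enough frames are emitted, not in virtue of any single iteration running.

```python
def play(frames, damaged=frozenset()):
    """Project the frames of a 'movie', skipping damaged ones.

    Whether the output still counts as 'a movie' is a viewer-relative
    threshold, not a property of any single iteration.
    """
    shown = []
    for i, frame in enumerate(frames):
        if i in damaged:        # a damaged section of the film-strip
            continue
        shown.append(frame)     # 'projecting' this frame
    return shown

film = [f"frame-{i}" for i in range(10)]
shown = play(film, damaged={3, 7})
print(len(shown))  # 8 of 10 frames; adjacent frames let a viewer infer the rest
```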

    We are left with the far less likely scenario in which the mere presence of the code, running or not, causes consciousness. First, the presence of inoperative code is not data processing, but the specification of data processing. Second, because the code can be embodied in any number of ways, the means by which it effects consciousness cannot be physical. But, if it can't be physical, and it's not data processing, what is the supposed cause?
    I don't think the multiple-realization argument holds here. It could just be something like a case of convergent evolution, where you have different configurations independently giving rise to the same phenomenon, in this case consciousness. E.g. a cathode-ray-tube TV, a digital TV, and some other TV operate under different mechanisms and yet result in the same output phenomenon: an image on a screen.

    No, not at all. It only depends on the theorem that all finite state machines can be represented by Turing machines. If we are dealing with data processing per se, the Turing model is an adequate representation. If we need more than the Turing machine model, we are not dealing with data processing alone, but with some physical property of the machine.

    I agree that the brain uses parallel processing, and might not be representable as a finite state machine. Since it is continually "rewiring" itself, its number of states may change over time, and since its processing is not digital, its states may be more continuous than discrete. So, I am not arguing that the brain is a finite state machine. I am arguing against those who so model it in the computational theory of mind.
    I am not in the field of computer science, but from just this site I can see there are at least three different kinds of abstract computational models. Is it true that physical properties of the machine are necessary for all the other models described? Even if consciousness required certain physical features of hardware, why would that matter for the argument, since your ultimate goal is not to argue for the necessity of certain physical properties for consciousness but instead for (1) consciousness as being fundamentally intentional and (2) intentionality as fundamentally distinct from [albeit co-present with] materiality? I actually think my personal view is not that different from yours, but I don't think of intentionality as so distinct as to not be realized by [or be a fundamental property of] the activity of the physical substrate. My view is essentially that of Searle, except I don't think consciousness is limited to biological systems.

    This assumes facts not in evidence. David Chalmers calls this the "Hard Problem" because not only do we have no model in which a conglomerate of neurons operate to produce consciousness, but we have no progress toward such a model. Daniel Dennett argues at length in Consciousness Explained that no naturalistic model of consciousness is possible.
    I don't understand why a neuron not being conscious but a collection of neurons being conscious automatically leads to the hard problem. Searle provides a clear, intuitive solution here, in which it's an emergent property of a physical system in the same way viscosity or surface tension are emergent from lower-level interactions: it's the interactions [electrostatic attraction/repulsion] which summatively result in an emergent phenomenon [surface tension]. In this case it's the relations between the parts which produce the phenomenon, and these are not reducible to simply the parts. I'd imagine there's some way you can account for consciousness by the interactions of the component neurons in the system.

    I also haven't read Dennett's arguments so I can't comment on them.

    It is also clear that a single physical state can be the basis for more than one intentional state at the same time. For example, the same neural representation encodes both my seeing the cat and the cat modifying my retinal state.
    Well, the retinal state is encoded by a different set of cells than the intentional state of 'seeing the cat'. The latter would be encoded by neurons within a higher-level layer of cells [i.e. cells which receive iteratively processed input from lower-level cells], whereas the raw visual information is encoded in the retinal cells and the immediately downstream area of early visual cortex. You could have two different 'intentional states' encoded by different layers of the brain or different sets of interacting cells. The brain processes both in parallel and sequentially.

    "Dichotomy" implies a clean cut, an either-or. I am not doing that. I see the mind, and the psychology that describes it, as involving two interacting subsystems: a neurophysical data processing subsystem (the brain) and an intentional subsystem which is informed by, and exerts a degree of control over, it (intellect and will). Both subsystems are fully natural.

    There is, however, a polarity between objects and the subjects that are aware of them.
    Okay, but you seem to imply in some statements that the intentional is not determined by or realized by activity of the brain. I think this is the only difference we have. I would say an intentional state can be understood as a phenomenon that is caused by, or emerges from, a certain kind of activity pattern of the brain.

    Please rethink this. Kant was bullheaded in his opposition to Hume's thesis that there is no intrinsic necessity to time ordered causality. As a result he sent philosophy off on a tangent from which it is yet to fully recover.

    The object being known by the subject is identically the subject knowing the object. As a result of this identity there is no room for any "epistemic gap." Phenomena are not separate from noumena. They are the means by which noumena reveal themselves to us.

    We have access to reality. If we did not, nothing could affect us. It is just that our access is limited. All human knowledge consists in projections (dimensionally diminished mappings) of reality. We know that the object can do what it is doing to us. We do not know all the other things it can do.

    We observe everything by its effects. It is just that some observations are more mediated than others.
    I'm not entirely familiar with the Kantian thesis here, but I think the fact that our physical models [and the entities within the models] change with updated evidence, and the fact that fundamental objects seem to hold contradictory properties [wave-particle nature], imply that theoretical entities like the 'atom' are constructs. Of course the measurables are real, and so are their relations, which are characterized in equations; but the actual entities may just be theoretical.

    This is very confused. People have learned about themselves by experiencing their own subjectivity from time immemorial. How do we know we are conscious? Surely not by observations of our physical effects. Rather, we know our subjective powers because we experience ourselves knowing, willing, hoping, believing and so on.
    I was trying to say that introspection is not the only way to get knowledge of conscious experience. I'm saying it will be possible [one day] to scan someone's brain, decode some of their mental contents and figure out what they are feeling or thinking.
  • Intentional vs. Material Reality and the Hard Problem
    A key position in the Age of Reason was the rejection of "occult" properties, but here you are positing "concept cells" as a cover for abject ignorance of how any purely material structure can explain the referential nature of intentional realities. Where are these concept cells? How do they work? They are as irrational and ungrounded as the assumption of homunculi.
    They're empirically supported, I didn't conjure these things up: https://en.m.wikipedia.org/wiki/Grandmother_cell . It's a set of spatially distributed, sparsely firing neurons which activate when a particular category of object [faces, hands, etc.] is presented, regardless of the form of perception (whether the name 'face' is heard, an image of a face is seen, or a face is felt).

    I'll respond to the rest soon
  • Intentional vs. Material Reality and the Hard Problem

    That is not what I explained that I mean by concepts being orthogonal. I explicitly said, "... logically orthogonal. That is to say, that, though they co-occur and interact, they do not share essential, defining notes." Having non-overlapping sets of defining notes makes concepts orthogonal -- not the consideration of interactions in their instances, which is a contingent matter to be resolved by reflecting on experience.
    I think my issue stems from not being able to separate 'ontological independence' from logical orthogonality. To assert that concepts and intentions exist and are distinct from their material instances, and yet then say these things are somehow still of the same ontological type [i.e. physical] as physical objects, seems difficult to reconcile [what makes them physical if they're not composed of or caused by physical material?]. It just seems like an unsubstantiated assertion that they are ontologically the same.

    Once you make the implicit assumption that they are ontologically distinct, it becomes clear that any interaction between intentional states and physical substance serves as a counterargument to their being distinct from materiality [since material and nonmaterial would have no common fundamental properties (charge, mass, etc.) with which to interact with each other]. The only alternative, for me, is that they are either nonexistent [complete fictions] or something that has the same essential basis as materiality and is somehow emergent from, and completely dependent on, lower-level physical activities.

    Concepts are abstractions and do not "interact." All that concepts do (their whole being) is refer to their actual and potential instances. Still, it is clear to all but the most obdurate ideologues that intentionality can inform material states. Whenever we voice a concept, when we speak of our intentions, our speech acts are informed by intentional states. Conversely, in our awareness of sensory contents, material states inform the resulting intentional states. So, the fact that intentional and material abstractions are orthogonal does not prevent material and intentional states from interacting.
    Intentional states inform physical states, but I mentioned before [and I think this is important] that this is always by virtue of a physical-material mechanism. There are activity patterns in higher-level areas of the brain which trickle down, via some series of physical communication signals, into a pattern of behavior. The seeming ontological jump from intentional state [not physical] to physical change in muscle activity is what I argue never happens, because there must ultimately be some physical nature to that intentional state in order for it to lead to a physical change.

    This misses the fact that intentional states do inform material states. That we are writing about and discussing intentionality shows that intentional states can modify physical objects (texts, pressure waves, etc.)
    Again, I can't think of how this could happen without a physical mechanism. And in fact it is currently made sense of in terms of physical mechanisms [albeit coarse-grained and provisional at present]. As a hypothetical mechanism: some web of 'concept cells' [higher-level cells in a feedforward neural circuit that invariantly fire in response to a very specific stimulus or object class] is activated in conjunction with reward circuitry, and a motor-command sequence is initiated.

    If I am committed, I will find other means. I planned on a certain route, encoded in my initial state, but as I turn the corner, I find my way blocked by construction. I find an alternate route to effect my intended end. In all of this, the explanatory invariant (which can be revealed by controlled experiments) is not my initial physical state, but my intended final state. Clearly, intentional states can produce physical events.
    Right but all of this goal directed decision making is ultimately mediated by physical processes happening in the brain. It also doesn't need to be determinate to be mediated by physical process.

    To say that intentions have "no parts outside of parts" does not mean that they are simple (unanalyzable). It means that they do not have one part here and another part there (outside of "here"). My intention to go to the store is analyzable, say, into a commitment and a target of commitment (what it is about, viz. arriving at the store). But, my commitment and the specification of my commitment are not in different places and so are not parts outside of other parts.
    Okay, that makes sense. They certainly seem spatially dimensionless: feelings and sentiments from a first-person perspective, for example, seem to be present without any spatial location. I don't know, biophysically, how these types of things are encoded in a distributed, non-localized fashion, or in a temporal pattern of activity that doesn't have spatial dimension, so I couldn't say they are one or the other; but I guess I'd say they could be spatially decomposable.

    Of course my intention to go to the store has biophysical support. My claim is that its biophysical support alone is inadequate to fully explain it.
    How do you define 'biophysical support'? What in addition to that support would you say is needed for a full explanation?

    First, as explained in the scenario above, the invariance of the intended end in the face of physical obstacles shows that this is not a case covered by the usual paradigm of physical explanation -- one in which an initial state evolves deterministically under the laws of nature. Unlike a cannon ball, I do not stop when I encounter an obstacle. I find, or at least search for, other means. What remains constant is not the sum of my potential and kinetic energy.
    The contexts are different, but again, both [the invariance of the goal and the ball's deterministic behavior] are explainable by physical processes: some neurons are realizing a [physically instantiated] goal which, via [probabilistic] physical interactions, is influencing some other set of neurons, which in turn inform behavior via further [probabilistic] physical interactions. The ball is a simple physical system which is directly impacted by a relatively deterministic process.

    Second, you are assuming, without making a case, that many of the factors you mention are purely biophysical. How is the "valence component," as subjective value, grounded in biophysics? Especially when biophysics is solely concerned with objective phenomena? Again, to have a "cognitive attitude" (as opposed to a neural data representation) requires that we actualize the intelligibility latent in the representation. What biophysical process is capable of making what was merely intelligible actually known -- especially given that knowledge is a subject-object relation and biophysics has no <subject> concept in its conceptual space?
    I am making broad metaphysical assumptions of materialism and emergentism, which implies I take things like 'valence' and 'concepts' to be materially realized in physical systems. My defense of materialism is that there is simply no evidence for any other substance in reality, and that everything, so far, that has seemed to be non-physical or to have no physical basis has been shown to be mediated by physical process. My defense of emergentism is something like this.

    I couldn't tell you exactly how things like valence are biophysically grounded, because that's still being explored, but it seems to involve activity in a well-defined anatomical reward circuit involving parts of the cortex and limbic system, a circuit which seems involved across all forms of 'liking' or 'pleasure' [sexual, drug-induced, food-induced] and seems common to a variety of animal species.

    I can imagine mental [cognitive] schemas [theories of self, mind and world] as just some very complex web of connections, with specific connection strengths, between various spatially distributed, semi-autonomous neural populations, together with the activity patterns between them. The 'information' is the connection scheme plus the various activity patterns elicited intrinsically. Again, it doesn't have to be deterministic to be governed by physical laws.

    Third, how is a circuit interaction, which is fully specified by the circuit's configuration and dynamics, "about" anything? Since it is not, it cannot be the explanation of an intentional state.
    Say you want a pizza. 'Pizza' can be thought of as a circuit interaction between 'concept cells' [which, in turn, have activated the relevant visual, tactile, and olfactory circuits that usually come online whenever you come into sensory contact with pizza], particular reward-pathway cells, and cells which encode sets of motor commands. 'Wanting' could be perceived as signals from a motor-command center which bombard decision-making circuits and compete against other motor commands for control over behavior. All of these have an associated experience, which itself can be thought of as a fundamental phenomenon caused by the circuit interaction [e.g. pizza: whatever is conjured when asked to imagine the concept (smells, visual content, taste); wanting: 'feeling pulled' toward interacting with the object].
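    This picture of a 'concept cell' can be caricatured in a few lines of Python. Everything here (the feature names, the threshold) is invented for illustration; the point is only that a single unit can fire invariantly on partial but sufficient sensory input, across modalities.

```python
def concept_cell(features, active, threshold=0.5):
    """Fire iff enough of the cell's associated features are currently active."""
    hits = sum(1 for f in features if f in active)
    return hits / len(features) >= threshold

# Hypothetical feature set for a 'pizza' concept cell.
PIZZA_FEATURES = {"smell:cheese", "sight:round", "taste:tomato", "touch:warm"}

# Partial sensory input still triggers the concept (modality invariance).
print(concept_cell(PIZZA_FEATURES, {"smell:cheese", "sight:round"}))  # True
# A single stray feature does not.
print(concept_cell(PIZZA_FEATURES, {"sight:round"}))  # False
```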
  • Question About Consciousness
    You're speaking of flow state, it sounds like. See the Wikipedia article for associated psychologists and philosophers.
  • Intentional vs. Material Reality and the Hard Problem
    Hi DfPolis -- I'm going to break my response up into parts since it's grown quite long already
    You are welcome. I thank you for your thoughtful consideration and wish you and yours a joyful Christmas.
    Thanks! Can't believe it's already over.

    The basis for logically distinct concepts need not be separate, or ontologically independent, objects. In looking at a ball, I might abstract <sphere> and <rubber> concepts without spheres existing separately from matter, or matter existing formlessly. Thus, by ontological separation, I mean existing independently or apart. By logical distinction, I mean having different notes of comprehension.

    Further, while concepts may have, as their foundation in reality, the instances that can properly evoke them, they are not those instances. The concept <rubber> is not made of the sap of Hevea brasiliensis. Natural rubber typically is. So, generally, in contrasting logical and ontological I am contrasting concepts with their foundation in reality.

    Finally, concepts are not things, but reified activities. <Rubber> is just a subject thinking of rubber.
    Okay the last sentence is what really clears it up. This sounds like nominalism, is that correct?

    It is low resolution. My purpose was to convince the reader that we need more than mere "data processing" to explain awareness -- to open minds to the search for further factors.


    In my book, I offer the following:
    The missing-instruction argument shows that software cannot make a Turing machine conscious. If software-based consciousness is possible, there exists one or more programs complex enough to generate consciousness. Let’s take one with the fewest possible instructions, and remove an instruction that will not be used for, say, ten steps. Then the Turing machine will run the same as if the removed instruction were there for the first nine steps.

    Start the machine and let it run five steps. Since the program is below minimum complexity, it is not conscious. Then stop the machine, put back the missing instruction, and let it continue. Even though it has not executed the instruction we replaced, the Turing machine is conscious for steps 6-9, because now it is complex enough. So, even though nothing the Turing machine actually does is any different with or without the instruction we removed and replaced, its mere presence makes the machine conscious.

    This violates all ideas of causality. How can something that does nothing create consciousness by its mere presence? Not by any natural means – especially since its presence has no defined physical incarnation. The instruction could be on a disk, a punch card, or in semiconductor memory. So, the instruction can’t cause consciousness by a specific physical mechanism. Its presence has to have an immaterial efficacy independent of its physical encoding.

    One counterargument might be that the whole program needs to run before there is consciousness. That idea fails. Consciousness is continuous. What kind of consciousness is unaware the entire time contents are being processed, but becomes aware when processing has terminated? None.

    Perhaps the program has a loop that has to be run through a certain number of times for consciousness to occur. If that is the case, take the same program and load it with one change – set the machine to the state it will have after the requisite number of iterations. Now we need not run through the loop to get to the conscious state. We then remove an instruction further into the loop just as we did in the original example. Once again, the presence of an inoperative instruction creates consciousness.
    — Dennis F. Polis -- God, Science and Mind, p. 196

    Thus, we can eliminate data processing, no matter how complex, as a cause of consciousness.
    There are some things I take issue with in the missing-instruction argument:

    1. Why would the program not be conscious when running the first 5 steps of the algorithm? Why couldn't it simply lose consciousness when the program reaches the missing instruction, in the same way a computer program freezes if there is an error in a line of the code, and simply resume once the code is fixed?

    If you say it's because it's simply not complex enough to be conscious while it is missing that line of code or that rule, then there are two issues here --

    (i.) The way this scenario is construed makes an issue for any kind of binary descriptor of a continually running algorithm [e.g. any sort of game, any sort of artificial sensor, any sort of continually looping program], not just specifically for ascribing consciousness to an algorithm. E.g. say you call this algorithm an 'atmospheric sampler'. Take one instruction out: now it is no longer an atmospheric-sampler algorithm because it cannot sample. Let it run until past the point where the instruction was needed, then repair the instruction, and it has become an atmospheric sampler seemingly a-causally.

    (ii.) The implicit assumption is that the complexity of an algorithm is what generates consciousness and that complexity is reduced by reducing the number of instructions. But this is not necessarily true:
    a. It could be that an algorithm with a particular set of instructions, by virtue of simply running, generates consciousness. In this scenario you could get consciousness even after removing one of the instructions: the algorithm would still be conscious and run until it reaches the missing instruction, at which point it would stop running and consciousness would cease; upon replacing the missing instruction the algorithm could resume, and consciousness would be restored because the program is now active again.

    b. Instead of complexity depending on the number of instructions, it could instead depend on the relationships between instructions. Say, for example, there is an algorithm in which a later instruction calls upon a previous instruction. There could be some web of feedback and feedforward connections which causally link instructions that are not immediately adjacent to each other. This scenario would provide the necessary causal power to missing instructions.
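    Point (a) can be illustrated with a toy interpreter (hypothetical Python, not anyone's actual model): deleting an instruction simply halts the run at the gap, and restoring it lets the run resume -- an ordinary stop-and-restart rather than an a-causal change of kind.

```python
def run(program, state=0, start=0):
    """Step through numbered instructions; halt at the first gap (None).

    Returns (state, halted_at); halted_at is None on normal completion.
    """
    for pc in range(start, len(program)):
        instr = program[pc]
        if instr is None:        # the removed instruction: halt here
            return state, pc
        state = instr(state)
    return state, None

def inc(s):
    return s + 1

prog = [inc, inc, inc, inc]

prog[2] = None                   # remove instruction 2
state, halted = run(prog)        # runs steps 0-1, halts at the gap
prog[2] = inc                    # restore the instruction
state, resumed = run(prog, state=state, start=halted)  # resumes, completes
print(state, halted, resumed)    # 4 2 None
```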


    2. This assumes data processing can only happen in a Turing-machine-like manner: a singular program running through a series of sequential steps. The brain apparently does not have this organization and runs, to my understanding, via large numbers of coordinated parallel processing units in a hierarchical arrangement, with feedback connections and other complicated circuit connections. This would make the analogy not necessarily work, as you now have to take into account sets of parallel [not in-series] processors or 'Turing machines' which interact with each other in a way that may not be clearly characterized as a series of sequential deterministic steps. Perhaps this is why, say, a neuron, which is a single processing unit, is not capable of consciousness whereas a conglomerate of neurons is.

    I have no problem with empiricism. I see the role of philosophy as providing a consistent framework for understanding of all human experience. My observation is directed specifically at natural science, which I think is rightly described as focused on physical objects, or if you prefer, physical phenomena.

    Aristotle, who I think has made as much progress as anyone on understanding the nature of consciousness, based his work on experience, but treated our experience as subjects on an equal footing with our experience of physical objects.
    Why make the dichotomy between "natural" and "psychological" objects? I think psychological constructs that are well defined and have clear physiological correlates [e.g. reward system and valence - limbic system; awareness - reticular activating system] are fair game for being considered 'natural phenomena'. I don't think there has to be a hard dichotomy. Besides, even in the physical sciences we don't have access to 'things in themselves' anyway: 'electron' and 'proton' are known by virtue of the effect of their intrinsic properties on measurement devices, not by physically observing them directly. I feel this is analogous to the way in which we can't observe 'pleasure' or 'pain'. Of course, those constructs are less well defined and less concrete, but -in the same way the atomic model was refined after more fine-grained experimentation- I feel the psychological ones can come closer to that in time. The point is that these fall within the range of natural objects, albeit to a lesser degree, as opposed to being something wholly different that would require a completely different way of knowing or learning about them.
  • Undirected Intentionality
    So, can willpower bring oneself out of depression? What's your take on that issue?
    I think it can, but it isn't necessary. E.g. a family can forcibly take you to a therapist every week and make you do the CBT homework in a case where you don't have the will to do it yourself. This can, over time, lead to a habit of going there. The willpower, which is needed most in the beginning when you need to effectively force yourself out of a habit of self-seclusion and negative self-talk, is not needed as much once going to the therapist and working on the exercises has become a habit.

    But if one has the willpower to do it alone, then it could lead to the same effect.
  • Intentional vs. Material Reality and the Hard Problem
    Dfpolis, thank you for the excellent post!


    I am not a dualist. I hold that human beings are fully natural unities, but that we can, via abstraction, separate various notes of intelligibility found in unified substances. Such separation is mental, not based on ontological separation. As a result, we can maintain a two-subsystem theory of mind without resort to ontological dualism.
    You explicitly state in the previous sentence that the separation is [by substance?] mental. How would you categorize 'mental separation' if not as an ontological separation?

    1. Neurophysiological data processing cannot be the explanatory invariant of our awareness of contents. If A => B, then every case of A entails a case of B. So, if there is any case of neurophysiological data processing which does not result in awareness of the processed data (consciousness) then neurophysiological data processing alone cannot explain awareness. Clearly, we are not aware of all the data we process.
    Well, I think this is a bit 'low resolution'/unspecific. It's clear that neurophysiological data processing alone isn't sufficient for awareness, but that doesn't mean a certain kind of neurophysiological processing is not sufficient - that is the bigger claim at issue here.

    2. All knowledge is a subject-object relation. There is always a knowing subject and a known object. At the beginning of natural science, we abstract the object from the subject -- we choose to attend to physical objects to the exclusion of the mental acts by which the subject knows those objects. In natural science we care what Ptolemy, Brahe, Galileo, and Hubble saw, not the act by which the intelligibility of what they saw became actually known. Thus, natural science is, by design, bereft of data and concepts relating to the knowing subject and her acts of awareness. Lacking these data and concepts, it has no way of connecting what it does know of the physical world, including neurophysiology, to the act of awareness. Thus it is logically impossible for natural science, as limited by its Fundamental Abstraction, to explain the act of awareness. Forgetting this is a prime example of Whitehead's Fallacy of Misplaced Concreteness (thinking what exists only in abstraction is the concrete reality in its fullness).
    I don't think the first sentence [of the two in bold] leads to the conclusion in the second sentence.

    Empiricism starts with defining a phenomenon - any phenomenon. Phenomena can be mental, physical, or even some interaction between the mental and physical [e.g. the reduction of awareness that results from taking a mind-altering physical substance, or the alteration of one's awareness of self and one's relation to objects of consciousness [e.g. feelings or things in consciousness] that can occur with a mind-altering physical substance or anything else that leads to the latter type of experience - brain lesion, head trauma, neurodegenerative disease, infection of the head]. Certainly, when a science deals with those sorts of phenomena, it has a language with which to at least begin an inquiry, and knows how to collect data in order to test its hypotheses; otherwise it wouldn't begin.

    And, secondly, concepts can be taken from different levels of analysis - the psychological, the neurophysiological, the cellular. At present there is research that attempts to map language from one level to another - e.g. the concept of 'memory' can now be explained partly through biophysical mechanisms - long-term potentiation and depression of neurons in a certain kind of circuit; 'mental concepts' or 'ideas' through the activation of a certain set of neurons in a hierarchically organized sensory circuit. So connections are in fact being attempted between what has traditionally been considered a 'mental' field, e.g. psychology, and 'physical' fields, e.g. biophysics.

    3. The material and intentional aspects of reality are logically orthogonal. That is to say, that, though they co-occur and interact, they do not share essential, defining notes. Matter is essentially extended and changeable. It is what it is because of intrinsic characteristics. As extended, matter has parts outside of parts, and so is measurable. As changeable, the same matter can take on different forms. As defined by intrinsic characteristics, we need not look beyond a sample to understand its nature.
    To be orthogonal is to be completely independent of the other [for one to not be able to directly influence the other]. I gave examples of instances where physical objects result in changes in objects of experience and in awareness itself. The fact that they can influence each other so plainly gives, I think, good credence to the claim that these two things are not orthogonal. And, more importantly, the fact that this is a unidirectional interaction [i.e. that only physical objects can result in changes to mental states, and not the other way around without some sort of physical mediator] gives serious reason to doubt any fundamentality of the mental field - at least to me it's clear it's an emergent phenomenon arising out of fundamental material interactions.

    Intentions do not have these characteristics. They are unextended, having no parts outside of parts. Instead they are indivisible unities. Further, there is no objective means of measuring them. They are not changeable. If you change your intent, you no longer have the same intention, but a different intention. As Franz Brentano noted, an essential characteristic of intentionality is its aboutness, which is to say that they involve some target that they are about. We do not just know, will or hope, we know, will and hope something. Thus, to fully understand/specify an intention we have to go beyond its intrinsic nature, and say what it is about. (To specify a desire, we have to say what is desired.) This is clearly different from what is needed to specify a sample of matter.

    I'm unsure why intentions [my understanding of what you mean by intention is: the object of a mental act - judgement, perception, etc.] are always considered to be without parts. I think, for example, a 'hope' is deconstructable, and [at least partly] composed of a valence component, a cognitive attitude of anticipation, a 'desire' or 'wanting' for a certain end to come about, the 'state of affairs' that defines the 'end', and sometimes a feeling of 'confidence'. I can also imagine how this is biophysically instantiated [i.e. this intentional state is defined by a circuit interaction between certain parts of the reward system, cognitive system, and memory system]. So what you have is some emergent state [the mental state] composed of interacting elements.

    4. Intentional realities are information based. What we know, will, desire, etc. is specified by actual, not potential, information. By definition, information is the reduction of (logical) possibility. If a message is transmitted, but not yet fully received, then it is not physical possibility that is reduced in the course of its reception, but logical possibility. As each bit is received, the logical possibility that it could be other than it is, is reduced.

    The explanatory invariant of information is not physical.
    The same information can be encoded in a panoply of physical forms that have only increased in number with the advance of technology. Thus, information is not physically invariant. So, we have to look beyond physicality to understand information, and so the intentional realities that are essentially dependent on information.

    I think things are a bit more complicated and don't necessarily lead to this conclusion. Implicit assumptions about the underlying theory of meaning [e.g. logical atomism vs. language-game] can influence how we make sense of this problem. I'm still forming my thoughts on this part of your post, but I'll give you a response when I have one.
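That said, the encoding-invariance observation in the quote is easy to illustrate concretely: the same message can be carried by several physically distinct representations, each of which decodes back to the identical content (a quick sketch, not an argument for either side of the question):

```python
# The same information (a short message) carried in several distinct
# representations: raw bytes, hex digits, base64 text, and a bit string.
import base64

message = "hello"
raw = message.encode("utf-8")

encodings = {
    "utf-8 bytes": raw,
    "hex string": raw.hex(),
    "base64": base64.b64encode(raw).decode("ascii"),
    "bit string": "".join(f"{b:08b}" for b in raw),
}

# Each representation looks different, yet each recovers the same message.
assert encodings["utf-8 bytes"].decode("utf-8") == message
assert bytes.fromhex(encodings["hex string"]).decode("utf-8") == message
assert base64.b64decode(encodings["base64"]).decode("utf-8") == message
bits = encodings["bit string"]
assert bytes(int(bits[i:i+8], 2)
             for i in range(0, len(bits), 8)).decode("utf-8") == message
```

Whether this multiple realizability shows that information is non-physical, or only that it is physically multiply instantiated, is exactly the point in dispute.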
  • General Mattis For President?
    The Dems won the general election popular vote in 2016 even with the split in the voting community after Bernie's loss. I don't think they'll have a problem voting a candidate in, because there won't be the same kind of situation this time round and, from the House results, it's clear there's been an uptick in Dem turnout.
  • Do we have a moral duty to use genetic engineering for species conservation?
    Given that humans acquired the knowledge and technology to genetically modify organisms which enables us to increase the fitness of a (critically) endangered species, do we have the moral right to do so?
    This scenario really highlights a moral paradox. At face value, both deontologists and utilitarians could technically come to this sort of conclusion -- on animal-rights grounds, it's important to reduce unnecessary suffering and death; on 'good'-maximization grounds, it's important to maximize the longevity/livelihood of animals. But the effects of applying this sort of thing may, in the long run, reduce the fitness of animals by minimizing species diversity. This is, in effect, a circumvention of selection constraints.
  • Undirected Intentionality
    I'm not quite sure; but, there is a distinction I want to draw out between willpower and intentionality. Intentionality stands above willpower in that motivation and willpower are directed at some goal. But, here I go on about "depression". When the way out of the bottle is unknown to the fly, then willpower seems like the only thing that intent can resort to. So, yeah, willpower, willpower, willpower.
    Hmm, I think the way you make sense of willpower is different from mine. I don't think, for example, that willpower or motivation is directed at a goal. You can get up 'feeling motivated', for example. In that case the motivation can be described as 'feeling driven' or 'excited/energetic' - like you are determined to get 'things' done, anything that comes in front of you, not necessarily one thing in particular. Willpower, like I said before, seems like a 'capacity' or a general ability to control urges and manage actions - instead of eating a delicious pizza, deciding not to eat it; instead of angrily lashing out at someone, showing restraint. It doesn't seem linked to a goal, whereas intentions are always linked to a specific goal. I can't think of someone saying to themselves, without context or a specific goal, 'I have intention', whereas it makes sense for a person to say 'I have a lot of willpower' or 'I am/feel motivated'.
  • Undirected Intentionality

    Are they?
    I'd say yes. The minimum you need for an intention is a decision to complete a goal. What else do you feel is needed in order to have an intention?

    Then, what is the deciding factor in getting better?
    I'd say willpower.

    The intent is still undirected. One doesn't know how the fly gets out of the bottle it is stuck in. Is it a matter of trial and error to try different methods of getting better or is the outcome of this undirected intentionality, manifest in trial and error, spontaneous? This would make "intent" somewhat synonymous with "willpower". But, the two aren't the same.
    Hmm, so I'm unsure what you are linking the directedness of an intention to. I am linking it to the goal of the intention - 'get out of the bottle' in the fly case, 'get better' in the human case. In that sense, all intentions are directed. It doesn't matter what the course of action is or the way in which the goal is realized, only that a person or animal has decided - consciously or unconsciously - to complete a goal.
  • Undirected Intentionality
    Yes, but, again taking the example of "depression". I have the intent to get better but it is undirected, otherwise, people would be able to simply will themselves out of that state of mind, which is quite rare. Hence, what do you think about this "undirected" aspect of intent? It seems more like, as you mentioned, an unconscious thing that is in the background and never entirely realized unless some goal is accomplished unknowingly.
    I think in the depression example the person has decided or intended to get better [decide and intend are synonymous here] but has not put his decision into action because he lacks the willpower, or has a low capacity to direct his behavior toward his goals -- his ruminative and self-sabotaging habits are too ingrained for him to overcome at the moment.

    So the intention is still 'directed' toward the goal of getting better, but it is not put into action.
  • Undirected Intentionality
    What is intent? How much of intent is linked with willpower? Willpower seems like an active process, where intent can be passive.
    The object of an intent is a goal, and to intend to do something is to plan or decide to actively work toward that goal, at least that's my understanding. I don't see how intent can be passive... it can be unconscious in the sense that you may not be aware that a part of you intends to do something, but it always involves a decision to act toward completing a goal.

    Willpower is something different, it's the capacity to consciously direct actions and behavior in order to complete an explicit goal. It's linked to the concept of self control and seems distinct from intent in that intent is not a capacity and willpower isn't linked to an explicit or specific goal.

    My main question is about undirected intentionality. This seems to be the passive aspect of willpower, like having a goal in the back of one's mind and working towards it.

    For example, a deep mood such as depression means that someone lacks the willpower to get better. I will stipulate here that this is 'undirected intentionality'.

    So, how is intent shaped and formed to become a goal?
    The bold sounds more like an unconscious intention versus willpower. I still don't understand how an intention can be undirected, since by definition it seems to always be directed toward the completion of a goal.
  • Low Unemployment, Slow Wage Growth

    So I'm not an economist, which may devalue my opinion, but I fail to see why unemployment should correlate with wages. I mean, if anything at all, you may risk saturating a job market, which could lower wages.