Fair enough, but since, as you point out, we do not know the laws of nature, how do we know they obey the Principle of conservation of energy? And is the Principle of conservation of energy a Principle of physics or of nature? — Inis
Also, I'm not sure the Principle of conservation of energy even tells you how to measure whether energy is conserved or not. — Inis
I think my issue stems from not being able to separate 'ontological independence' from logical orthogonality. I mean to assert that concepts and intentions exist and are distinct from their material instances and yet to then say these things are somehow still of the same ontological type [i.e. physical] as physical objects seems difficult to reconcile [what makes them physical if they're not composed of or caused by physical material?]. It just seems like an unsubstantiated assertion that they are ontologically the same. — aporiap

That is not what I explained that I mean by concepts being orthogonal. I explicitly said, "... logically orthogonal. That is to say, that, though they co-occur and interact, they do not share essential, defining notes." Having non-overlapping sets of defining notes makes concepts orthogonal -- not the consideration of interactions in their instances, which is a contingent matter to be resolved by reflecting on experience.
Intentional states inform physical states but I mentioned before [and I think this is important] that this is always by virtue of a physical-material mechanism. There is an activity pattern in higher-level areas of the brain which trickles down via some series of physical communication signals into a pattern of behavior. The 'seeming' ontological jump from intentional state [not-physical] to physical change in muscle activity is what I argue never happens because there must ultimately be some physical nature to that intentional state in order for it to lead to a physical change. — aporiap

Concepts are abstractions and do not "interact." All that concepts do (their whole being) is refer to their actual and potential instances. Still, it is clear to all but the most obdurate ideologues that intentionality can inform material states. Whenever we voice a concept, when we speak of our intentions, our speech acts are informed by intentional states. Conversely, in our awareness of sensory contents, material states inform the resulting intentional states. So, the fact that intentional and material abstractions are orthogonal does not prevent material and intentional states from interacting.
Again, I can't think of how this could happen without a physical mechanism. And in fact it is currently made sense of in terms of physical mechanisms [albeit coarse grained and drafted at present] - as a hypothetical mechanism: some web of 'concept-cells' [higher level cells in a feedforward neural circuit that invariantly fire in response to a very specific stimulus or object class] are activated in conjunction with reward circuitry and a motor-command sequence is initiated. — aporiap

This misses the fact that intentional states do inform material states. That we are writing about and discussing intentionality shows that intentional states can modify physical objects (texts, pressure waves, etc.)
Right but all of this goal directed decision making is ultimately mediated by physical processes happening in the brain. It also doesn't need to be determinate to be mediated by physical process. — aporiap

If I am committed, I will find other means. I planned on a certain route, encoded in my initial state, but as I turn the corner, I find my way blocked by construction. I find an alternate route to effect my intended end. In all of this, the explanatory invariant (which can be revealed by controlled experiments) is not my initial physical state, but my intended final state. Clearly, intentional states can produce physical events.
Okay that makes sense. They certainly seem spatially dimensionless -- feelings and sentiments from a first person perspective, for example, seem to be present without any spatial location. I don't know biophysically how these types of things are encoded in a distributed, non localized fashion or in a temporal pattern of activity that doesn't have spatial dimension or etc so I couldn't say they are one or the other but I guess I'd say they could be spatially decomposable. — aporiap

To say that intentions have "no parts outside of parts" does not mean that they are simple (unanalyzable). It means that they do not have one part here and another part there (outside of "here"). My intention to go to the store is analyzable, say, into a commitment and a target of commitment (what it is about, viz. arriving at the store). But, my commitment and the specification of my commitment are not in different places and so are not parts outside of other parts.
How do you define 'biophysical support'? What in addition to that support would you say is needed for a full explanation? — aporiap

Of course my intention to go to the store has biophysical support. My claim is that its biophysical support alone is inadequate to fully explain it.
the contexts are different but, again they are both [the invariance of the goal and the ball's deterministic behavior] explainable by physical processes - some neurons are realizing a [physically instantiated] goal which is influencing via [probabilistic] physical interactions some other set of neurons which are informing behavior via other [probabilistic] physical interactions. The ball is a simple physical system which is directly being impacted by a relatively deterministic process. — aporiap

First, as explained in the scenario above, the invariance of the intended end in the face of physical obstacles shows that this is not a case covered by the usual paradigm of physical explanation -- one in which an initial state evolves deterministically under the laws of nature. Unlike a cannon ball, I do not stop when I encounter an obstacle. I find, or at least search for, other means. What remains constant is not the sum of my potential and kinetic energy, but my intended end.
I am making broad-band metaphysical assumptions of materialism and emergentism which implies I take things like 'valence' and 'concepts' to be materially realized in physical systems. My defense of materialism is there is simply no evidence for any other substance in reality, and that everything -so far- that has seemed to be non-physical or have no physical basis has been shown to be mediated by physical process. My defense of emergentism is something like this. — aporiap

Second, you are assuming, without making a case, that many of the factors you mention are purely biophysical. How is the "valence component," as subjective value, grounded in biophysics? Especially when biophysics is solely concerned with objective phenomena? Again, to have a "cognitive attitude" (as opposed to a neural data representation) requires that we actualize the intelligibility latent in the representation. What biophysical process is capable of making what was merely intelligible actually known -- especially given that knowledge is a subject-object relation and biophysics has no <subject> concept in its conceptual space?
Say you want a pizza. Pizza can be thought of as a circuit interaction between 'concept cells' [which -in turn- have activated the relevant visual, tactile, olfactory circuits that usually come online whenever you come into contact sensorily with pizza], particular reward pathway cells, cells which encode sets of motor commands. 'Wanting' could be perceived as signals from motor-command center which bombard decision-making circuits and compete against other motor-commands for control over behavior. All of these have an associated experience which themselves can be thought of as fundamental phenomena that are caused by the circuit interaction [e.g. pizza -- whatever is conjured when asked to imagine the concept: smells, visual content, taste; wanting -- 'feeling pulled' toward interacting with the object]. — aporiap

Third, how is a circuit interaction, which is fully specified by the circuit's configuration and dynamics, "about" anything? Since it is not, it cannot be the explanation of an intentional state.
It suffices to think that, having once grasped it a posteriori, in an experienced example, we can see that it applies in all future cases "a priori." — Dfpolis
I mean to assert that concepts and intentions exist and are distinct from their material instances and yet to then say these things are somehow still of the same ontological type [i.e. physical] as physical objects seems difficult to reconcile [what makes them physical if they're not composed of or caused by physical material?]. It just seems like an unsubstantiated assertion that they are ontologically the same. — aporiap
Once you make the implicit assumption they are ontologically distinct then it becomes clear that any interaction between intentional states and physical substance serves as a counterargument to their being distinct from materiality [since material and nonmaterial have no common fundamental properties with which to interact with each other (charge; mass; etc)]. — aporiap
Intentional states inform physical states but I mentioned before [and I think this is important] that this is always by virtue of a physical-material mechanism. — aporiap
Dean Radin and Roger Nelson (1989) reviewed 832 experiments by 68 investigators in which subjects were asked to control random number generators, typically driven by radioactive decay. They subjected the results to meta-analysis, a method for combining data from many experiments. While control runs showed no significant effect, the mean effect of subjects trying to influence the outcome was 3.2 x 10^-4 with Stouffer’s z = 4.1. In other words, subjects controlled an average of 32 of every 100,000 random numbers, and this effect is 4.1 standard deviations from pure chance. The odds against this are about 24,000 to 1.
Radin and Diane C. Ferrari (1991) analyzed 148 studies of dice throwing by 52 investigators involving 2,592,817 throws, and found an effect size (weighted by methodological quality) of 0.00723 ± 0.00071 with z = 18.2 (1.94 x 10^73 to 1). Radin and Nelson (2003) updated their 1989 work by adding 84 studies missed earlier and 92 studies published from 1987 to mid-2000. This gave 515 experiments by 91 different principal investigators with a total of 1.4 billion random numbers. They calculated an average effect size of 0.007 with z = 16.1 (3.92 x 10^57 to 1).
Bösch, Steinkamp, and Boller (2006) did a meta-analysis of 380 studies in an article placing the experiments in the context of spoon bending and séances. They excluded two-thirds of the studies considered. Nonetheless, they found high methodological quality and a small, but statistically significant, effect.
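If you want to check how the quoted z-scores translate into "odds against chance," here is a minimal sketch (assuming, as the figures above suggest, that the quoted odds are two-tailed); scipy is the only dependency:

```python
# Convert the Stouffer z-scores quoted above into two-tailed p-values and
# odds against chance. Reproduces roughly 24,000 to 1 for z = 4.1,
# 3.9 x 10^57 to 1 for z = 16.1, and 1.9 x 10^73 to 1 for z = 18.2.
from scipy.stats import norm

for z in (4.1, 16.1, 18.2):
    p = 2 * norm.sf(z)  # probability of |Z| >= z under pure chance
    print(f"z = {z:5.1f}: p = {p:.3g}, odds against chance ~ {1 / p:.3g} to 1")
```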
The 'seeming' ontological jump from intentional state [not-physical] to physical change in muscle activity is what I argue never happens because there must ultimately be some physical nature to that intentional state in order for it to lead to a physical change. — aporiap
And in fact it is currently made sense of in terms of physical mechanisms [albeit coarse grained and drafted at present] - as a hypothetical mechanism: some web of 'concept-cells' [higher level cells in a feedforward neural circuit that invariantly fire in response to a very specific stimulus or object class] are activated in conjunction with reward circuitry and a motor-command sequence is initiated. — aporiap
Right but all of this goal directed decision making is ultimately mediated by physical processes happening in the brain. It also doesn't need to be determinate to be mediated by physical process. — aporiap
I don't know biophysically how these types of things are encoded in a distributed, non localized fashion or in a temporal pattern of activity that doesn't have spatial dimension or etc so I couldn't say they are one or the other but I guess I'd say they could be spatially decomposable. — aporiap
How do you define 'biophysical support'? What in addition to that support would you say is needed for a full explanation? — aporiap
the contexts are different but, again they are both [the invariance of the goal and the ball's deterministic behavior] explainable by physical processes - some neurons are realizing a [physically instantiated] goal which is influencing via [probabilistic] physical interactions some other set of neurons which are informing behavior via other [probabilistic] physical interactions. The ball is a simple physical system which is directly being impacted by a relatively deterministic process. — aporiap
I am making broad-band metaphysical assumptions of materialism and emergentism which implies I take things like 'valence' and 'concepts' to be materially realized in physical systems. — aporiap
Say you want a pizza. Pizza can be thought of as a circuit interaction between 'concept cells' [which -in turn- have activated the relevant visual, tactile, olfactory circuits that usually come online whenever you come into contact sensorily with pizza], particular reward pathway cells, cells which encode sets of motor commands. — aporiap
Which is PRECISELY the error Kant points out regarding Hume’s characterization of the principle of cause and effect. — Mww
a principle being grounded in pure reason, as are all principles whatsoever, absolutely **must** have its proof also given from pure reason. — Mww
Kant’s argument wasn’t that there IS a proof per se, but rather no empirical predicates at all can be attributed to a possible formulation of it. — Mww
Kant’s argument was that the thesis of which Hume was aware (a priori judgements do exist), having been considered, was summarily rejected (slave of the passions and all that happy crappy) because it wasn’t considered **as it ought to have been**. In other words, he didn’t consider it the right way. — Mww
I shall not insult your intelligence by informing you the human cognitive system is already in possession of a myriad of pure a priori principles of the kind Hume failed to address, first and foremost of which is, quite inarguably, mathematics. — Mww
And as a final contribution, I submit there is no logical reason to suppose cause and effect should lend itself to being differentiated between kinds, with all due respect to Aristotle. — Mww
Isn’t a proposition where the subject and predicate describe the same event and contain the same information a mere tautology? — Mww
It’s not that the relationships are contingent; it’s that instances that sustain a principle governing them are. If cause and effect is an intelligible relationship prior to our knowledge of its instances, doesn’t its very intelligibility mandate such relationship be necessarily a priori? — Mww
This misses the fact that intentional states do inform material states. That we are writing about and discussing intentionality shows that intentional states can modify physical objects (texts, pressure waves, etc.) — Dfpolis
Again, I can't think of how this could happen without a physical mechanism. — aporiap
They're empirically supported, I didn't conjure these things up: https://en.m.wikipedia.org/wiki/Grandmother_cell . It's a set of spatially distributed, sparsely firing neurons which activate when a particular category of object - faces, hands, etc. - is presented, regardless of the form of perception (whether it's the name 'face' that is heard, an image of a face seen, a face that is felt). — aporiap

A key position in the Age of Reason was the rejection of "occult" properties, but here you are positing "concept cells" as a cover for abject ignorance of how any purely material structure can explain the referential nature of intentional realities. Where are these concept cells? How do they work? They are as irrational and ungrounded as the assumption of homunculi.
If for every intentional state, there is a corresponding physical state and vice versa, then it could be said, as Spinoza does, that they are the same thing seen from two different perspectives. If this is right then to say either that physical states cause intentional states or that intentional states inform physical states would be to commit a category error. — Janus
They're empirically supported — aporiap
The first has to do with the fact that every instance of sensory awareness has a twofold object, violating your assumption of a one-to-one correspondence of physical and intentional states. — Dfpolis
Kant's assumption that we have a priori knowledge is inadequate grounds for calling Hume's position an error — Dfpolis
(No, not literally unthinkable, for reason has no power to not think. Reason’s sole domain is to enable thinking correctly, which means understanding does not confuse itself with contradictions.)

But, Kant wants more than the principle of causality to be known a priori......
(Correct, he would wish all principles whatsoever be known a priori)
.......... He wants it to be imposed by the mind so that its contrary is literally unthinkable. — Dfpolis
(Reason does not conclude, that being the sole domain of judgement. While judgement is a part of the total faculty of reason, it is improper to attribute to the whole that which properly belongs to the particular function of one of its parts. In this much I grant: without categories reason has no means to, and therefore cannot, derive transcendental principles.)

Pure reason is reason without data........
(The data of pure reason are categories, without which reason and indeed all thought, is impossible)
.........Lacking grist, it can conclude nothing, not even transcendental principles. — Dfpolis
So, causality, space and time are not forms imposed on reality by the mind, but empirically derived concepts. — Dfpolis
In other words, Hume did not agree with Kant's assumptions. — Dfpolis
consistent because it is grounded in the experience of counting. — Dfpolis
If mathematics were known a priori, there would be no reason to question it. — Dfpolis
I did not say that the subject and predicate contained the same information, but that they had the identical object as their referent. — Dfpolis
(True. I know the principle “all bodies are extended” is true without the experience of a body informing me, otherwise I would only be entitled to say “this body is extended”. I know all bodies are extended not because of *this* body I perceive, but rather because the concept of empirical bodies in general must have the pure concept of “extension” belonging to it, in order to be intuited as “body” at all. “Extension” is hardly empirical, so any knowledge of principles connected with it must be a priori.)

Something may be true transcendentally (true of all existents)........
(This is not what transcendentally means to me)
........, but it is not a priori unless we know it without the experience of an existent informing us. — Dfpolis
These are two descriptions of the one process. From a phenomenological perspective we can say that something about the tree caught your attention and led you to stop and look at it, which in turn triggered associations which led to you having a series of thoughts about it. — Janus
The point is that it is a category error to say that the physical and physiological process cause you to think certain thoughts, because it is other thought and memory associations which cause that. — Janus
The point is that they are two different types of analysis best kept separate, and confusion and aporias often result when talk of causation operating across the two kinds of analysis is indulged in. — Janus
Yes, it is, because a priori knowledge derives from universality and necessity, which Hume’s empirical grounds, with respect to cause and effect, do not and can not possibly afford. — Mww
(No, not literally unthinkable, for reason has no power to not think. Reason’s sole domain is to enable thinking correctly, which means understanding does not confuse itself with contradictions.) — Mww
The data of pure reason are categories, without which reason and indeed all thought, is impossible — Mww
Reason does not conclude, that being the sole domain of judgement. While judgement is a part of the total faculty of reason, it is improper to attribute to the whole that which properly belongs to the particular function of one of its parts. In this much I grant: without categories reason has no means to, and therefore cannot, derive transcendental principles. — Mww
While they describe the same process, they describe different aspects of that process. The physical description deals with the actualization of sensibility, while the phenomenal description requires the actualization of intelligibility. This difference is critical. To think of the tree, I need not only sense the tree, but also actualize its intelligibility to form the idea <tree>. Describing how light scattered from the tree modifies my neurophysical state, and how that state is neurally processed, elaborates the sensory aspect of the process, but says nothing of the actualization of intelligibility required to think <tree>.
There is no doubt that these processes are correlated, and since the time of Aristotle, it has been recognized that the intentional state (the idea) is dependent on the sensory representation (which he called the phantasm). But, correlation and dependence are not identity. — Dfpolis
I see no reason to keep them separate. They are two projections of the same reality. Each is partial and incomplete. The best thing to do is compare them for points of agreement, and then determine what each projection grasps that the other misses. Doing so leads to a more complete model of reality. — Dfpolis
I haven't said that the two processes, the intentional and the physical, are identical. I have said they are correlated, and that each has its own respective account which is unintelligible in terms of the other. — Janus
Whether the intentional is dependent on the physical or the physical on the intentional is ultimately an unanswerable question. — Janus
It's not a matter of "keeping them separate"; they are separate. — Janus
I cannot agree that they are unintelligible in terms of each other. — Dfpolis
Are you thinking of cosmogenesis or ideogenesis? — Dfpolis
No, they are not separate. They are distinct ways of thinking about one and the same topic, just as wave mechanics and matrix mechanics are different ways of conceptualizing quantum events, or aerodynamics and manufacturing logistics are different approaches to the production of an airplane. Different ways of thinking are complementary. — Dfpolis
I think experience falsifies this claim. We all make errors in reasoning. Logic enables us to discover those errors. — Dfpolis
counting does not depend on what is counted — Dfpolis
As an example, your reasons for doing something or thinking something are not intelligible in terms of neural processes. — Janus
You think what you do for reasons, neural processes do not cause you to think the way you do, even though neural processes are arguably correlated with your thinking. — Janus
we cannot parse any relationship between causes and reasons, because the former is predicated on determinism and the latter on freedom; neither of which can be understood in terms of the other. — Janus
We might be able to give an intelligible account of the succession of neural states, and although they may be understood to be in a causal series, they cannot be meaningfully mapped as causes onto the successive phases of the movement of thought in a way that explains a relationship between the physical succession of causes and the intentional succession of associations and reasons. — Janus
Are you thinking of cosmogenesis or ideogenesis? — Dfpolis
I'm thinking of cosmogenesis. — Janus
The point is that being distinct ways of thinking, any attempt to unite them breeds confusion. — Janus
Experience would falsify the claim if I’d said “reason’s sole domain is to *force* thinking correctly”. A set of logical rules doesn’t come with the promise of their use, only that we’re better off if we do. — Mww
counting does not depend on what is counted — Dfpolis
Why isn’t this just like “seeing does not depend on what is seen”? Seeing or counting is an actual physical act, and mandates that the objects consistent with the act be present. Now, “the ability to see or to count does not depend on what is seen or counted” seems to be true, for I do not lose my visual receptivity simply because my eyes are closed. Otherwise, I would be forced into the absurdity of having to learn each and every object presented to sensibility after each and every interruption of it.
Are you saying counting and the ability to count are the same thing? — Mww
The categories are the same for Kant as they were for Aristotle. My mistake if I got the impression you were a fan of Aristotle, hence I didn’t feel the need to define them. — Mww
What is cosmogenesis and who is the authority for it? What is ideogenesis and who is the authority for that? — Mww
But, if you mean that they are unrelated to neurophysical processes, I cannot agree. — Dfpolis
Many desires are biologically based and neurophysically represented. These representations are not the reasons, but awareness of the represented states may well be reasons for my choosing to act in a specific way. So, neurophysical representations and processing play a causal role in the reasons I consider. — Dfpolis
My brain will represent these facts in a way that can inform me that I am hungry; however, it cannot force me to turn my attention to, to become aware of, this intelligible representation. — Dfpolis
Since these memories are activated in response to a free decision, it is inadequate to think of the brain as a deterministic machine. — Dfpolis
It is much more rational to say that my decision to eat now causes the brain to activate neural complexes encoding what there is to eat now. — Dfpolis
Of course this is not a general analysis of cosmogenesis, but it does point to intentional reality as more fundamental than material reality. — Dfpolis
Still, the fact that we know and can affect physical reality shows that, unlike mathematics and poetry, they are dynamically linked. — Dfpolis
First, because counting is an intellectual operation, while seeing is a physical operation, — Dfpolis
For Aristotle, the categories are different ways in which something can be said to "be." — Dfpolis
No, I have been saying they are correlated; which obviously means they are not unrelated. What I am saying is that the relationship is not causal. — Janus
(I think "presented" would be a better term here). — Janus
awareness of the states is not the same as the states — Janus
If the states are preconceptual then they cannot serve as reasons for action. — Janus
My brain will represent these facts in a way that can inform me that I am hungry; however, it cannot force me to turn my attention to, to become aware of, this intelligible representation. — Dfpolis
I think this is an example of anthropomorphic thinking. — Janus
Your mind may "represent" the facts or it may not; I don't think it is right to say that the brain "represents" anything. Often you will simply eat without being aware of any reason to do so, but of course it is possible to think something like "I am hungry, I should eat something". — Janus
I'm not claiming that the brain is a deterministic machine; it may well be an indeterministic organ, but the point is that there is no "I" that is directly aware of neural processes such that it could direct them. — Janus
So, the succession of brain states is determined by nature, not by ourselves, and thus, as far as we are concerned, it is a deterministic process. — Janus
It is much more rational to say that my decision to eat now causes the brain to activate neural complexes encoding what there is to eat now. — Dfpolis
This is where the category error comes in. Your decision to eat now has its own correlated brain state from which the "neural complexes encoding what there is to eat now" ensues causally. This is a deterministic process (as far as we are concerned because we are not directly aware of it and cannot direct it). — Janus
But, your decision to eat now gives you reason to seek what there is to eat now. These are two different ways of understanding what is going on; the first in terms of causes, the second in terms of reasons. — Janus
I think the two are co-arising and co-dependent. In other words, the "zero point field" or "quantum foam" or "akashic field" or "implicate order" or whatever you want to call it, cannot be without there being a correlated material existence. — Janus
Still, the fact that we know and can affect physical reality shows that, unlike mathematics and poetry, they are dynamically linked. — Dfpolis
To my mind you are still thinking dualistically here. We are "part and parcel" of the physical world and the informational world; I would say there is no real separation; and dynamism abounds but it is not ultimately in the form of "links" between things which are separate or separable. — Janus
I suppose counting could be construed as an intellectual operation, in as much as I am connecting an a priori representation of quantity to spatially distinguishable objects. On the other hand, I don’t agree that seeing is a physical operation, in as much as an object impressed on a bunch of optic nerves can be called seeing. Is it merely convention that the intellect is required to call up an internal object to correspond to the impression, in order to say I am in fact seeing? — Mww
I agree that these physical states are not efficient causes of thought; however, as bearing intelligibility, they can inform awareness and so may be seen as informing causes. — Dfpolis
Since we have to teach children to count by counting specific kinds of things, I see no reason to think that there is any a priori component to counting. — Dfpolis
So, something like Aristotelian realism about universals? Well that would make them more than a mere insignificant mental abstraction; it's a real thing in the world by your take, albeit inextricably linked to the particular. I'm not familiar with terms like 'notes of comprehension' or 'essential notes'. You say that logical distinction is predicated on the fact that intentional objects like concepts are different from materiality not ontologically but by virtue of not sharing these notes of comprehension. Can you unpack this term? — aporiap

It is Moderate Realism, which sees universal concepts grounded in the objective character of their actual and potential instances rather than in Platonic Ideas or Neoplatonic Exemplars. Nominalism and conceptualism see universals as categories arbitrarily imposed by individual fiat or social convention.
I mentioned in the post that it poses a problem for programs which require continual looping or continual sampling. In this instance the program would cease being an atmospheric sampler if it lost the capability of iteratively looping because it would then lose the capability to sample [i.e. it would cease being a sampler]. Thus, as soon as the instruction is removed, it ceases being a sampler, and it would suddenly become a sampler again [because it now has the capacity to sample] once the instruction is re-introduced. Even though it runs through the entire program in the thought experiment, during the period when the instruction is removed, the program is in a state where it no longer has the looping/iterative-sampling capacity, hence the fact that it is not a sampler during that period. — aporiap

No. Notice that we run all the original instructions. Any program that simply runs an algorithm runs it completely. So, your 'atmospheric sampler' program does everything needed to complete its computation.
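For concreteness, here is a minimal sketch of the kind of looping 'atmospheric sampler' under discussion; the sensor read, interval, and loop bound are hypothetical stand-ins, not anything specified in the thread:

```python
import random
import time

def read_sensor() -> float:
    """Hypothetical stand-in for reading an atmospheric sensor."""
    return random.gauss(400.0, 5.0)  # e.g. a CO2 reading in ppm

def run_sampler(n_iterations: int) -> list[float]:
    samples = []
    # This loop is the iterative-sampling instruction whose removal is at
    # issue: without it, at most one reading would ever be taken.
    for _ in range(n_iterations):
        samples.append(read_sensor())
        time.sleep(0.1)  # sample at fixed intervals
    return samples

readings = run_sampler(10)
print(f"collected {len(readings)} samples")
```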
What do you mean they solve mathematical problems only? There are reinforcement learning algorithms out now which can learn your buying and internet surfing habits and suggest adverts based on those preferences. There are learning algorithms which -from scratch, without hard coded instruction- can defeat players at high-level strategy games, without using mathematical algorithms. — aporiap

The problem is, we have no reason to assume that the generation of consciousness is algorithmic. Algorithms solve mathematical problems -- ones that can be represented by measured values or numerically encoded relations. We have no such representation of consciousness. Also, data processing operates on representations of reality, it does not operate on the reality represented. So, even if we had a representation of consciousness, we would not have consciousness.
These choices are not exhaustive. Take an algorithm which plays movies for instance. Any one iteration of the loop outputs one frame of the movie... The movie, here, is made by viewing the frames in a sequential order. It's okay for some of the frames to be skipped because the viewer can infer the scene from the adjacent frames. In this instance the program is a movie player not because of the mere presence of the instructions nor because of the output of one or another frame [be it the middle frame or the last frame]. It also couldn't just result from only some of the instructions running; it requires them all to run properly for at least most [a somewhat arbitrary, viewer-dependent number] of the iterations so that enough frames are output for the viewer to see some semblance of a movie. In this case it's not the output of one loop that results in consciousness nor the output of some pre-specified number of sequential iterations that results in the program being a movie player. Instead it is a combination of a working program and some number of semi-arbitrary and not-necessarily sequential outputs which result in the program being a movie player. This is not even a far-out example; it's easy to imagine a simple, early American projector which operates by taking a film-strip. Perhaps sections of the film-strip are damaged, which leads to inadequate projection of those frames. Would you say this projector is not a movie-player if you took out one of its parts before it reached the step where it's needed and then impossibly becomes a movie-player once the part is re-introduced right before it was needed? — aporiap

In the computational theory of mind, consciousness is supposed to be an emergent phenomenon resulting from sufficiently complex data processing of the right sort. This emergence could be a result of actually running the program, or it could be the result of the mere presence of the code. If it is a result of running the program, it can't be the result of running only a part of the program, for if the part we ran caused consciousness, then it would be a shorter program, contradicting our assumption. So, consciousness can only occur once the program has completed -- but then it is not running, which means that an inoperative program causes consciousness.
I don't think the multiple realization argument holds here... it could just be something like a case of convergent evolution, where you have different configurations independently giving rise to the same phenomenon - in this case consciousness. E.g. cathode ray tube TV vs digital TV vs some other TV operate under different mechanisms and yet result in the same output phenomenon - image on a screen. — aporiap

We are left with the far less likely scenario in which the mere presence of the code, running or not, causes consciousness. First, the presence of inoperative code is not data processing, but the specification of data processing. Second, because the code can be embodied in any number of ways, the means by which it effects consciousness cannot be physical. But, if it can't be physical, and it's not data processing, what is the supposed cause?
I am not in the field of computer science but from just this site I can see there are at least three different kinds of abstract computational models. Is it true that physical properties of the machine are necessary for all the other models described? Even if consciousness required certain physical features of hardware, why would that matter for the argument since your ultimate goal is not to argue for the necessity of certain physical properties for consciousness but instead for consciousness as being fundamentally intentional and (2) that intentionality is fundamentally distinct from [albeit co-present with] materiality. I actually think my personal thought is not that different to yours but I don't think of intentionality as so distinct as to not be realized by [or, a fundamental property of] the activity of the physical substrate. My view is essentially that of Searle but I don't think consciousness is only limited to biological systems. — aporiap

No, not at all. It only depends on the theorem that all finite state machines can be represented by Turing machines. If we are dealing with data processing per se, the Turing model is an adequate representation. If we need more than the Turing machine model, we are not dealing with data processing alone, but with some physical property of the machine.
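To make concrete what "finite state machine" means in that theorem, here is a toy transition table; the states and inputs are invented for illustration and come from nothing in the thread:

```python
# A toy finite state machine: finitely many states plus a fixed transition
# table. The theorem cited above says any such machine can be simulated by
# a Turing machine, so the Turing model covers data processing of this kind.
transitions = {
    ("idle", "start"): "running",
    ("running", "pause"): "idle",
    ("running", "stop"): "halted",
}

def step(state: str, symbol: str) -> str:
    # Undefined (state, input) pairs leave the state unchanged.
    return transitions.get((state, symbol), state)

state = "idle"
for symbol in ("start", "pause", "start", "stop"):
    state = step(state, symbol)
print(state)  # -> halted
```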
I agree that the brain uses parallel processing, and might not be representable as a finite state machine. Since it is continually "rewiring" itself, its number of states may change over time, and since its processing is not digital, its states may be more continuous than discrete. So, I am not arguing that the brain is a finite state machine. I am arguing against those who so model it in the computational theory of mind.
I don't understand why a neuron not being conscious but a collection of neurons being conscious automatically leads to the hard problem. Searle provides a clear intuitive solution here in which it's an emergent property of a physical system in the same way viscosity or surface tension are emergent from lower-level interactions - it's the interactions [electrostatic attraction/repulsion] which, summatively, result in an emergent phenomenon [surface tension]. In this case it's the relations between the parts which result in the phenomenon, which cannot be reduced to simply the parts. I'd imagine there's some sort of way you can account for consciousness by the interactions of the component neurons in the system. — aporiap

This assumes facts not in evidence. David Chalmers calls this the "Hard Problem" because not only do we have no model in which a conglomerate of neurons operate to produce consciousness, but we have no progress toward such a model. Daniel Dennett argues at length in Consciousness Explained that no naturalistic model of consciousness is possible.
Well the retinal state is encoded by a different set of cells than the intentional state of 'seeing the cat' - the latter would be encoded by neurons within a higher-level layer of cells [i.e. cells which receive iteratively processed input from lower-level cells] whereas the raw visual information is encoded in the retinal cells and the immediate downstream area of early visual cortex. You could have two different 'intentional states' encoded by different layers of the brain or different sets of interacting cells. The brain processes in parallel and sequentially. — aporiap

It is also clear that a single physical state can be the basis for more than one intentional state at the same time. For example, the same neural representation encodes both my seeing the cat and the cat modifying my retinal state.
Okay but you seem to imply in some statements that the intentional is not determined by or realized by activity of the brain. I think this is the only difference we have. I would say an intentional state can be understood as some phenomenon that is caused by / emerges from a certain kind of activity pattern of the brain. — aporiap

"Dichotomy" implies a clean cut, an either-or. I am not doing that. I see the mind, and the psychology that describes it, as involving two interacting subsystems: a neurophysical data processing subsystem (the brain) and an intentional subsystem which is informed by, and exerts a degree of control over, it (intellect and will). Both subsystems are fully natural.
There is, however, a polarity between objects and the subjects that are aware of them.
I'm not entirely familiar with the Kantian thesis here, but I think the fact that our physical models [and the entities within the models] change with updated evidence, and the fact that fundamental objects seem to hold contradictory properties - wave-particle nature - imply that theoretical entities like the 'atom' etc. are constructs. Of course the measurables are real and so are their relations - which are characterized in equations; but the actual entities may just be theoretical. — aporiap

Please rethink this. Kant was bullheaded in his opposition to Hume's thesis that there is no intrinsic necessity to time ordered causality. As a result he sent philosophy off on a tangent from which it is yet to fully recover.
The object being known by the subject is identically the subject knowing the object. As a result of this identity there is no room for any "epistemic gap." Phenomena are not separate from noumena. They are the means by which noumena reveal themselves to us.
We have access to reality. If we did not, nothing could affect us. It is just that our access is limited. All human knowledge consists in projections (dimensionally diminished mappings) of reality. We know that the object can do what it is doing to us. We do not know all the other things it can do.
We observe everything by its effects. It is just that some observations are more mediated than others.
I was trying to say that introspection is not the only way to get knowledge of conscious experience. I'm saying it will be possible [one day] to scan someone's brain, decode some of their mental contents and figure out what they are feeling or thinking. — aporiap

This is very confused. People have learned about themselves by experiencing their own subjectivity from time immemorial. How do we know we are conscious? Surely not by observations of our physical effects. Rather we know our subjective powers because we experience ourselves knowing, willing, hoping, believing and so on.
I jumped the gun by saying they are empirically supported. But as you can see I didn't conjure them up! The more accurate thing to say is that there are neurons in higher-level brain regions which fire selectively to seemingly abstract stimuli. Whether that indicates they fire in response to a given 'concept' or in response to a given feature shared between all those stimuli [e.g. the presence of 'almond-shaped' eyes] or some other feature coincidentally related to the 'category' of the stimuli presented is not known. — aporiap

No, they are hypothetical. Your Wikipedia reference says "The grandmother cell, sometimes called the "Jennifer Aniston neuron," is a hypothetical neuron that represents a complex but specific concept or object."
The support cited in the article is behavioral (which is to say physical), with no intentional component. I am happy to agree that behavioral events require the firing of specific neural complexes. The problem is, a concept is not a behavior, but the awareness of well-defined notes of intelligibility. The article offers no evidence of awareness of the requisite sort.
I would say these states are correlated with awareness, or even that they are awareness looked at in an objective, as opposed to a subjective, way. — Janus
We are informed by what we see and our reasons for saying what we do about what we see are on account of what we see, not on account of those objective processes, of which we are completely unaware until we have understood some science of optics, visual perception and neuroscience. — Janus
What mechanism is the child using to relate a word he hears to an object he sees, in a system of quantitative analysis, that doesn’t have an a priori component? How does he understand exactly what he’s doing, as opposed to simple learning by rote? What do I say to my child, if after saying, “count this as one, these as two.....”, he asks, “what’s a two?” — Mww