That is your particular interpretation of the problem. David Chalmers’ original paper doesn’t say that. — Wayfarer
He never says that the problem is what it is like to be a conscious individual that isn’t ourselves. — Wayfarer
Which he proposes as a 'naturalistic dualism'. The key point being the emphasis on 'experience' which is by nature first-person. — Wayfarer
This question is similar to asking why H2O is wet — Wolfgang
Centralization in the brain brought with it the need for a feedback mechanism that made it possible to consciously perceive incoming stimuli – consciousness, understood as the ability to sense stimuli. — Wolfgang
Actually, direct realism is part of the hard problem. The hard problem arises from asserting that you see the world as it is - as static objects and physical brains - while the mind appears, and is described, as non-physical and immaterial; this does not account for causation, nor for the fact that causes are not their effects and vice versa. I have argued that the distinction between direct and indirect is incoherent. What does it even mean to directly or indirectly access something? I asked you what an observer is, and you didn't answer the question.

In my posts above I'm arguing against the property dualism that is implied in the so-called hard problem of consciousness. The problem also reappears in epistemological forms of dualism, such as in indirect realism, or in any philosophy in which it is assumed that consciousness is inaccessible to our knowledge.
Those are not my problems. I'm a direct realist, and a monist, so there's no need for you to give me a lecture on the monist nature of the world. Likewise, when I'm talking of subjective and objective in their ontological and epistemological senses, I'm not trying to split the world in two. In a monist world, things can have different modes of existing, and some things are observer-dependent (e.g. money) while other things (e.g. mountains) exist regardless of observers. But thanks anyway — jkop
Ok, Husserl might not seem to be a dualist, but the assumption that consciousness is immaterial in the sense that it never appears as an object in a world of objects, implies an epistemological dualism, and the hard problem reappears. For if consciousness is immaterial, then it seems we have no way of knowing what it's like to be another observer, or how immaterial experiences arise in a material world. — jkop
For idealists for whom everything is consciousness, the hard problem does not arise from a metaphysical or epistemological wedge. Likewise, it doesn't arise for direct realists under the assumption that we see objects directly — jkop
For example, a bird observing its environment, birdwatchers observing the bird, a prison guard observing prisoners, a solo musician observing his own playing, an audience observing the musician, scientists observing their experiments, a thinker observing his own thinking (e.g. indirectly via its effects). — jkop
For the player in action the football field is not an ‘object,’ that is, the ideal term which can give rise to an indefinite multiplicity of perspectival views and remain equivalent under its apparent transformations. It is pervaded with lines of force (the ‘yard lines’; those which demarcate the ‘penalty area’) and articulated in sectors (for example, the ‘openings’ between the adversaries) which call for a certain mode of action and which initiate and guide the action as if the player were unaware of it. The field itself is not given to him, but present as the immanent term of his practical intentions; the player becomes one with it and feels the direction of the ‘goal,’ for example, just as immediately as the vertical and the horizontal planes of his own body. It would not be sufficient to say that consciousness inhabits this milieu. At this moment consciousness is nothing other than the dialectic of milieu and action. Each maneuver undertaken by the player modifies the character of the field and establishes in it new lines of force in which the action in turn unfolds and is accomplished, again altering the phenomenal field. (Merleau-Ponty, 1942/1963, pp. 168–9, emphasis added) — Quoted in Précis of Mind in Life: Biology, Phenomenology, and the Sciences of Mind, Evan Thompson
So, something like Aristotelian realism about universals? Well, that would make them more than a mere insignificant mental abstraction; on your take a universal is a real thing in the world, albeit inextricably linked to the particular. I'm not familiar with terms like 'notes of comprehension' or 'essential notes'. You say that the logical distinction is predicated on the fact that intentional objects like concepts differ from materiality not ontologically but by virtue of not sharing these notes of comprehension. Can you unpack this term?

It is Moderate Realism, which sees universal concepts as grounded in the objective character of their actual and potential instances rather than in Platonic Ideas or Neoplatonic Exemplars. Nominalism and conceptualism see universals as categories arbitrarily imposed by individual fiat or social convention.
I mentioned in the post that it poses a problem for programs which require continual looping or continual sampling. In this instance the program would cease being an atmospheric sampler if it lost the capability of iteratively looping, because it would then lose the capability to sample [i.e. it would cease being a sampler]. As soon as the instruction is removed, it ceases being a sampler, and it would suddenly become a sampler again [because it regains the capacity to sample] once the instruction is re-introduced. Even though it runs through the entire program in the thought experiment, during the period when the instruction is removed the program is in a state where it no longer has the looping/iterative-sampling capacity; hence it is not a sampler during that period.

No. Notice that we run all the original instructions. Any program that simply runs an algorithm runs it completely. So, your 'atmospheric sampler' program does everything needed to complete its computation.
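The looping-sampler thought experiment can be put in a few lines of code. This is a minimal sketch, not anyone's actual program; the name `read_sensor` is a hypothetical stand-in for real atmospheric hardware:

```python
import random

def read_sensor():
    # Hypothetical stand-in for a real atmospheric instrument.
    return random.uniform(0.0, 1.0)

def sampler(iterations):
    """Iteratively sample the 'atmosphere'.

    On the view quoted above, the looping instruction is what makes
    this a sampler: strip the loop so the body runs only once, and
    the program loses the capacity to sample repeatedly.
    """
    readings = []
    for _ in range(iterations):
        readings.append(read_sensor())
    return readings

samples = sampler(10)
```

The point at issue in the exchange is whether the program's status as a sampler depends on the loop actually iterating, or merely on the loop instruction being present in the code.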
What do you mean they solve mathematical problems only? There are reinforcement learning algorithms out now which can learn your buying and internet-surfing habits and suggest adverts based on those preferences. There are learning algorithms which - from scratch, without hard-coded instruction - can defeat players at high-level strategy games, without using mathematical algorithms.

The problem is, we have no reason to assume that the generation of consciousness is algorithmic. Algorithms solve mathematical problems -- ones that can be presented by measured values or numerically encoded relations. We have no such representation of consciousness. Also, data processing operates on representations of reality; it does not operate on the reality represented. So, even if we had a representation of consciousness, we would not have consciousness.
These choices are not exhaustive. Take an algorithm which plays movies, for instance. Any one iteration of the loop outputs one frame of the movie. The movie, here, is made by viewing the frames in sequential order. It's okay for some of the frames to be skipped, because the viewer can infer the scene from the adjacent frames. In this instance the program is a movie player not because of the mere presence of the instructions, nor because of the output of one or another frame [be it the middle frame or the last frame]. It also couldn't result from only some of the instructions running; it requires them all to run properly for at least most [a somewhat arbitrary, viewer-dependent number] of the iterations, so that enough frames are output for the viewer to see some semblance of a movie. In this case it's not the output of one loop that results in consciousness, nor the output of some pre-specified number of sequential iterations that results in the program being a movie player. Instead it is the combination of a working program and some number of semi-arbitrary, not-necessarily-sequential outputs which results in the program being a movie player. This is not even a far-out example; it's easy to imagine a simple, early American projector which operates by taking film-strip. Perhaps sections of the film-strip are damaged, which leads to inadequate projection of those frames. Would you say this projector is not a movie player if you took out one of its parts before it reached the step where it's needed, and that it then impossibly becomes a movie player once the part is re-introduced right before it is needed?

In the computational theory of mind, consciousness is supposed to be an emergent phenomenon resulting from sufficiently complex data processing of the right sort. This emergence could be a result of actually running the program, or it could be the result of the mere presence of the code.
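The movie-player example can be sketched in the same spirit. Again this is a hypothetical illustration (the function `play` and its arguments are invented for the sketch), showing that skipping damaged frames still yields a watchable sequence:

```python
def play(frames, damaged=frozenset()):
    """Output frames in order, skipping damaged ones.

    On the view in the post above, the program counts as a movie
    player so long as enough frames are shown for a viewer to see
    some semblance of a movie, not because any single iteration
    (first, middle, or last) completes.
    """
    shown = []
    for i, frame in enumerate(frames):
        if i in damaged:
            continue  # the viewer infers the scene from adjacent frames
        shown.append(frame)
    return shown

# A five-frame 'movie' with one damaged frame still plays.
result = play(list(range(5)), damaged={2})
```

The design point mirrors the projector analogy: being a movie player is a property of the working loop plus a viewer-dependent quorum of outputs, not of any single instruction or iteration.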
If it is a result of running the program, it can't be the result of running only a part of the program, for if the part we ran caused consciousness, then it would be a shorter program, contradicting our assumption. So, consciousness can only occur once the program has completed -- but then it is not running, which means that an inoperative program causes consciousness.
I don't think the multiple realization argument holds here. It could just be something like a case of convergent evolution, where you have different configurations independently giving rise to the same phenomenon - in this case consciousness. E.g. a cathode-ray-tube TV, a digital TV, and some other TV operate under different mechanisms and yet result in the same output phenomenon - an image on a screen.

We are left with the far less likely scenario in which the mere presence of the code, running or not, causes consciousness. First, the presence of inoperative code is not data processing, but the specification of data processing. Second, because the code can be embodied in any number of ways, the means by which it effects consciousness cannot be physical. But if it can't be physical, and it's not data processing, what is the supposed cause?
I am not in the field of computer science, but just from this site I can see there are at least three different kinds of abstract computational models. Is it true that physical properties of the machine are necessary for all the other models described? Even if consciousness required certain physical features of hardware, why would that matter for the argument, since your ultimate goal is not to argue for the necessity of certain physical properties for consciousness, but instead for (1) consciousness as being fundamentally intentional and (2) intentionality as being fundamentally distinct from [albeit co-present with] materiality? I actually think my personal view is not that different from yours, but I don't think of intentionality as so distinct as to not be realized by [or be a fundamental property of] the activity of the physical substrate. My view is essentially that of Searle, but I don't think consciousness is limited only to biological systems.

No, not at all. It only depends on the theorem that all finite state machines can be represented by Turing machines. If we are dealing with data processing per se, the Turing model is an adequate representation. If we need more than the Turing machine model, we are not dealing with data processing alone, but with some physical property of the machine.
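The substrate-independence point behind the finite-state-machine theorem can be illustrated with a toy machine. The sketch below is a hypothetical two-state machine (not from the thread) that tracks the parity of '1' bits; any physical device realizing the same transition table computes the same function, which is the sense in which data processing per se abstracts away from physical properties:

```python
# Transition table for a two-state finite state machine that
# tracks whether an even or odd number of '1' bits has been seen.
TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def run_fsm(bits, state="even"):
    # The machine is fully specified by its table; nothing about
    # relays, transistors, or neurons appears in the model.
    for b in bits:
        state = TRANSITIONS[(state, b)]
    return state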
I agree that the brain uses parallel processing, and might not be representable as a finite state machine. Since it is continually "rewiring" itself, its number of states may change over time, and since its processing is not digital, its states may be more continuous than discrete. So, I am not arguing that the brain is a finite state machine. I am arguing against those who so model it in the computational theory of mind.
I don't understand why a neuron not being conscious, but a collection of neurons being conscious, automatically leads to the hard problem. Searle provides a clear, intuitive solution here, in which consciousness is an emergent property of a physical system in the same way viscosity or surface tension are emergent from lower-level interactions - it's the interactions [electrostatic attraction/repulsion] which summatively result in an emergent phenomenon [surface tension]. In this case it's the relations between the parts which result in the phenomenon, and these cannot be reduced to simply the parts. I'd imagine there's some way you can account for consciousness by the interactions of the component neurons in the system.

This assumes facts not in evidence. David Chalmers calls this the "Hard Problem" because not only do we have no model in which a conglomerate of neurons operates to produce consciousness, but we have made no progress toward such a model. Daniel Dennett argues at length in Consciousness Explained that no naturalistic model of consciousness is possible.
Well, the retinal state is encoded by a different set of cells than the intentional state of 'seeing the cat' - the latter would be encoded by neurons within a higher-level layer of cells [i.e. cells which receive iteratively processed input from lower-level cells], whereas the raw visual information is encoded in the retinal cells and the immediate downstream area of early visual cortex. You could have two different 'intentional states' encoded by different layers of the brain or different sets of interacting cells. The brain processes in parallel and sequentially.

It is also clear that a single physical state can be the basis for more than one intentional state at the same time. For example, the same neural representation encodes both my seeing the cat and the cat modifying my retinal state.
Okay, but you seem to imply in some statements that the intentional is not determined by or realized by activity of the brain. I think this is the only difference we have. I would say an intentional state can be understood as a phenomenon that is caused by / emerges from a certain kind of activity pattern of the brain.

"Dichotomy" implies a clean cut, an either-or. I am not doing that. I see the mind, and the psychology that describes it, as involving two interacting subsystems: a neurophysical data processing subsystem (the brain) and an intentional subsystem which is informed by, and exerts a degree of control over, it (intellect and will). Both subsystems are fully natural.
There is, however, a polarity between objects and the subjects that are aware of them.
I'm not entirely familiar with the Kantian thesis here, but the fact that our physical models [and the entities within the models] change with updated evidence, and the fact that fundamental objects seem to hold contradictory properties - a wave-particle nature - imply that theoretical entities like the 'atom' are constructs. Of course the measurables are real and so are their relations, which are characterized in equations; but the actual entities may just be theoretical.

Please rethink this. Kant was bullheaded in his opposition to Hume's thesis that there is no intrinsic necessity to time-ordered causality. As a result he sent philosophy off on a tangent from which it is yet to fully recover.
The object being known by the subject is identically the subject knowing the object. As a result of this identity there is no room for any "epistemic gap." Phenomena are not separate from noumena. They are the means by which noumena reveal themselves to us.
We have access to reality. If we did not, nothing could affect us. It is just that our access is limited. All human knowledge consists in projections (dimensionally diminished mappings) of reality. We know that the object can do what it is doing to us. We do not know all the other things it can do.
We observe everything by its effects. It is just that some observations are more mediated than others.
I was trying to say that introspection is not the only way to get knowledge of conscious experience. I'm saying it will be possible [one day] to scan someone's brain, decode some of their mental contents, and figure out what they are feeling or thinking.

This is very confused. People have learned about themselves by experiencing their own subjectivity from time immemorial. How do we know we are conscious? Surely not by observations of our physical effects. Rather, we know our subjective powers because we experience ourselves knowing, willing, hoping, believing and so on.
TMF!
If the subjective experience explains the nature of reality, what would explain the information acting [that acts] upon all matter (or emergent matter as it were)? In other words, how do we reconcile informational energy acting upon all matter within our consciousness?
I think that would be one of the missing pieces there, as it relates to your notion that the nature of reality (consciousness) is subjective.
Otherwise TMF, I agree with your Subjective Epistemological/Ontological Problem. This is the problem associated with “subjective” versus “objective” perspectives on being in the world. Of course the way to think about this is that subjective experiential consciousness is fully "contained" within the individual. This containment results in two important sub problems, which are mirror images of each other. The first is the problem of directly knowing another’s subjective experience—the problem being it cannot be done. This is the problem of: “How do I know that you see red the way I see red?” This problem also relates to our knowledge of consciousness in other animals, which we can only know indirectly. This is also related to the philosophical problem of zombies.
Indeed, all subjective experiences can only be inferred via behavior from an objective perspective. The second issue is the inversion of this problem. This is the problem that, as individuals, we are trapped in our subjective perceptual experience of the world. That is, the only way I can know about the world is through my subjective theater of experience. — 3017amen
Thank you for these tutorials in the philosophy of science. But you might want to check your facts. — apokrisis
Of course. In the same way that all theories have to be motivated by a counterfactual framing - one which could even in principle have a yes/no answer.
So are all minds the result of a mush of complicated neurology found inside skulls? As a first step towards a natural philosophy account of consciousness, does this feel 99% certain to you?
If not, why not? Where is your evidence to the contrary? — apokrisis
Does poking this delicate mush with a sharp stick cause predictable damage to consciousness? Well ask any lobotomy patient.
And so we can continue - led by the hand - to where neuroscience has actually got to in terms of its detailed theories, and the evidence said to support them. — apokrisis
All good moral questions. How do you answer them? — apokrisis
I thought it was because we all act the same way. Roughly. Within engineering tolerances.
You might need a neuroscience degree, along with an MRI machine, to tell if a person is indeed built the same way.
You know. Verified scientific knowledge and not merely social heuristics. — apokrisis
What we perceive, feel, and think is experienced from a unique internal perspective. According to the ‘hard problem of consciousness', some of these mental states are separate from, and not reducible to, physical systems in the human body. — Brock Harding
Regarding Quote 02 above, I answer by declaring we humans, unlike the automatons, possess a self who, described functionally, maintains a personal POV of events as reported via the senses & the cogitating mind. — ucarr
How does our scientific process, based mainly within objectivism, render an objective profile of subjectivity? In facing The Hard Problem, have we arrived at the limit of scientific objectivism? — ucarr
If instead the semantics of scientific concepts were perspectival and grounded in the phenomenology and cognition of first-person experience, for example in the way in which each of us informally uses our common natural language, then inter-communication of the structure of scientific discoveries would be impossible, because everyone's concepts would refer only to the Lockean secondary qualities constituting their personal private experiences, which would lead to the appearance of inconsistent communication and the serious problem of inter-translation. In which case, we would have substituted the "hard problem of consciousness" that is associated with the semantics of realism for a hard problem of inter-personal communication that can be associated with solipsism and idealism. — sime
That's not what you said I said. You said:
Your statement implies the belief that commonplace subjective experiences should be easily accessible to the objectivist methodologies of science. It also implies that the subjective/objective distinction is a trivial matter and should therefore be no problem for science.
— ucarr
I didn't say or imply either of those things. — T Clark
...As far as I can see, there's no reason to think that consciousness can't be understood in terms of principles we already are aware of. I don't see any hard problem. — T Clark
...it was an insult. — T Clark
The fact you don't recognize the difference tells me everything I need to know about whether or not to take you seriously. — T Clark
have an understanding of the hard problem. — Moliere
So, whatever that is -- why my red is my red -- that's what the hard problem of consciousness is about. It's the feeliness of the world. And the thought, or so my memory of what I was led to believe at least, is that there is as yet no scientific explanation for why my red is my red (or, perhaps another way to put it, there's no scientific way to tell what my red is -- whether it is your blue or not -- yet I certainly see red) — Moliere
I read Chalmers as breaking from the Cartesian theater, where the duality of a first person being separated from the rest of the movie is the explanation itself. The question is not whether we are only physical beings but whether the methods used to establish what is only physical will explain experience. Chalmers is introducing a duality that is recognized through the exclusion of a phenomenon instead of accepting the necessity for an agency beyond phenomena. — Paine
I like Zahavi’s critique of Chalmers’ position:
“Chalmers's discussion of the hard problem has identified and labeled an aspect of consciousness that cannot be ignored. However, his way of defining and distinguishing the hard problem from the easy problems seems in many ways indebted to the very reductionism that he is out to oppose. If one thinks that cognition and intentionality is basically a matter of information processing and causal co-variation that could in principle just as well go on in a mindless computer – or to use Chalmers' own favored example, in an experienceless zombie – then one is left with the impression that all that is really distinctive about consciousness is its qualitative or phenomenal aspect. But this seems to suggest that with the exception of some evanescent qualia everything about consciousness including intentionality can be explained in reductive (computational or neural) terms; and in this case, epiphenomenalism threatens.” — Joshs
Right, and as I said if there were no experiential dimension there would be nothing else either, so putting the question as to why there is experience is really equivalent to putting the question as to why there is anything at all, or why there is something rather than nothing. — Janus
I don't see it as pessimistic at all, or that anything is lost. What does a solution to the hard problem look like? I can't think of a good one which doesn't imply some sort of dualism, which I fundamentally disagree with. — Apustimelogist
I am not suggesting looking for a fundamental ontology based on computation, but an explanation for why knowing about fundamental ontologies is out of reach.
I think the explanation is actually already there, it just has to be articulated and demonstrated. Like you said, experiences are primitive.
We know experiences are related to the functional architecture of our brains. We can transfer or demonstrate the concept of this kind of primitiveness in the architectures and functional repertoires of A.I. We use A.I. to demonstrate the limits of what kinds of information are transferable from the environment, what kinds of concepts are created and what information they don't or can't include, and then see what kind of metacognitive consequences this has. Does an A.I. come up with primitive phenomenal concepts on a purely functional basis that it cannot explain, similarly to our hard problem? This is a totally plausible research program, even if it may not be possible right at this moment.
Not sure what you mean here but functionally, yes we are just intelligent machines. We are just brains.
I just received my copy of Bernardo (BK) Kastrup's 2020 book, Science Ideated. He doesn't discuss the "Hard Problem" directly, but the subject matter seems to be pertinent to this thread. So, I'll mention a few first-glance quotes & comments here.

In a nutshell: because correlation doesn’t explain consciousness. — Art48
While I can’t know what the subjective experience of a given something is, it seems probable that most things don’t have any. I assume you agree with this. So we’re just trying to draw the most likely line as to consciousness. You say with some assurance that AI programs already have limited consciousness. Is there any evidence for this beyond their behaviors? A purely functionalist argument can’t resolve this, since it begs the question. — J
Not quite sure why the hard problem rules out denying consciousness to computers at some future date, or why you describe the hard problem as “true.” — J
Do I think that any non-living thing can be conscious? No, I’m strongly inclined, on the evidence, to believe that consciousness is exclusively a biological property. — J
The circularity begins when you promise that “when we describe ourselves as being conscious we're describing that non-physiological aspect of ourselves”, and when asked which non-physical aspect of ourselves we’re describing, you answer “consciousness”. — NOS4A2
I’m only arguing that if consciousness does not apply to the physiology, there is no other object to which it can apply. — NOS4A2
The reason I would say no such aspects exist is because there is no indication such aspects exist. — NOS4A2
We determine that computers-do-not-experience-subjectively in the same way we "know" that other humans do experience the world in a manner similar to our own: by rational inference from behavior. So, the Hard Problem is not about the behavioral evidence of Consciousness, but about its lack of material properties. :smile:

You have to understand, if you accept the hard problem as true, you can NEVER state, "Computers do not have a subjective experience." You don't know. Can you be a computer processing AI algorithms? Nope. So if we create a machine and program that exhibit all the basic behaviors of consciousness, you have no idea whether it has a subjective experience or not. — Philosophim
For my thesis, Consciousness (C) is an immaterial state of awareness that arises from a physical process, not an entity that exists as an independent thing. I compare it to the mysterious emergence of physical Phase Transitions, such as water to ice*1. Some ancient thinkers, lacking a notion of physical energy, imagined the living & thinking & purposeful Soul as a human-like agent, or as something like the invisible breath or wind that you can feel, and can see move matter around. Modern Materialism seems to criticize attempts to explain C based on the assumption that the explainer is referring to a Soul that can walk around as a ghost.

1. Consciousness is able to exist despite a lack of physical capability to do so. — Philosophim
Again, in my thesis, Consciousness is defined as a process or function of physical entities. We have no knowledge of consciousness apart from material substrates. But since its activities are so different from material Physics, philosophers place it in a separate category of Meta-Physics. And religious thinkers persist in thinking of Consciousness in terms of a Cartesian Soul (res cogitans), existing in a parallel realm.

2. Demonstrate a conscious entity that has no physical or energetic correlation. — Philosophim
The existence of Matter & Energy is taken for granted, due to evidence of the senses, but the origin of the material world remains a mystery: is it self-existent, or contingent? The Big Bang theory is based on physical evidence observed 14 billion years after the hypothetical event. We now grudgingly accept that our world is temporary, only because the math sputters out at T=0/∞. Is that more like 12am or 12pm on the clock? The evidential Gap, beyond the evidence, can be filled with speculation of Creation, or a Tower-of-Turtles hypothesis.

3. If consciousness is not matter and/or energy, please demonstrate evidence of its existence without using a God-of-the-Gaps approach. — Philosophim
Subjective substance collapsed with objective substance resolves the explanatory gap. — Enrique
The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience.
I'm not sure I quite understand the distinction between first person and third person perspectives — Tom1352
In Consciousness Explained, I described a method, heterophenomenology, which was explicitly designed to be 'the neutral path leading from objective physical science and its insistence on the third-person point of view, to a method of phenomenological description that can (in principle) do justice to the most private and ineffable subjective experiences, while never abandoning the methodological principles of science'.
The dualist would however need to explain why qualia warrants departing from physicalism, which is my understanding of the 'leap' involved in answering the hard problem. — Tom1352
The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.
after a bit more reflection on questions like why consciousness, this universe, or even existence "exists", I began to think that maybe it's our understanding of consciousness that makes the problem seem "hard". — Flaw
The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel ('What it is like to be a Bat') has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience. — David Chalmers
Evolutionary theory can be a step in the direction of dissolving the hard problem, but only if we go beyond classical Darwinism and conceive of organic processes not in terms of causal concatenations and re-arrangements of elements under external pressure, but in terms of a more radical notion of reciprocal differences of forces.
How does one actually get the point across why this is not an acceptable answer as far as the hard problem is concerned? Can this be seen as answering it, or is it just inadvertently answering an easier problem? If so, how to explain how it isn't quite getting at the hard problem? — schopenhauer1
Isn't this what they call the hard problem - How does manipulating information turn into our experience of the world? The touch, taste, sight, sound, smell?
— T Clark
No. — frank
The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect. As Nagel (1974) has put it, there is something it is like to be a conscious organism. This subjective aspect is experience. When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience. — David Chalmers