Comments

  • Analytic philosophy needs affirmative action?


    I'm talking about both. Yale has an uncommonly luxurious undergraduate program, and it would take a while to know what it really involves. The unit titles & outlines can often be vague to keep content flexible for coordinators etc. The closest thing I can see to a unit devoted to taking broad shots at the analytic tradition is something called 'Critical Perspectives on the Canon'. But I do see Marx and feminism mentioned in several other units.

    IF the article is accurate and analytic philosophy has been hardened into a monolith by an ideologically tainted history, the implication is that something needs doing about that. I suspect things have been changing over the last decade, as reflected in the teaching at departments like Yale's, but it's just not fast enough for some, I guess.
  • Analytic philosophy needs affirmative action?
    Does what we have now in academia approximate your proposal? When you talk about 'bad ideas', do you include evaluatively (morally, politically) bad as well as shoddy/low quality ideas? Content regulation as practiced by journals and universities is fairly liberal when it comes to evaluative regulation, but is more variable when it comes to epistemic regulation (you can troll some journals by showing how bad quality articles can get approved, e.g., the Sokal Hoax).

    But it seems like you're just talking about censorship, not affirmative action. You could have your proposal implemented alongside affirmative action, as far as I can tell.
  • Analytic philosophy needs affirmative action?
    Does Blackstone's ratio apply to the issue? Better that a 100 guilty people go free than that 1 innocent person be unjustly punished. Better that we deal with a 100 bad ideas than that 1 good idea be suppressed.Agent Smith

    I think the answer would depend on the details of the affirmative action program. My intuition is that affirmative action doesn't entail that innocent/good ideas are suppressed (censored). E.g., affirmative action through expansion (of curricula, staff, journal admissions) only. But given limited resources and demand, any real world affirmative action program would have to come at the expense of existing elements of curricula etc. Does that count as suppression or censorship? I don't think so. It would only be at the extreme limit of such a program that you would start to see something resembling censorship (e.g., requiring 10 units be provided on socialist thought in a school with funds for only 10 units would effectively suppress the teaching of other subjects). At this point however such concerns are a bit fanciful.
  • Analytic philosophy needs affirmative action?
    Okay I'll try to bring some things together here.

    The claim is that the historicism question matters because it's only if ahistoricism is rejected that the views claimed to be excluded within analytic philosophy can be properly incorporated. Until then, you haven't escaped the monolith that analytic philosophy is depicted as in the article.

    What I'm querying is whether this is true. Because I haven't yet been able to clearly see the cash value, the pragmatic upshot, the real, concrete effect that embracing or rejecting historicism would have on the discipline, including how it interacts with these supposedly marginalised views.

    Earlier I distinguished Platonism from ahistoricism. While logically distinct, the two theses are connected. If you agree with the historicist conception of reason - that which concepts we tend to use depends on our time and place, etc. - then you will probably also disagree with Plato that we are always talking about, or should be talking about, the same things when we talk about personal identity, God, virtue, knowledge, truth, femininity, and so on.

    If making analytic philosophy historicist is also making it anti-Platonist in this sense, then when such philosophers are engaged in philosophical reasoning, they should attach an implicit qualification to their conclusions: I'm not making claims about personal identity as such, just the way it has been construed in my cultural milieu.

    Some discussions would be modified by such qualifications, but not substantially, I suspect. (Perhaps discussions involving thinkers from the past would be more sensitive to the historical/conceptual gap.) Some philosophers would lose interest in their topics if they weren't convinced that they were addressing the same stable topic as the ancients, etc. But I suspect many wouldn't.

    To some extent this is already going on. For example, the expanding field of 'conceptual engineering' within analytic philosophy isn't interested in eternally true conceptual analyses, but rather in the possibilities, problems, and principles that should guide change in our concepts and meanings.
  • Analytic philosophy needs affirmative action?
    If we ignore the labels your proposal seems to be that more diversity is needed in philosophy. But this is quite different than saying more diversity is needed in analytic philosophy.Fooloso4

    Fair point. I should have said 'I'm not primarily interested in labels'. What I'm driving at is that I don't yet see precisely why or how much the historical critique matters. And this is partly because I'm not convinced that a commitment to the ahistorical nature of reason is essential to analytic philosophy. Consider a thought experiment: imagine that philosophers in the analytic tradition concluded that reason is historical. How would that change what they do?
  • Analytic philosophy needs affirmative action?
    I don't think it is anti-historicist but ahistorist, It is not analytical philosophy but its domination that is historical. The assumption that truth is timeless predates analytical philosophy. But analytic philosophy is not monolithic.Fooloso4

    Sure, I agree with that... you probably misunderstood something I said. By saying 'for historical reasons' I meant that the explanation for this tendency relates to historical contingency.

    Reason as it was understood by ancient philosophy is not the same as reason based on the modern mathematical model. Reason has not yielded the kind of agreement and certainty we find in mathematics. Yes, we should use reason, but not the timeless, abstracted, apodictic, mathematical model of reasonFooloso4

    When I say we can still use reason despite this criticism, I am talking about historically embedded reason. That's why I said "We can acknowledge that there's no outside of history to judge competing positions".

    But you're being at best imprecise when you say we shouldn't use "timeless, abstracted..." reason if you don't think that's even possible in philosophy. I think you mean: we shouldn't pretend that we are using such reason.

    I still don't know why you think any of this matters.

    I want to know what ahistorical vs. historical reason means to you. When you talk about the mathematical model of reason, I suppose you're talking about using deductive proofs. If deductive proofs work in mathematics (e.g., geometry), then I don't see why they wouldn't work in philosophy.

    Insofar as I go along with the historicist critique of reason, it is because I agree that the problems we think are significant, the concepts and guiding metaphors we use, and our basic values and assumptions are strongly shaped by historical forces (note: not entirely shaped; that would be an implausibly radical social constructionism).

    The reason for disagreement has to do with the fact that the concepts embedded in deductive proofs are complex and evolving linguistic-cognitive entities that have way more play in them than notions like <triangle> which we intuitively converge upon much more tightly.

    We run into the problem of whether the work of this or that philosopher can still be considered analytical philosophy. While I think that such labels may have some use, it is limited and ultimately counterproductive. We might argue whether someone like Rorty was simply working within and expanding analytical philosophy. How useful is it to attempt to draw clear lines between analytical, pragmatist, and continental philosophy?Fooloso4

    I don't really care about labels here. What I am wondering is how, if the moderate historicist thesis I just endorsed is embraced, would debates in analytic philosophy go differently. I'm not sure anything major would change.

    Contrast this moderate historicist thesis about reason with the thesis that there are no 'Platonic forms' of the concepts we are interested in, in philosophy. I think there is some of that kind of Platonism floating around in analytic philosophy, and I can see how rejecting it might have significant impacts. But I don't see it as intrinsically tied to moderate historicism about reason.
  • Analytic philosophy needs affirmative action?
    So to be analytic about it, according to your interpretation the claims of Schuringa and the people he is implicitly defending are:
    (1) historicism is true (reason and the structure of debate are everywhere shaped by history)
    (2) analytic philosophy is anti-historicist
    (3) analytic philosophy is anti-historicist for historical reasons... to do with protecting the Anglo-American establishment by disarming genealogical critique and thus legitimising the status quo as the outcome of rational debate. (Or that would be my guess as to how the argument might be filled in).

    I'm not strongly committed to rejecting or affirming any of those claims.

    In any case, that's not what I mean by the marketplace of ideas and the liberal ideals associated with it. Although there is a tendency to go in that direction, I don't think you have to be anti-historicist to be a liberal. I mean, look at Richard Rorty. And even Rawls acknowledges that the basic values at the core of his theory are historical inheritances that he simply assumes as defining the boundaries of the "reasonable" (especially in his later work).

    Free speech is still good, basically. So is rational debate. We can acknowledge that there's no outside of history to judge competing positions, but still think that we should use the reason we have to decide which ideas to go with.

    Where does this critique leave us, do you think? What is the upshot? How different does analytic philosophy really look if we interpret these critiques correctly... if historicism is taken seriously and even embraced?
  • Analytic philosophy needs affirmative action?
    It is not as if there was a marketplace of ideas in which all are welcome to display their wares and most buyers chose analytic philosophy because they had shopped and determined that it is the best alternative. Analytic philosophy came to dominate because it was, so to speak, the only thing that was safe for sale in the marketplace.Fooloso4

    Right. But how do we deal with this fact? I tend towards liberal principles as a baseline, but think we might have to look elsewhere for how to deal with certain cases. I want to say that we should restore the marketplace when it seems to have been deformed by unfair forces, but it's hard to make sense of 'restoring the marketplace' in purely liberal terms, since liberalism doesn't deal well with history. After all, the marketplace could be completely free today, but still unfairly advantage certain players [analytic philosophers] due to historical events. In that case - which I think approximates reality - what should the liberal prescribe? Can they even recognise a contemporary violation of liberal justice that ought to be corrected in this case?

    Let's compare with affirmative action in the workplace. I think you can make a liberal case here. We can decide on which group to boost based on an assessment of unfair disadvantage. If some group is not 'in demand', that could be because they don't merit demand, or they are unfairly discriminated against even though they have merit, or have been unfairly deprived of the chance to develop certain merits. The latter two conditions may represent violations of liberal rights to equal opportunity.

    In the case of the marketplace of ideas, though, it's a bit harder. It is hard to assess what counts as an unfairly disadvantaged idea (as opposed to a group; it's easier to make cases about biased representation of groups in the academy). If some idea is in 'low demand' because the majority of people genuinely aren't attracted to it (don't find it reasonable or compelling), how can you say that its low demand reflects some injustice, or some irrational prejudice?

    Perhaps you can make some epistemic 'irrelevant influence' argument: were it not for X, you would not believe in P. I like this strategy, but it might lead to some fairly radical, hyper-rational conclusions that fit uneasily with an historicising approach, unless you go full Hegelian, maybe (I don't know enough about Hegel to say). It suggests we should correct for any pattern in the intellectual marketplace that is due to irrelevant factors. But what counts as an irrelevant factor? Perhaps much of history is irrelevant. Indeed, I think it's plausible that our reason is powerfully shaped by history. E.g., that our starting assumptions and intellectual toolkits are shaped by previous generations and hence the historical forces that operated on them, such as the Red Scare. If so, then it's only if you think of historical developments as following a rational pattern that you escape the conclusion that a wide swathe of theory and belief shaped by history is noise that must be 'cancelled out' by opposite intellectual frequencies.

    If you don't want to accept that conclusion, then you must find some other way to justify corrections in a free marketplace that has been shaped (i.e., allegedly biased and corrupted) by historical events.

    But note that this irrelevant influence argument isn't a liberal one. I'm not sure exactly how a liberal would justify market interventions in such a case (unless the ideas in question have clearly harmful effects that violate individual rights).
  • Analytic philosophy needs affirmative action?


    Tell me if I read you right...

    You point to two types of defects in the marketplace idea. The first is the generic problem of human stubbornness, it seems to me. I think you're right that most philosophers are pretty settled in their views, and this is truer when you're talking about broader/methodological views (e.g., Wittgensteinians rarely become analytic metaphysicians). The second relates to the structure and circumstances of the institutions within which the practice of philosophy is supported (and deformed). Bad incentives associated with corporatisation, censorship, administrative meddling, employment insecurity inhibiting experimentation and risk-taking, etc.

    You seem to be saying that while a liberal attitude may (may) be a virtue in a perfect world, the fact that the above defects are severe means that it simply entrenches existing biases.

    The liberal attitude is part of the problem. It is based on the fiction of autonomous individuals. More and more academic freedom has become a fantasy. The ivory tower is a fantasy. As Schuringa argues, analytic philosophy is not "above history and politics".Fooloso4

    I don't think it's based on that fiction. You can acknowledge the defects just described and yet think the best way forward is a liberal attitude. It's not a panacea, but it's an enabling condition for discursive progress (perhaps a necessary condition). For simplicity, I'll just identify it with a Millean free-speech view. Without such a view, one has little reason not to censor, ignore, and traduce one's opponents. Those hidebound professors may not change their views after debate, but their engagement in open debates (because they have a liberal attitude) makes such a change possible, and their support for a system that encourages debate keeps that possibility alive indefinitely.

    But if a liberal attitude isn't the panacea, what is? As I say, it could be that some non-censorship form of regulation is justifiable (e.g., affirmative action). I'd just have to think about/see what that would look like in more detail...
  • Analytic philosophy needs affirmative action?
    I'm confused, Banno. Surely analytic philosophy is more like a methodology than a set of claims. How can a methodology be complete or consistent?

    That's a real (and interesting) question, by the way.

    One way of answering the completeness question: A methodology may be complete in the sense that it affords an inquirer a complete view of the object of inquiry. But no single methodology worth individuating could claim such comprehensiveness, it seems to me.
  • Analytic philosophy needs affirmative action?
    A general point: I feel like hardcore partisans of any stripe are apt to make exaggerated claims of exclusion and marginalization. Anything less than comprehensive prominence and they claim to be an underdog because there is some sphere or institution in which their preferred view or team struggles. Hence these interminable debates about whose team is really being 'oppressed' (feminists vs. men's rights advocates for example, or populists vs. liberals... everyone thinks they are under threat against enormous forces, often in unfair ways). Here the focus is Marxism. The fact that no philosophy units are dedicated exclusively to Marxist or socialist thought at a university, for example, might be taken as evidence that it is not taken seriously there. But others will say that the fact that there's at least one week devoted to it in X, Y and Z intro courses is evidence that it is omnipresent.
  • Analytic philosophy needs affirmative action?
    As long as the assumption that there is a free marketplace of ideas is not called into question a call for affirmative action will only yield strangely deformed products of rather than real alternatives to the marketplace.Fooloso4

    Yeah what does that mean though? I didn't get that. It's presented as a bad thing that feminism is regarded as a move in an ongoing debate. But surely that is the way to treat it, rather than as some privileged insight, some Truth, that comes from a transcendent realm.

    Edit: Maybe that's a little too quick. Maybe the complaint is that there isn't sufficient space to critique the terms of debate, in some sense. I don't think that's entirely right, but I don't think it's entirely crazy either. People get invested in the terms of a debate, and will resent those who want to start a new game or say their game is unimportant, silly or based on mistaken assumptions. Perhaps as a defensive move they try to integrate the criticism into the current game, rather than take it as a proposal to start a new one. Insofar as this is achieved, the criticism loses its force and fails in its aim. If so, then the complaint is justified. But I also think the very act of proposing a new game is a 'move' that should be part of an ongoing debate, and that there is space for this kind of thing. It's just not what everyone is interested in engaging in, and so you may get a few disgruntled coughs and eye-rolls in the seminar room where the old game is being played. What really needs working out is what 'the terms of debate' and 'game' amount to.

    I think one question that must be asked is: where is the marketplace of ideas to be found? Will it remain primarily in academia or will media sources become increasingly influential? Will anti-liberal political and economic forces increasingly shape both academia and media?

    I think academia is still one place where it is found in pretty good (not perfect) shape. At least in my experience. Politics and journalism are more compromised. Social media is somewhere in the middle, I think.

    There are two big threats to the marketplace: soft social regulation/censorship and hard coercive regulation/censorship. Both exist and the liberal attitude is a bulwark against each.
  • Analytic philosophy needs affirmative action?
    Is there really something more to "analytic logic"?
    (This is not a rhetoric question. It's an actual one! :smile:)
    Alkis Piskas

    Well, according to the article it's not about a unique system of logic but about a way of doing philosophy that isn't focused on social action and that embraces the marketplace of ideas, rational choice theory, liberalism, etc. Stuff that fits with American liberal-democratic capitalism.
  • Analytic philosophy needs affirmative action?


    I don't think analytic philosophy should be the sort of thing that can be consistent or complete. It is not a thesis...
  • Analytic philosophy needs affirmative action?
    I haven't read a lot of history about what longer-term effects the '49 Red Scare had on academia. At first there was a definite liberal chill, but then..., say by 1969 or 1979, what?BC

    Schuringa's article is talking about the '49 Red Scare and the whole cultural milieu associated with it, and argues that it had long-lasting effects. He argues that it became hard to succeed if you were perceived as not being on 'team America' or hospitable to its orthodoxies.

    The immediate post-war/Cold War period was the forge in which what we have come to understand as analytic philosophy was cast, and even once its fire cooled, its whole monolithic structure had hardened.

    How this happened is not spelled out, but it's not hard to imagine. This kind of cultural inertia is common and familiar. E.g., when all the prestigious people left have a certain viewpoint, their ideas will set the terms and content of academic debate, they will have influence on who is hired, what is taken seriously or is regarded as interesting, and so on.

    The best model for the market place of ideas is unfettered free trade. No quotas, no diversity programs, no affirmative hiring. Mao Tse-Tung said, "Let a thousand flowers bloom, a hundred schools of thought contend". Seems like a good idea for Academia, but as in China, eventually the management will have had enough odd flowers and weird schools, and the brakes will be applied.BC

    But as with free trade, late-comers, or those who have been historically set back (e.g., by war, or in this case a purge), may not be able to develop the capital to compete in the most profitable sectors without some level of protection.

    I'm on the low rungs of academic philosophy and I can say that I have encountered some level of (quasi-explicit) pressure not to be too 'radical'. E.g., eye rolls when suggesting Marx in talks over the design of a political phil curriculum. But see, it wasn't like 'Marxism is stupid and we don't need to discuss it', it was more like 'this is analytic philosophy and Marx stuff is often convoluted continental-ish and doesn't fit nicely into the other debates we're looking at'... Analytic Marxists like Cohen did a good job of trying to fit it into the analytic tradition so it started to get more attention, but there's still that reservation. And I can't help seeing a bit of a cultural cringe among some in the 'old guard'.
  • True or False logic.
    One very good example of a fuzzy/vague concept is tallness/shortness. However, once we fix a particular height as a cut-off point, the vagueness/fuziness disappears.

    Two things to consider:

    1. Adapt logic to our conceptual schema: Vagueness is part of our language. Develop fuzzy logic.

    2. Adapt our conceptual schema to binary logic: Use precising definitions. Keep binary logic.
    TheMadFool

    You seem to be saying that we can choose whether our concepts are fuzzy or not.

    If we choose for them to be fuzzy, then, it seems to me, we face a further choice: we can reject LNC or we can reject bivalence (an inclusive 'or': we might reject both).

    If so, then whether the fundamental principles of logic are true or not is a choice.
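    To make the fuzzy option concrete, here is a minimal sketch of my own (not TheMadFool's), using Zadeh-style operators where a proposition's truth value is a degree in [0, 1], negation is 1 − v, and conjunction is min. On these semantics a borderline case of 'tall' makes an instance of (P & ~P) come out partly true, so LNC is not fully true, and bivalence fails as well:

```python
# Fuzzy-logic sketch: truth values are degrees in [0, 1].
# Negation is 1 - v; conjunction is min (Zadeh-style operators).
# 'tall = 0.6' is an illustrative, made-up degree for a borderline case.

def neg(v: float) -> float:
    return 1.0 - v

def conj(a: float, b: float) -> float:
    return min(a, b)

tall = 0.6                               # borderline: 'tall' to degree 0.6

contradiction = conj(tall, neg(tall))    # value of (P & ~P): min(0.6, 0.4)
lnc = neg(contradiction)                 # value of ~(P & ~P), the LNC instance

print(contradiction)  # 0.4 -- (P & ~P) is not fully false
print(lnc)            # 0.6 -- this LNC instance is not fully true
# Bivalence fails too: 'tall' is neither 1 (true) nor 0 (false).
```

    On a crisp value (tall = 1.0 or 0.0) the contradiction drops to 0 and LNC comes back to 1, which mirrors the point above: precisify the concept and the fuzziness, along with the pressure on classical logic, disappears.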

    Is it possible for things to be both true and false at the same time or neither true or false at the same time? Or must things be either true or false at any given time?TiredThinker

    My tentative view is that it is indeed a kind of choice whether LNC is true or not. You cannot prove it.

    If I chose to reject it, don't tell me that I am entangled in a performative contradiction! All I will have said is that it is not the case that ~(P&~P). That is, I have denied the LNC itself. So I admit that it could also be true. We don't know if it's true yet. You can argue for it being true. Fine. But that doesn't exclude its being false, on my view, if I reject it. Even if I did concede that it is true, I can also maintain that it is false.

    There's a paper by Fogelin ('Why Obey the Laws of Logic?', 2002) where he tries to argue that rejecting LNC deprives one of the ability to make assertions and denials. I don't think his argument works. Rejecting LNC is rejecting that (P&~P) must be false for every proposition P. It is not asserting that (P&~P) is true for some proposition P.
  • Why not Cavell on Ethics?
    Of course, the boundaries around what counts as a good excuse are vague, but it seems like there is a spectrum between things it is simply unreasonable (according to the prevailing grammar) to count as good excuses, and things which it is reasonable to argue over in the ethical register. It depends on what our personal commitments are whether in any given case the excuse offered by someone who has broken a promise is sufficient.
  • Why not Cavell on Ethics?
    The words negotiate or coordinate make it seem like we decide, say, what an apology is, but that is of course already just a part of our lives, like choosing. We may have reasons for promising, but we individually don't conceive of what promising is (with reasons ).Antony Nickles

    Well yes, in a sense. Cavell seems to agree that there are facts about what a promise is. It is part of the grammar of 'promise', for example, that to promise is something like: to express a commitment to stick to a plan unless there is a very good reason not to (i.e., some adequate excuse) or some reason why it was not possible (a cancelling condition). It also seems to be part of the grammar of 'promise' (for Cavell) that certain things count as very good reasons and others don't (e.g., 'because I felt like it' isn't a good reason). These are facts about a practice; in any given case, an individual does not have control over this practice. If an individual thinks a promise is something else, or that personal whim is a good reason to get out of keeping a promise, they are simply "incompetent" (that's the word he uses) in the relevant practice or form of life.

    But it seems to me that it is possible to try to change the practice. One might call this inventing a new practice, depending on the degree of continuity. But in any case, there will be a kind of negotiation. I don't see how this is inconsistent with the above view. Would you agree? Does this fall under the "politics" you refer to?

    In any case, the main thing I was trying to get at was the way in which we may argue - negotiate, if you will - over things like whether one has promised, or simply expressed an intention, or something like that.
  • Why not Cavell on Ethics?
    According to this....err, rationality, because I’ve never committed an abominable moral act, which is a particular deficit in experience, I lack wisdom with respect to what my judgement should be, given the occasion for the possible commission of such an act. But if I follow a perfectly rational method for obtaining sufficient knowledge of myself, what my act on the occasion of possibly committing an abomination, should already have been determined, which immediately presupposes reason is the root of ethical wisdom.Mww

    My talk about experience was a bit of a polemical tangent. I wasn't trying to represent Cavell's own view. Even so, I think you've been uncharitable. I can say that we form better judgements about some action A the more experience we have, without making it a necessary condition that we have experience of doing action A.

    I think Antony has made some really nice points about why Cavell is not just doing moral anthropology. It's exactly the emphasis on the personal that gives it away. It's not like you 'discover what is right' by doing a survey of members of your culture.

    It is through experience participating in a form of life that you begin to develop a moral sensibility, a way of seeing and a sense of self (inseparably). This sensibility is ours alone. It is what we come to embrace as that which we could feel proud to take responsibility for (to 'own').
  • Why not Cavell on Ethics?
    I appreciate you taking the time, Antony. It's helpful.

    We "give and take" reasons because Cavell pictures a moral moment, an event where we are lost or conflicted within our culture so our acts carry from our aligned lives into a sort of extension to an unknown with each other.Antony Nickles

    I haven't read this part of Cavell, it seems. I took the give and take of reasons to occur when there is a conflict between any two sets of commitments. This can take place even between people in different cultures. Nobody needs to feel lost or conflicted with respect to their own culture. But perhaps I misunderstand you.

    And so we define ourselves by what we are willing to accept the implications for, what acts we take as ours, at this time, here, in response to the other, society, etc. And thus knowledge is not our only relation to the world (it is also our act). We do not 'know ' another's pain, we acknowledge it, react to it (or not).Antony Nickles

    This is a little confusing. I get the first part. But I'm not sure how the second follows. I would have said 'knowledge is thus about our relation to the world, which includes how we act with respect to it'.

    So these interests and my interest most times align, but when they conflict, they do so reasonably, for reasons and from the everyday logic of each thing we do, or at least possibly, as we may fail to come together. This is the hope, and fear and dissapponment with the moral realm at all.Antony Nickles

    One thing I got from your post was a reminder about this tantalising idea of the 'logic' belonging to things - to actions, objects, persons, cultures, discourses (just about anything we have a concept for?). I take it this 'logic' is basically the same thing as 'grammar'. And then there's the negotiation of or coordination among our various grammars or logics (I would say 'interpretations' of those things, but that would be to reify that which we are negotiating, as though there is something apart from the interpretation that we are trying to get at - the 'true logic'). Considered broadly enough, this negotiation seems to be the whole of moral conflict. For example, we are negotiating (or affirming different conceptions of) the practice of promising, from our different personal commitments and reasons - when has a promise taken place, what are good excuses for failing to keep a promise, and so on. Not in the abstract, but in relation to some particular case of promising, I take it.

    Elsewhere he specifically addresses what he calls Moral Perfectionism, but it is each individual, in a sense, doing what they find their duty is to themselves, with the same sense of accepting responsibility. And the methods would be, as well, to learn the makeup of the activity (it's implications, criteria, judgments) that we are involved in.Antony Nickles

    But taken as an individual pursuit, it cannot simply be a question of aligning one's behaviour to one's authentic sensibility or some such thing, right? Surely, it is also a question of how to cultivate one's sensibility.

    Could you help me make sense of how Cavell understands this question, given that there is nothing, no ideal independent of the individual, to aspire to? Is it a kind of dialectical unfolding, where we aspire to cultivate new aspirations, which lead us to go after yet newer aspirations, and so on...?

    It's tantalising but I don't feel like I can entirely make peace with it.

    The consistency is our culture, all our lives, and, when it comes down to it, in a moral moment, me, who I am to be.Antony Nickles

    This is just mysterious to me.

    Excellent reading recommendations. Thanks!
  • Why not Cavell on Ethics?
    Impressions: he seems reasonable, likable, decent.Zugzwang

    I thought I'd add some random anecdotes. A former student of his, a professor of mine, said he was as good in conversation as he was on the page (and equally elaborate). An immensely sensitive man, he could be warm, but was also apt to be 'prickly'. Apparently, he never got over some rejections and hostile reactions from powerful academic colleagues in his early years. It's a wonder he remained as generous to the discipline as he was. He never took cheap shots and went to considerable lengths to speak the language of his opponents while basically repudiating their whole schtick from the ground up.... His life was one of ceaseless engagement with culture: a musician, serious academic and cinephile, he could never seem to make any time for sleep.
  • Why not Cavell on Ethics?
    In any case I would even suggest that your questions about ethics - "what are we doing? What are we aiming at?" ought to be read back into ethics as the sine qua non of ethical practice itself: that the demands that ethics makes on us are demands to grope at finding whatever partial, workable, passable solutions to just those questions. And those are questions of life and practice that cannot be closed off by any theoretical investigation that would provide any kind of ethical guidebook from on high.StreetlightX

    That's well put.

    I guess the idea of moral progress is just deeply compelling for me. I want to rank forms of life across time and space.

    Rather than claiming that this ranking transcends my particular outlook - that there is objective moral progress - perhaps this thought can be recovered within a Cavellian spirit through the notion of rationality as a kind of sensitivity.

    He writes that the "rationality [of ethics] lies in following the methods which lead to a knowledge of our own position, of where we stand; in short, to a knowledge and definition of ourselves."

    It seems to follow that we can at least 'rank' how rational people are when doing ethics, if we like. To do so is to estimate how much whatever it is a person is doing when they do ethics leads to self-knowledge.

    Elsewhere, he says something like “Let your experience of the object teach you how to think about it” (from memory).

    I like that. It makes me think that the methods of ethics as self-knowledge are about sensitivity. Self-knowledge sounds self-oriented. But it needn't be.

    He seems to take a phenomenological approach, in which studying the world - other people, their behaviour, what we make of it, and so on - is to study ourselves. It is both at once. On the one hand we are being genuinely sensitive to others, to the world - to the cares, predicaments and experiences of other beings. On the other, this discipline, if we are rational, will help us to know how to think about and deal with those things ("teach us how to think about them"), as well as help us know ourselves (because who we are is in large part about how we think about and relate to the world).

    The idea that there are gradations of sensitivity by which we can rank people or even entire cultures is particularly vague and labile, however. But I can't help thinking there must be some correlation between this sort of sensitivity and (my) judgements of moral sophistication & progress. A correlation which might help put my worry to rest. But I'm not sure.

    But really, once you've read Cavell, most discussion of ethics - in a philosophical setting anyway - come off as unbearably stilted and artificial. It's great.StreetlightX

    Yeah I don't think there is any turning back. I have always felt uneasy doing ethics in the usual way. It's not just artificial; it also seems weirdly interfering, totalising, and archaic. Many philosophers come across as moralists in the worst sense, i.e., judging everybody in this impersonal way using the grand language of duty, the good and the right, virtues and vices. I am also reminded of Plato, Aristotle and more recently Alasdair Macintyre, who advised that philosophy - but moral philosophy especially, I think - is best left for those with a bit more life experience. Macintyre is particularly scathing about the narrowness of the typical academic life and how it distorts academics' thinking when it comes to these questions. You can't shortcut a deficit in experience with the sheer power of reason, because reason is not at the root of ethical wisdom.
  • Why not Cavell on Ethics?
    Could indeed be a new way to be nice and upper middle class on the safe green front lawn, chatting with neighbors.Zugzwang

    Thanks for your reply Zugzwang. While Cavell's philosophy itself might be a 'nice' and 'optional' activity for a privileged minority of people with a certain disposition, this is probably the case for most philosophy. What is cool about Cavell's picture of ethics itself, however, is that it isn't this expert academic thing for hyper-rational philosophers. So I think your objection here is wide of the mark.

    Perhaps 'impersonal' reasons were always about appealing to the other's personal reasons.Zugzwang

    Yes I think that's often right. One could reinterpret (interpret?) applied philosophical ethics, and to some extent normative ethics (virtue ethics, deontology, consequentialism, etc.), as attempting a weird kind of systematisation of, or at least systematic inquiry into, personal reasons. Philosophers often take themselves to be leaning on intuitions, and pushing other people's intuitions around using rational inferences to derive certain conclusions. If those intuitions are thought of as emerging out of personal commitments, rather than deliverances of some special ethical sense, then the apparently impersonal normative judgements and imperatives they arrive at could also be thought to express a personal practice. It's just that this is not what they think of themselves as doing. And their methodology is, if not somehow 'illegitimate', then overly narrow. Ethics may involve this sort of philosophy, sure. But it is more than that. And the philosophy - the systematic reflection - is secondary to the real action, which is at the level of actual life, with all its contingency, flux and messiness.
  • The PUA Theory of the Origin of Language
    One necessary condition is that there be a sufficient richness of vocalizations already present for thinking to be able to do anything at all. Otherwise as you say it would just be playing sounds in the head to no effect.hypericin

    Ok I'm starting to see the idea more clearly now. So is the thought that the crucial auditory simulations that this neurological development facilitated were simulations of the organism's own vocalisations? Prior to language, these vocalisations would have been, I guess, various emotive vocalisations. I'm no anthropologist or primatologist, so I can only consult my cartoonish intuition about what these might have been like - an intuition tutored by my culture in various peculiar (non-universal) ways, no doubt. I am thinking of: gasping when surprised, yelling or roaring when angry, crying and whimpering sounds when sad, sighing when contented, laughing sounds when amused, moaning when experiencing pleasure, and so forth.

    Then is the idea that we use these resources to develop a proto-language - that these subvocalisations (auditory simulations of vocalisations) begin to increase in complexity and eventually stand as 'signs' with something like semantic content which we can manipulate into strings (sentences)? This makes it sound like a private language. But I don't think that's what you have in mind, is it? The development of this increasingly complex subvocalisation must be informed by social and cooperative processes, as everyone is going through the same developmental process.

    The next part of your theory is that the increase in complexity that eventually leads to full-blown language is fuelled by sexual selection. Maybe that's true. I don't know.

    I'd still like to hear more about what advantage the auditory connection (auditory simulation ability) conferred, if any. Is it to do with increasing the richness of our imaginative capacities, and hence improving our learning from the past and planning for the future? Of course, it may have been due to other evolutionary processes, but we all like an explanation in terms of adaptation.
  • The PUA Theory of the Origin of Language


    Okay sure so we've got the ability to play sounds just like we're hearing them. How does this lead to language acquisition? It seems like there are many possible forms of language, but that this neurological development inclined us towards a verbal form, though it hardly explains the emergence of language itself (it might be a necessary condition for a certain form of language). I'm not ruling out a deeper connection... your second post indicates that this is where you are focusing now. If I think of anything further useful to contribute I'll post.

    I enjoyed your mental exercises. I think I can do pretty well in simulating visual images, tastes and textures. I've never been good with smells.
  • The PUA Theory of the Origin of Language
    Not sure all the pieces of an explanation are present here... What does the "connection from the brain's central processing *back* to the auditory processing pathway" do exactly? It's hard to imagine what thoughts sound like before language, insofar as those thoughts are not already auditory contents. And what is its relation to the eventual emergence of language? A necessary condition only? Or was it sufficient?
  • Intensionalism vs Consequentialism


    Can't the utilitarian say that the right action is that with the highest expected utility from the actor's point of view?

    In your example of saving the drowning child, the actual negative outcomes of the action do not constitute its expected negative utility. Hence, it is not wrong. Indeed, the action is right: the actor has reason to think that the expected outcome of her action will be positive.

    Small note: 'intensionalism' is standardly understood to be a view in philosophy of language relating to intensions (there is more to meaning than reference or extension). You mean intentionalism.
  • Leftist forum
    Philosophy is stressful in that you are constantly challenged.

    Philosophising about politics is even more stressful in that you are constantly challenged about stuff which is (increasingly) close to your heart and identity. The nature of the challenge is also likely to be more heated. I haven't followed your interactions here, but I wouldn't be surprised if there was a little less dispassionate analysis and a little more dismissiveness coming from those who disagree with you. At least, that's been my experience talking about politics. I have conflicting feelings about what we should think about that. On the one hand, it is perfectly understandable given the essentially practical nature of the inquiry - ultimately, our political philosophy must be oriented towards getting people to do stuff we think is just (I think ethics and politics are fundamentally practical disciplines). On the other, I think we should strive to be as dispassionate as possible in the context of philosophising.
  • What is "gender"?


    In orthodox terminology, it looks like (1) is sex and (2) and (3) are different ways of thinking about gender. (2) is an internalist account. (3) is an externalist account.

    My trouble with (2) the internalist account is that I struggle to get a grip on its content, independent of (3). I have been reading Wittgenstein's PI (halfway through) so I'm thinking in terms of how words get their meanings. If Wittgenstein is right, the meaning of a word like 'woman' can't be some private experience or state.

    Suppose someone says that they are a woman. Yet in their appearance and behaviour, they display the characteristics of a man. That is, their appearance and behaviour attract our application of the word 'man', given how we have been trained to use that word. What do we do, now? If we insist that meanings must consist in something like patterns of public usage, we cannot say that the meaning of 'woman' is an inner feeling of some kind.

    I suspect that the best response here is to say that there is public stuff which can form the basis of a sensical language game for gender terms like 'woman'. Someone who sincerely says that they are a man will have characteristic features that we can track in some sense.

    Perhaps a very minimal meaning might be: being attracted to and identifying with things tagged masculine (here we see the appeal to (3): (2) only makes sense in relation to (3)). Even if they don't act that way, this may be because of repression and training to conform to their 'assigned gender'. Perhaps over time they would begin to act and look more that way. But even if they did not, we might say that a man may simply be someone who, regardless of appearance, enacts their affinity with manly things.

    Note that because there is no essence to 'woman' as a gender term, we are 'allowed' to imaginatively extend its application in ways that might violate certain features we thought (wrongly, because there are no essences) were essential. And there can be quite a lot of variation among these ways. The point is that whatever the meaning of 'woman' is, it must be found in (evolving, messy) public language games. Publicly invisible self-identification is insufficient. If someone said they were a woman, but we could find nothing in their manifest behaviour or appearance to indicate that we should extend the term 'woman' to them, we would be tempted to say that they were making a mistake - that they had failed to understand what 'woman' meant.
  • Is anxiety at the centre of agricultural society?
    I think it would be fair to say that with the development of agriculture in exchange for security people lost their real freedom. Whether or not they knew that I don’t know. But those imposition on their freedom became more and more severe. In that sense I can see how it could be argued that agriculture and its consequences created “anxious man”.Brett

    Yeah I think it did increase our stress levels. Inequality does that. Especially for those lower in the hierarchy. If we use socio-economic status as a proxy for place in the hierarchy, there's a tonne of research on how it affects our health, apparently mediated by stress. And the relation is actually stronger when we consider subjective as opposed to objective SES background.

    Interestingly, it appears that inequality also correlates with negative health outcomes for those at the top as well as those at the bottom. A nice argument for the holistic perspective: the individual flourishes only when the whole flourishes.

    But was it an acceptable trade off? Obviously if it was not we would not be talking about it. And if not agriculture what other innovation might have entered our world?

    It was and is acceptable in the sense that the alternative is death for all those individuals in excess of 10 million - the estimated maximum number of human beings the Earth can support as hunter gatherers.
  • Is anxiety at the centre of agricultural society?
    Yeah, that's the real crazy question, why did they?

    Agriculture-based states existed only in very specific environmental conditions; conditions that minimized how much work was needed for agriculture to work, and conditions that offered no other obvious alternative. Ancient Mesopotamian city-states were dependent on the flooding of the Tigris and the Euphrates to do a lot of the hard work for them (but certainly not all of it); it would be unimaginable to see a city-state in a different environment, like the mountains.

    But even still, ancient Mesopotamia was not a desert, and there were plenty of other alternatives to agriculture nearby to the rivers at the time (unlike how the region is today, which is an arid desert). Many people were able to live outside of and independent of the states, and many tried to escape as well. If agriculture-based societies were an obvious benefit to anyone, why were the majority of humans living outside of them for the majority of human history, and why were so many people trying to escape?
    darthbarracuda

    From a little bit of research, it seems that the 'why then?' question has to do with the ending of the last ice age and population growth. A warmer climate made agriculture possible (or significantly more lucrative), and the fact that humans had multiplied and spread over the entire habitable world by around 10,000 BC made agriculture (more or less) necessary for regions where over-exploitation occurred. At this point, the options for desperate H/Gs were then either (1) increase productivity of the region presently held, which was now much more feasible due to the warmer climate, or (2) displace (run off or exterminate) neighbours and take their land. Seems like (1) would have been preferable.

    Some point out that there was an inter-ice-age period before 12,000 BC, so why didn't agriculture emerge then? It seems to me like it could have been too early because the human population hadn't reached planetary saturation, and there might have been some necessary biological-cognitive and/or cultural development that was yet to occur to facilitate the invention of full-scale agriculture, but I don't know.

    As for why people tried to run off, I recently read an article by a libertarian economist (Rubin, Hierarchy, 2000) which has given me some ideas. He distinguishes dominance hierarchies from productive hierarchies, and argues that the latter really emerge only in sedentary, agricultural society. Although he never properly defines the term, I take it that productive hierarchies are those in which superiors determine the allocation of work. Dominance hierarchies, in contrast, are those in which superiors determine the allocation of goods (e.g., food, sex, territory).

    When productive hierarchies are appropriate, they increase total productivity (e.g., by reducing conflict over who does what). The reason PHs only get going in agricultural societies is that they are only useful when you have a significant degree of division of labour and specialisation. Rubin seems to assume that PHs prevailed and were more productive than all alternatives in early agricultural civilisation, with surplus output benefitting all members of the hierarchy, albeit unequally. So it would be rational from an economic point of view for H/Gs to join such hierarchies, even if they ended up being on the bottom.

    He also notes, however, that there are fitness considerations that go into the rational calculations of would-be joiners: does joining increase or decrease my reproductive potential? In early agricultural societies, polygyny was common, and dominants in the PH captured most of the women. So that would be a major disincentive to joining.

    In 'non-rational' terms, it would have been an enormous cultural shift for H/Gs, who were used to egalitarian structures. Also, Rubin suggests that because of the fitness-reducing effect of dominance hierarchies for non-dominant humans in our evolutionarily formative period, we probably have a hardwired aversion to dominance hierarchies. So no matter what is rational in a given situation, we might be biased against hierarchies of any form.

    This helps explain why people tried to run off. But it doesn't explain why enough people opted in to support the expansion and spread of agriculture before socially enforced monogamy. It doesn't seem like a great mystery to me though. Surely there was variation in the payoffs, and it just happened that in enough regions opting in was sufficiently attractive to get a good seed population, and once the whole thing got going, you get the lock-in effects... specialisation and domestication of humans makes exit unviable, and expansion of those early successful agricultural empires would further displace surrounding H/Gs and extinguish their knowledge... and because agricultural societies are "population machines" which capture outsiders by overwhelming force and facilitate surpluses to fuel fertility, they would have simply out-populated competing forms of life.
  • Is anxiety at the centre of agricultural society?
    Contrary to your point about walking away, though, the historical evidence we have actually shows that people running away from early states was a serious problem for these states. Malnourishment, epidemics, heavy taxes, slavery, wars, back-breaking and onerous labor, inequality, hierarchies, all of this stuff is what you find in states. People had no good reason to stay and so they frequently took flight. The prestige of a state was reflected not so much in how much land it had but in how big their population was. Cities built walls not just to keep enemies out, but to keep the residents in.

    And if you are raised in a state, you are basically domesticated and so you don't really know how else to live outside of the conditions of the state. If all you know is farming, then even if there are plenty of other resources available from different methods, you are out of luck.
    darthbarracuda

    Great point.

    It sounds like you think agriculture was a bad deal for the oppressed majority, but a good deal for the oppressive minority.

    But it must have been a good deal for the majority to begin with at least, right? Otherwise the whole thing wouldn't have gotten off the ground.

    So perhaps the story is something like: scarcity causes cooperative agriculturalisation (motivated by well-grounded anxiety about survival without it). But at some point, given the low and declining feasibility of exit, and the division of labour, dominance hierarchies take hold. This is self-reinforcing: political-material inequality locks in because the growing surplus is unevenly allocated according to the hierarchy, and this enables further coercion, which further increases inequality, etc.

    Does that sound right?
  • Is anxiety at the centre of agricultural society?
    Right, this exactly. Humans are adapted to an environment that doesn't exist anymore: the savanna of eastern Africa as it was during the Pleistocene for hundreds of thousands of years. Around 12,000 years ago that environment disappeared, and by all normal rights humans would have disappeared with it, except that we had the unique cognitive ability to figure out how to adapt our memes -- in the Dawkins sense of units of learned behavior -- instead of just adapting our genes. And now we are masters of pretty much every environment on the planet, most of which were are terribly adapted for on a genetic level, but we make up for it on the memetic level. That memetic adaptation being: think about all the ways that things could go wrong, and act to minimize them, even if everything is fine right now.Pfhorrest

    Nice.

    ↪darthbarracuda I think it's more that an agricultural lifestyle enabled state coercion than that state coercion created the agricultural lifestyle. If you didn't need agriculture, if you could just walk away from an abusive society and live comfortably in the wilderness with no loss to yourself, the state would be powerless over you. The state has to have something you need, and that's control of the capital you require to make a living. A true post-scarcity world would dissolve the impetus for capital and state alike, and likewise, a pre-scarcity world (like the Pleistocene environment we're adapted to) would have no impetus for them either. It's only when times get hard and people have to band together and figure out how to make the most out of scarce resources or else die that the strong men who can horde those resources to themselves unless you do what they say have any power.Pfhorrest

    I think I agree I just want to clarify something.

    Let's accept the hypothesis suggested by darthbarracuda that it was resource scarcity that caused humans to switch to the agricultural mode of production.

    Now I think more accurately we can say that it is scarcity plus agriculture that enables state coercion.

    So scarcity triggers agriculture out of necessity (as the alternative to starvation). Now, participation in agricultural society - with production and allocation controlled hierarchically - represents a relative improvement in prospects for survival for individuals. This changes the incentive structure, since exit from the community is now more costly (potentially fatal). So far, there is no need for coercion. Everyone can rationally accept the burdens of this new way of life.

    But as darthbarracuda says, there was (and is) coercion in post-agricultural society. Coercion involves force; when you coerce someone, you force them to do something against their will. So I suppose we must say that either (1) those who are coerced have an irrational will (because exit has lower expected utility), or (2) at some point, the deal for those at the bottom of the hierarchy got significantly worse.

    Regarding (2), this is most likely the result of enslavement. Once you have an agricultural society, it becomes rational (?) for those with power to capture, enslave and coerce those weaker than them to do the work at the 'bottom' of the division of labour (which did not exist much before, except along gender lines). This is because there is simply a lot more work to be done (simple hunter-gatherer societies could get by on around 15 hours a week), and now there is surplus to be collected.

    This suggests a further question, however. Why is it rational for those at the top to coerce those at the bottom? I think it is because this aforementioned surplus is something people now wish to acquire in greater and greater amounts. So it seems like the possibility of acquiring surplus triggered something like human greed... as though greed were a latent psychological inclination among humans that was waiting for the right conditions. I would suggest that such greed is closely related to risk aversion and anxiety. The background conditions of scarcity which compound such anxiety are then also the background conditions of greed - the perception of scarcity, whether or not it remains real, might then lie behind persisting greed and the coercion it inspires.
  • Is anxiety at the centre of agricultural society?
    Arguably, you’re talking about The Fall.Wayfarer

    I think that that kind of anxiety about the possibility of failure, the realization that everything won't necessarily be all right but could go horribly horribly wrong if we're not careful, is at the root of all philosophizing, in the broad sense of the quest for wisdom, where wisdom is the ability to discern true from false, good from bad, etc. And furthermore, that the realization of the need for such wisdom is the loss of "innocence" in the religious sense of the word.Pfhorrest

    That's an interesting connection you both make to the idea of the Fall. I'll say something about it in a moment.


    So to react to your original post, early agricultural states were not based on an anxiety about the future and risk-aversion. Complete dependence on agriculture increased the risk of starvation. There was no good reason for anyone working the fields to be doing that, apart from coercion by the state.darthbarracuda

    There seems to be a tension in what you say. On the one hand, you affirm the hypothesis that agriculture is spawned in moments of desperation. On the other, that it increased desperation.

    But I take the point that resource constraints were a big factor in motivating the agricultural way of life. This is actually consistent with the thought that it was about risk-aversion, but adds that such anxiety was well-grounded.

    Which takes me to the Fall. Other creatures in similar circumstances presumably would have moved on from the area and/or endured forced population reductions. Humans, with an extended range of foresight and ingenuity, decided to fight nature, but thereby increased their burdens, their anxiety, their suffering.
  • Are there any philosophical arguments against self-harm?
    Actually, definitely a violation of perfect duty to self.tim wood

    Yeah that's another arguable reading. Using the universal law formulation of the categorical imperative, you could argue that self-harm fails the contradiction in conception test in just the same way that suicide does: the principle that sustains life is being used to oppose it. In neither case do I find the argument compelling, however. Do I necessarily kill myself out of self-love? Do I necessarily harm myself out of self-love or something similar? If not, then it may be perfectly coherent to conceive of a world in which everyone follows a maxim of self-harm.
  • Are there any philosophical arguments against self-harm?
    Kantian - arguably a violation of one's imperfect duty to develop one's talents (or something similar)

    Utilitarian - taking actions which fail to generate the most utility. Depending on the case, I imagine that a longer, healthier lifespan would facilitate doing more good.

    But I am doubtful that there are 'categorical imperatives' of either kind. Only perfectly ordinary hypothetical imperatives of the form 'if you value X, you ought to do Y'.
  • How much do questions assume?
    Then your statement has the form of a question, but not the substance. You have to decide what a question is.tim wood

    I repeat: You're just reasserting your claim, not defending the existence of presuppositions in your example of a question.

    But I'll bite. How's this? A question presents a certain set of possibilities and asks its audience which is actual. In presenting those possibilities, it is not affirming that any of those possibilities are true. That is, it is not presupposing any answer. It is not itself a proposition. It is not an assertion. That is a different sort of speech act.

    The OP suggested that questions do contain propositions/assertions/'presuppositions'. I still haven't been convinced that they do. However, I can see how in order for the questioner to ask her question, she might have to assume certain things to be true (i.e., subscribe to certain propositions). But this seems different to the content of the question containing assumptions.
  • How much do questions assume?
    The asking of the question presupposes that some answer will satisfy the inquiry. But it does even more than that. It not only assumes that an appropriate response, such as "Trees exist", can be articulated by the interrogated, but that it sets itself apart from some other meaningful response, such as "Trees don't exist."Adam's Off Ox

    I think that's true, although it isn't obvious. You could argue that you don't need to presuppose that some answer will satisfy the inquiry. Maybe there are questions without answers.

    That seems wrong, though. I would argue that the content of a question is its set of possible answers. Therefore if the questioner knows the content of their question, they know what its possible answers are (and that they are distinct). They are seeking to discover which among those possible answers is true.

    But this is trivial.

    If I say "some apples are red", and I know what I mean when I say that (i.e., I know English), there is a trivial sense in which I assume that those words have those meanings. It seems the same with what you are saying regarding questions. It is required to reasonably ask a question (to understand it) that I assume that it has a certain meaningful content, and that this involves various possible answers.
  • How much do questions assume?
    The idea is that if you ask, you're presupposing. If you're not presupposing, then it's a statement in the form of question, but not really a question, therefore nonsense. And not really a question because you're not really presupposing anything; i.e., the question is not about anything.tim wood

    Yes. You presented an example of a putatively genuine question and suggested that, as such, it contained presuppositions. I argued that it didn't contain any such presuppositions, and was therefore in fact a counterexample to your claim. You're just reasserting your claim, not defending the existence of presuppositions in the example.
  • Black Lives Matter-What does it mean and why do so many people continue to have a problem with it?
    Is it willful ignorance or an individual attempt to merely not acknowledge the issues of people in the black community?Anaxagoras

    I think there's a bunch of stuff going on. We find contrarians in the IDW looking for an opportunity to oppose 'woke dogmas' and get martyred by the inevitable backlash they'll get online. We find conservatives who associate BLM with Marxism and generally oppose any large-scale institutional and cultural reform. Then there's ignorance and self-absorption, which can be explained in terms of a culpable failure to develop a loving gaze (as Murdoch might have put it) as well as a relatively innocent lack of exposure to relevant statistics, history and experience.

    Take Eric Weinstein: https://www.youtube.com/watch?v=ETcq7qqPhow . He seems well-meaning in some ways, but I find his reaction perplexing. He seems to think that the BLM movement is a superficial symptom of some broader, more universal psycho-social disease, and that this 'chaos' could have erupted at any number of weak points in our flagging system.

    Even if it is true that 'the conditions were ripe for some kind of explosion', that doesn't reduce the legitimacy of the concerns being voiced, limit their urgency, or imply that those concerns don't have unique motivations (i.e., imply that there aren't race-specific issues). What does he think should happen? That we campaign for something universal but hopelessly nebulous? Or don't campaign for anything at all, and perhaps simply work on our own individual spiritual condition?

    Maybe I'm not being fair to his analysis; I haven't gone into it deeply. I was just recently exposed to his rants and they've been floating around my head. I'm not quite sure what to make of him, to be honest. He doesn't seem disingenuous, but if you zoom out, it's hard for me not to see his reaction as insufficiently sensitive to the injustices here.