• Bret Bernhoft
    218
    There is much talk afoot of science being to blame for today's woes. But it is important to remember that science and technology are simply tools. The focus should instead be on a user's intent.

    Even further, many bodies (some with great power) aim to "put the genie back into the bottle" regarding what science has brought out into the world. The other side of this spectrum argues that science is our only hope, and that through science and technology, humanity will save itself from extinction.

    What are your thoughts about this issue?
  • Tom Storm
    8.4k
    There is much talk afoot of science being to blame for today's woes. (Bret Bernhoft)

    That's not something I have heard to be honest and it is unclear what you mean by 'today's woes' - many of which seem cultural and political, not scientific as such.

    It's pretty obvious that we need some technological solutions to problems created by technology (pollution for instance). But I suspect it is capitalism and the market economy that is responsible for many ills, not just those brought on by science, but also those brought on by manufacturing, marketing and media.

    But really, the question is what do you consider to count as science? Do you include cars, medicine, computers, clothing, airplanes, mobile phones, x-rays...? Most things that humans do and build have a shadow side, whether it be damming a river or putting through a highway.
  • Agent Smith
    9.5k
    Indeed. From a book: The nexus between royalty, science, military, and traders (the Imperialist Quartet) is an old story. Go against science and you might land up getting hanged, drawn, and quartered (for treason).
  • Miller
    158
    Therefore to resist the march of science (Bret Bernhoft)

    Technology is slowly evolving through us.

    We can't make it go faster or slower.
  • EnPassant
    665
    The problem is not science, it is the abuse of scientific knowledge, among other things. But scientism probably exacerbates anti-science sentiment because it blurs the distinction between science and non-science.
  • SophistiCat
    2.2k
    Well, both positions, as stated, are stupid caricatures. Or the first one is a caricature; the second is just stupid.
  • Agent Smith
    9.5k
    Antiscience: The Tenth Man Rule!

    What you believe is light is actually darkness!
  • Pantagruel
    3.3k
    It isn't so much that science is to blame for today's woes as that people try to use science in lieu of traditional normative institutions; for which it is, unfortunately, a poor substitute.
  • Bylaw
    547
    I think this is true, and I would add that there is a conflation between science, technology, and all the processes that determine which technologies get developed and which do not. Third, I would say that naivety about how much industry controls the results of its research AND the regulatory bodies that are supposed to monitor industry is also causing fundamental problems.
  • Raymond
    815
    The problem is not science, it is the abuse of scientific knowledge, among other things. But scientism probably exacerbates anti-science sentiment because it blurs the distinction between science and non-science. (EnPassant)

    "Scientism is the view that science is the best or only objective means by which society should determine normative and epistemological values. While the term was originally defined to mean "methods and attitudes typical of or attributed to the natural scientist", some religious scholars (and subsequently many others) adopted it as a pejorative with the meaning "an exaggerated trust in the efficacy of the methods of natural science applied to all areas of investigation (as in philos"

    How does this differ from what our scientists think? Don't most scientists think so too? Isn't this even put into practice in modern society?
  • Raymond
    815
    But really, the question is what do you consider to count as science? (Tom Storm)

    Why is this the question? The question is whether it's a bad thing to be against it. If you are a scientist then the answer seems obvious: yes. You would be guilty of treason if you said "no".
    The more important question is: should it constitutionally and institutionally be made a measure for all of us?
  • EnPassant
    665
    Scientism is the view that science is the best or only objective means by which society should determine normative and epistemological values. (Raymond)

    I think in modern times the line between hypothesis and scientific knowledge is being blurred. The theory of evolution is often presented as done and dusted, with only some loose ends to be tied up. The reality is, there are many gaping holes in it. (The theory is right in some aspects, but it is far less complete than a lot of science writers pretend.) Another problem is the gene-of-the-gaps theory: all kinds of things are routinely explained away with vague references to genes. As a result, people are wont to say things like "Cricket goes way back in our family, it is in our genes." Likewise with the 'alcoholic gene'. That was very popular some years back but has gone out of fashion now. The line is being blurred between science and scientism, and this is not a good development.
  • Raymond
    815


    Yeah, I think you are right here. Scientific knowledge is often presented in better shape than it actually is, especially complicated stuff like what you refer to. Evolution once dealt with organisms. After DNA was isolated and examined, the story of evolution was projected onto genes, leading to a picture of a battle between the strands (of DNA), to which organisms are attached like puppets on a string. I once saw an image like that, an artist's impression, accompanying an article on evolution, and I wondered if the artist took this view seriously or if he was criticizing it. DNA even got turned into a selfish macromolecule. And you already mentioned cricket and alcoholism. Features like intelligence, nastiness, criminality, love, you name it, are projected onto it without a further thought given.
    The truth, though, is that this just can't be done. There simply are no selfish and dumb genes with a criminal attitude.
    It is the streamlined versions of theories that reach the public. Maybe this gives rise to scientism, whose proponents can torment and torture the ideas even further to fit their scheme, I don't know. I'm not a "scientismist(ator?)".
  • RolandTyme
    53
    Treason?! That's what I would commit against Her Majesty and her government, here in the UK.

    I wouldn't normally do this, but as this is a philosophy forum, I think this kind of pedantry is acceptable - treason isn't the correct word here, and you're verging into rhetoric, not philosophy, by its use.
  • Bret Bernhoft
    218
    I don't think being anti-science is treasonous, but I do think that it is the incorrect path for our species.
  • Bret Bernhoft
    218
    I think that everyone benefits from using technology.

    The image below is a word cloud featuring the three thousand most common two-word phrases within recent text posts from roughly three dozen religious subreddits. While I see cries for help, I also see evidence of people's strength. And I wouldn't have had this perspective without technology.

    Which is another reason why I'm a techno-optimist.

    [Image: top-3000-two-word-phrases.png]

    As "evidence" of my techno-optimism, here is a word cloud featuring the three thousand most popular two-word phrases present in my comments on this forum.

    [Image: two-word-phrases-in-comments.png]
  • Joshs
    5.3k


    Which is another reason why I'm a techno-optimist. (Bret Bernhoft)

    Are you supportive of Marc Andreessen's techno-optimist manifesto?

    https://a16z.com/the-techno-optimist-manifesto/

    Or do you agree with this critique of Andreessen?

    https://www.nytimes.com/2023/10/28/opinion/marc-andreessen-manifesto-techno-optimism.html
  • Bret Bernhoft
    218


    I do agree with Marc Andreessen's "The Techno-Optimist Manifesto", in that at least he's throwing his clout behind embodying a daydreamer. We need those kinds of individuals right now.

    But I was unable to review the critique, as I do not have a NYT subscription. And there is a paywall in front of the article.
  • Joshs
    5.3k


    But I was unable to review the critique, as I do not have a NYT subscription. And there is a paywall in front of the article. (Bret Bernhoft)

    A Tech Overlord’s Horrifying, Silly Vision for Who Should Rule the World:

    It takes a certain kind of person to write grandiose manifestoes for public consumption, unafflicted by self-doubt or denuded of self-interest. The latest example is Marc Andreessen, a co-founder of the top-tier venture capital firm Andreessen Horowitz and best known, to those of us who came of age before TikTok, as a co-founder of the pioneering internet browser Netscape. In “The Techno-Optimist Manifesto,” a recent 5,000-plus-word post on the Andreessen Horowitz website, Mr. Andreessen outlines a vision of technologists as the authors of a future in which the “techno-capital machine” produces everything that is good in the world.

    In this vision, wealthy technologists are not just leaders of their business but keepers of the social order, unencumbered by what Mr. Andreessen labels “enemies”: social responsibility, trust and safety, tech ethics, to name a few. As for the rest of us — the unwashed masses, people who have either “unskilled” jobs or useless liberal arts degrees or both — we exist mostly as automatons whose entire value is measured in productivity.

    The vision has attracted a good deal of controversy. But the real problem with Mr. Andreessen’s manifesto may be not that it’s too outlandish, but that it’s too on-the-nose. Because in a very real and consequential sense, this view is already enshrined in our culture. Major tent-poles of public policy support it. You can see it in the work requirements associated with public assistance, which imply that people’s primary value is their labor and that refusal or inability to contribute is fundamentally antisocial. You can see it in the way we valorize the C.E.O.s of “unicorn” companies who have expanded their wealth far beyond what could possibly be justified by their individual contributions. And the way we regard that wealth as a product of good decision-making and righteous hard work, no matter how many billions of dollars of investors’ money they may have vaporized, how many other people contributed to their success or how much government money subsidized it. In the case of ordinary individuals, however, debt is regarded as not just a financial failure but a moral one. (If you are successful and have paid your student loans off, taking them out in the first place was a good decision. If you haven’t and can’t, you were irresponsible and the government should not enable your freeloading.)

    Would-be corporate monarchs, having consolidated power even beyond their vast riches, have already persuaded much of the rest of the population to more or less go along with it.


    As a piece of writing, the rambling and often contradictory manifesto has the pathos of the Unabomber manifesto but lacks the ideological coherency. It rails against centralized systems of government (communism in particular, though it’s unclear where Mr. Andreessen may have ever encountered communism in his decades of living and working in Silicon Valley) while advocating that technologists do the central planning and govern the future of humanity. Its very first line is “We are being lied to,” followed by a litany of grievances, but further on it expresses disdain for “victim mentality.”

    It would be easy to dismiss this kind of thing as just Mr. Andreessen’s predictable self-interest, but it’s more than that. He articulates (albeit in a refrigerator magnet poetry kind of way) a strain of nihilism that has gained traction among tech elites, and reveals much of how they think about their few remaining responsibilities to society.

    Neoreactionary thought contends that the world would operate much better in the hands of a few tech-savvy elites in a quasi-feudal system. Mr. Andreessen, through this lens, believes that advancing technology is the most virtuous thing one can do. This strain of thinking is disdainful of democracy and opposes institutions (a free press, for example) that bolster it. It despises egalitarianism and views oppression of marginalized groups as a problem of their own making. It argues for an extreme acceleration of technological advancement regardless of consequences, in a way that makes “move fast and break things” seem modest.

    If this all sounds creepy and far-right in nature, it is. Mr. Andreessen claims to be against authoritarianism, but really, it’s a matter of choosing the authoritarian — and the neoreactionary authoritarian of choice is a C.E.O. who operates as king. (One high-profile neoreactionary, Curtis Yarvin, nominated Steve Jobs to rule California.)

    There’s probably a German word to describe the unique combination of horrifying and silly that this vision evokes, but it is taken seriously by people who imagine themselves potential Chief Executive Authoritarians, or at the very least proxies. This includes another Silicon Valley billionaire, Peter Thiel, who has funded some of Mr. Yarvin’s work and once wrote that he believed democracy and freedom were incompatible.

    It’s easy enough to see how this vision might appeal to people like Mr. Andreessen and Mr. Thiel. But how did they sell so many other people on it? By pretending that for all their wealth and influence, they are not the real elites.

    When Mr. Andreessen says “we” are being lied to, he includes himself, and when he names the liars, they’re those in “the ivory tower, the know-it-all credentialed expert worldview,” who are “disconnected from the real world, delusional, unelected, and unaccountable — playing God with everyone else’s lives, with total insulation from the consequences.”

    His depiction of academics of course sounds a lot like — well, like tech overlords, who are often insulated from the real-world consequences of their inventions, including but not limited to promoting disinformation, facilitating fraud and enabling genocidal regimes.

    It’s an old trick and a good one. When Donald Trump, an Ivy-educated New York billionaire, positions himself against American elites, with their fancy educations and coastal palaces, his supporters overlook the fact that he embodies what he claims to oppose. “We are told that technology takes our jobs,” Mr. Andreessen writes, “reduces our wages, increases inequality, threatens our health, ruins the environment, degrades our society, corrupts our children, impairs our humanity, threatens our future, and is ever on the verge of ruining everything.” Who is doing the telling here, and who is being told? It’s not technology (a term so broad it encompasses almost everything) that’s reducing wages and increasing inequality — it’s the ultrawealthy, people like Mr. Andreessen.

    It’s important not to be fooled by this deflection, or what Elon Musk does when he posts childish memes to X to demonstrate that he’s railing against the establishment he in fact belongs to. The argument for total acceleration of technological development is not about optimism, except in the sense that the Andreessens and Thiels and Musks are certain that they will succeed. It’s pessimism about democracy — and ultimately, humanity.

    In a darker, perhaps sadder sense, the neoreactionary project suggests that the billionaire classes of Silicon Valley are frustrated that they cannot just accelerate their way into the future, one in which they can become human/technological hybrids and live forever in a colony on Mars. In pursuit of this accelerated post-Singularity future, any harm they’ve done to the planet or to other people is necessary collateral damage. It’s the delusion of people who’ve been able to buy their way out of everything uncomfortable, inconvenient or painful, and don’t accept the fact that they cannot buy their way out of death.
  • Bret Bernhoft
    218
    I do agree with many parts of the text you have shared. There are some important comparisons and points made inside the article.
  • javra
    2.4k


    [...] In pursuit of this accelerated post-Singularity future, any harm they’ve done to the planet or to other people is necessary collateral damage. It’s the delusion of people who’ve been able to buy their way out of everything uncomfortable, inconvenient or painful, and don’t accept the fact that they cannot buy their way out of death. (Joshs)

    Thanks for that! :up: The whole article reverberates quite well with me.

    As what I find to be a somewhat humorous apropos to what's here quoted:

    One’s death - irrespective of what one assumes one’s corporeal death to this world to imply ontologically - can be ultimately understood to be the “obliteration of one’s ego” (whether one then no longer is or else continues being, this being only a possible appended issue). Taxes, on the other hand - something that many, especially those who are rich, are also morbidly averse to - are, when symbolically addressed, “one’s contribution to the welfare of a community/ecosystem/whole without which the community/ecosystem/whole would crumble” (be it a tyranny, a democracy, or any other politics when it comes to the monetary contribution of taxes per se).

    At least when thus abstractly understood, I can jive with Franklin in that death and taxes, as much as they might be disliked, are sooner or later both certainties for individuals partaking of a societal life and, hence, for humans - transhumanist or otherwise.

    Not an argument I'm gonna defend. Just an opinionated observation to be taken with a grain of salt.
  • Count Timothy von Icarus
    2k


    I think it's interesting that you've decided to frame this in terms of productivity. What exactly are we producing and why is it worthwhile to produce?

    I generally agree that advances in technology are important for human flourishing. In an important way, technology enhances our causal powers, makes us “free to do” things we could not before. So too, the attainment of knowledge presupposes a sort of transcendence, a move to go beyond current beliefs and desires into that which lies beyond us.

    In The Republic, Plato leaves most of society in the cave precisely because ancient society required that most people spend most of their time laboring to produce the prerequisites for life. Technology at least opens up the possibility of more people being free to ascend out of the cave.

    However, we can consider whether the futurist vision of Brave New World is a utopia or a dystopia. It seems that by technocratic standards that focus solely on consumption, production, and the ability of the individual to “do what they desire,” it must be the former.

    Most people in that world are happy. They are free to fulfill their desires. They are bred for their roles, their cognitive abilities intentionally impaired if they are to be menial laborers. There is ubiquitous soma, a drug producing pleasure, mass media entertainment, always enough to eat and drink, and organized sexual release. When there are those who don’t fit into this scheme, those with the souls of scientists and artists, they get secreted off to their own version of Galt’s Gulch where all the creative and intellectual people can pursue their own ends, consuming as much as they want.

    What exactly is lost here? The world of Brave New World seems like a technocratic paradise. Here is the vision of unity found in Plato’s Republic and Hegel’s Philosophy of Right, but it ends up coming out looking all wrong, malignant.

    What seems wrong to me is the total tyranny of the universal over the particular. People are made content, but not free; they consume, but don't flourish. There is an important distinction to be made between freedom to fulfill desire and freedom over desire. Self-determination requires knowing why one acts, what moves one. It is a relative mastery over desire, instinct, and circumstance, not merely the ability to sate desire.

    We never get a good view of the leaders of Brave New World. We must assume they are ascended philosopher kings, since they don’t seem to abuse their power and work unerringly towards the unity of the system. But their relationship to the rest of humanity seems like that between a dog trainer and the dogs. They might make the dogs happy and teach them how to get on in the role they have dictated for them, but that’s it.

    And this is why I find no reason to be optimistic about the advances of technology in and of themselves. If all sense of virtue is hollowed out, if freedom is cheapened into a concept of mere “freedom from external constraint,” and productivity and wealth become the metric by which the good is judged, it’s unclear how much technology does for us. It seems capable of merely fostering a high-level equilibrium trap of high consumption and low development.
  • Bret Bernhoft
    218


    My essential point is that advances in technology are inherently good. We can, in seconds, accomplish what would have otherwise taken countless hours, such as analyzing 86,000+ lines of text about Norse Paganism, which a simple Python script that I wrote can do.

    Here's a word cloud from that analysis:

    [Image: 2000-top-two-word-phrases-in-multiple-norse-paganism-texts-low-res.jpg]
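    The original script isn't shown in the thread, but a minimal sketch of that kind of two-word-phrase (bigram) analysis in Python might look like the following. The file name, the 2,000-phrase cutoff, and the use of the third-party wordcloud package are illustrative assumptions, not details taken from the actual script.

        # Minimal sketch of a bigram count plus word cloud.
        # Assumptions: a plain-text source file and the "wordcloud" package
        # (pip install wordcloud); neither is confirmed by the original post.
        import re
        from collections import Counter

        from wordcloud import WordCloud

        with open("norse_paganism.txt", encoding="utf-8") as f:
            text = f.read().lower()

        # Tokenize into words, then count adjacent pairs (bigrams).
        words = re.findall(r"[a-z']+", text)
        bigrams = Counter(zip(words, words[1:]))

        # Keep the most common two-word phrases (2,000 here, purely as an example).
        top_phrases = {" ".join(pair): count for pair, count in bigrams.most_common(2000)}

        # Render the frequencies as a word-cloud image.
        cloud = WordCloud(width=1600, height=900).generate_from_frequencies(top_phrases)
        cloud.to_file("top-two-word-phrases.png")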
  • Count Timothy von Icarus
    2k


    My essential point is that advances in technology are inherently good. (Bret Bernhoft)

    That's what I thought. But what makes this good? Is something or someone's “being good” identical with “advancing technology”? If so, then can we say that a human life is worthwhile and that a person is a “good person” to the extent that they contribute towards advancing technology?

    Or would it be more appropriate to say that advancing technology is good in virtue of something else? It's obviously much more common to argue the latter here, and the most common argument is that "technological progress is good because it allows for greater productivity and higher levels of consumption."

    Yet this just leads to a regress. Are productivity and consumption equivalent with the good? Is a person a good person, living a worthwhile and meaningful life, to the extent they are productive and consume much? This does not seem to be the case. We look to productivity and consumption largely because these are easy to quantify, and welfare economics argues that these are good proxies for “utility,” i.e. subjective well-being.

    This brings out further questions. First, are productivity and consumption perfect proxies for utility, or do they only tend to go with it? It seems it could be the latter, as self-reported happiness and well-being are tanking in the US even as that nation crushes all other developed countries in consumption growth and productivity gains. It is not clear that technology always increases utility.

    The assumption that it does has led to absolutely massive inequality between generations in the West generally, and the US in particular, such that Baby Boomers and those older hold a phenomenal amount of total wealth and political offices while also passing on a $33 trillion debt to their children and representing another $80+ trillion in entitlement liabilities that were not funded during their working years. We tend to discount investment in children and young adults because we are techno-optimists. We assume that because the life of the average American in 1950 was much better than that of one in 1850, the lives of Americans in 2050 must be vastly more comfortable than those of Americans in 1950. This seems like an increasingly hard assumption to defend. Life expectancy and subjective well-being are declining year over year, and have been for a while. It has been over half a century since median real wages stagnated and productivity growth became totally divorced from median income growth (something true across the West; productivity gains only track with real wage growth for the bottom 90% in developing nations, and this trend is almost 60 years old now).

    Second, it begs the question: “is utility good in itself?” Is being a good person and living a worthwhile life equivalent with levels of subjective well-being? This is not automatically clear. It would seem to imply that wealthy hedonists live far more worthwhile lives than martyrs and saints. I would tend to agree with MacIntyre’s assessment that moderns have a hard time even deciding what makes a life good because the concepts of practices and the virtues have been eroded.


    My point is that all these questions should make us ask why exactly we think technological progress is good, and whether it is always good. If it isn't always good, how do we engage in the sort of progress that is good?

    We can, in seconds, accomplish what would have otherwise taken countless hours, such as analyzing 86,000+ lines of text about Norse Paganism, which a simple Python script that I wrote can do. (Bret Bernhoft)

    Sure. I used to focus on technical skills for my staff because one analyst who knew SQL, M, and DAX could do the work of 10 analysts with only basic Excel knowledge. But in virtue of what is making a word map about Norse Paganism good? It doesn't seem to be the case that doing all things more efficiently is good. We can produce car batteries with far fewer man-hours today, but it hardly seems like it would be a good thing to produce 10 times as many batteries if we don't need 10 times as many cars, or if we're going to hurl those batteries into our rivers and poison our water table. The goodness stands outside the production function.
  • Pantagruel
    3.3k



    At what point does an advance become inherently good? For example, it has been shown that AI systems propagate the inherent biases of the developers who select the data used to train the neural networks. Most recently, there was the example of the AI system designed to evaluate human emotion from faces that identified an inordinate number of black people as angry. So, for all intents and purposes, AI systems are mechanisms for perpetuating biases in the guise of science. How is that inherently good?

    The proliferation of digital technologies has fundamentally altered the way that people assimilate and utilize data. There is an apparent correlation between the rise of technology and the decline of IQ. Digital communication is quickly replacing personal communication. But digital communication is a poor substitute. People act differently behind a veil of full or partial anonymity. They are more aggressive, more critical. I'm trained as a coder and worked twenty-five years as a systems administrator and a systems engineer. I use my phone less than five minutes a week and plan to keep it that way.

    No, technology is not inherently good. Nothing is inherently good. The use that people make of something, that is what is good or bad.
  • Bret Bernhoft
    218
    I am also a techno-animist, so to me the technology goes beyond being only a tool. More like a gateway.

    For context, here are the two hundred most commonly used two-word phrases in the Christian Bible.

    [Image: 200-top-two-word-phrases-from-bible.jpg]
  • Bret Bernhoft
    218
    Another way of saying what this thread is all about is to state that (in my opinion, experience and observation) atheist animism is the default worldview of humanity.

    Or you could refer to it as "animistic atheism". And that would be legitimate too.
  • Lionino
    1.5k
    Most recently, there was the example of the AI system designed to evaluate human emotion from faces that identified an inordinate number of black people as angry. (Pantagruel)

    I understand this is simply an illustrative example for your sensible point, but on this particular point programmer bias is likely not the reason. If the AI detects them as angry, there must be a reason why; surely the programmer did not hard-code "if race == black{emotion=angry}". It could be that the black people from the sample indeed have angrier faces than other races for whatever reason, but it could also be that the data that was fed to the AI has angry people as mostly black — though a Google query for "angry person" shows almost only whites.
  • Pantagruel
    3.3k
    I understand this is simply an illustrative example for your sensible point, but on this particular point programmer bias is likely not the reason. If the AI detects them as angry, there must be a reason why; surely the programmer did not hard-code "if race == black{emotion=angry}". It could be that the black people from the sample indeed have angrier faces than other races for whatever reason, but it could also be that the data that was fed to the AI has angry people as mostly black — though a Google query for "angry person" shows almost only whites. (Lionino)

    Neural networks work through pattern identification, basically. However, there is always a known input; it is the responses to known inputs against which the backpropagation of error corrects. In the example I gave, each image in the original training dataset had to be identified as representing "joy", "surprise", "anger", etc. And the categorizations of the images of black people were found to reflect the selection bias of the developers (who did the categorizing). All AI systems are prone to the embedding of such prejudices. It is inherent in their being designed for certain purposes that specific types of preferences dominate (a.k.a. biases).
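    To make that concrete, here is a minimal, purely hypothetical sketch (Python with numpy and scikit-learn) of how biased labels alone can produce the behaviour described above. The groups, the toy features, and the 30% bias rate are invented for illustration and have nothing to do with the actual system being discussed.

        # Hypothetical illustration: a classifier reproduces whatever its annotators
        # encoded, even though the code contains no explicit rule about any group.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 1000

        group = rng.integers(0, 2, n)          # two fictional groups, 0 and 1
        expression = rng.normal(size=(n, 5))   # stand-in for facial-expression features
        truly_angry = expression[:, 0] > 0.5   # same "true" anger rate in both groups

        # Simulated annotator bias: group 1 faces get labeled "angry" 30% of the
        # time even when they are not.
        labels = truly_angry | ((group == 1) & (rng.random(n) < 0.3))

        # The model sees only features (one of which correlates with group
        # membership) and the biased labels.
        X = np.column_stack([expression, group])
        model = LogisticRegression().fit(X, labels)

        pred = model.predict(X)
        print("predicted 'angry' rate, group 0:", round(pred[group == 0].mean(), 2))
        print("predicted 'angry' rate, group 1:", round(pred[group == 1].mean(), 2))
        # Group 1 is flagged as angry far more often, mirroring the labels, not reality.

    Nothing in the sketch names a group as angry; the labeling choices alone are what the model ends up learning, which is the point about selection bias made above.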
  • Lionino
    1.5k
    In the example I gave, each image in the original training dataset had to be identified as representing "joy", "surprise", "anger", etc. And the categorizations of the images of black people were found to reflect the selection bias of the developers (who did the categorizing). (Pantagruel)

    Did they? AI models typically use thousands or millions of data points for their algorithms. Did the programmers categorise all of them?