• Christoffer
    2.1k
    In 2015, China rolled out a pilot system for encouraging good behavior among the citizens of selected test communities. It has now been four years, and some observations can be made, as seen in the video below.


    If we set aside the fact that the communist regime most probably took the "charity" money, and that its corruption causes further problems for the system, would such a system work in a functioning democracy?

    The biggest questions to be answered are: How does a social credit system react when you protest something? What happens if society needs something in order to progress that won't happen without challenging the established system?

    In a democracy, these things are essential for society to progress. And what happens when you reach a high credit score and reap the benefits? What happens if you stop doing good deeds once you already have the essentials that the high score gave you?

    This would probably (and in an ironic turn for a communist regime) bring about a "master/slave dialectic" scenario. Those with low credit would do as much as they can to raise it, which in itself is a positive push the credit system provides. But those who have already reached high credit would stop seeing any benefit in doing good deeds; they already reap the rewards and can easily rely on the low-credit people to slave under them. That staggers their momentum, while the low-credit people turn themselves into people who automatically act out of goodness and therefore most likely catch up with the high-credit masters. The only thing that might prevent this is that the low-credit people would have fewer tools to advance, while the high-credit people would have the technology to advance even further.

    But what if we change it into a rubber-band system, in which scores slowly tick back toward a baseline of 1000 points? If you drop lower and lose benefits, you can either do good deeds to return to 1000 and continue upward, or simply wait it out over a long period and return to 1000. This would encourage good deeds, while the waiting time would also work as a punishment for the bad deeds you have done. And those with a high credit score can't lean back and reap the benefits for too long either, since their score also ticks back toward 1000. The only way to keep a higher credit is to keep doing good deeds, and one way of doing so is to help low-credit people get back to 1000 by giving them opportunities to raise their credit.
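    The rubber-band mechanic can be sketched in a few lines of code. The baseline of 1000 comes from the post itself; the decay rate, function names, and numbers are my own illustrative assumptions, not a proposal for real parameters:

    ```python
    # Hypothetical sketch of the rubber-band score: every tick pulls the
    # score a fixed fraction of the way back toward the 1000-point baseline,
    # so both penalties and privileges fade unless renewed by actual deeds.

    BASELINE = 1000.0   # the "normality" every score drifts back toward
    DECAY_RATE = 0.05   # fraction of the gap recovered per tick (assumed)

    def tick(score: float) -> float:
        """Pull the score 5% of the way back toward the baseline."""
        return score + DECAY_RATE * (BASELINE - score)

    def apply_deed(score: float, value: float) -> float:
        """Good deeds carry a positive value, harmful acts a negative one."""
        return score + value

    score = apply_deed(BASELINE, -200.0)  # someone penalized down to 800
    for _ in range(30):
        score = tick(score)
    # after 30 ticks the score has drifted most of the way back toward 1000
    ```

    The same decay applies symmetrically above the baseline, which is what prevents high scorers from coasting: a score of 1200 shrinks toward 1000 at the same rate a score of 800 grows toward it.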

    While it's easy to dismiss this system as totalitarian or as reducing people's freedom, is there a way to have such a system without it being the invasion of freedom it appears to be in China?

    The biggest question I get out of this is: what good is a moral system if the consequences and benefits of being morally good are vague and not guaranteed to give you anything back? A moral system cannot work if all it does is benefit other people while you suffer, or if it benefits only you and not others. There has to be a balance that benefits both you and others when you do good deeds and punishes you when you cause harm, without being enforced and decided by other humans.

    To turn an eye toward the future and the possibility of such a system: imagine an A.I. that tracks all the basic acts people do and monitors their mental health while doing so, in order to evaluate how those acts affect both the individual and others. It then maintains a credit score that works like the rubber-band version above. The moral system used as a blueprint for valuing these scores is whatever that society holds as its common moral system; if laws change, or the general ideas of morality change, the system changes with them, and pushing for such a change doesn't affect credit scores. Our discussions, opinions, etc. remain free to question the status quo; only actual acts change credit scores.
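    The two design constraints in that paragraph — the moral code is a replaceable rule set, and only acts (never speech or opinion) move a score — can be sketched as follows. Everything here is hypothetical illustration; the class, act names, and point values are invented for the example:

    ```python
    # Sketch of the configurable scoring idea: society's current moral code
    # is just a rule table mapping acts to score changes. Speech, opinions,
    # and advocacy are deliberately absent from the table, so questioning
    # the status quo can never move a score.

    from dataclasses import dataclass, field

    @dataclass
    class CreditSystem:
        rules: dict[str, float]                         # act -> score change
        scores: dict[str, float] = field(default_factory=dict)

        def record_act(self, person: str, act: str) -> None:
            """Only acts listed in the current rule table affect scores."""
            if act in self.rules:
                start = self.scores.get(person, 1000.0)  # baseline from the OP
                self.scores[person] = start + self.rules[act]

        def update_rules(self, new_rules: dict[str, float]) -> None:
            """When laws or common morality change, swap the table;
            past scores are untouched, only future acts are re-valued."""
            self.rules = new_rules

    system = CreditSystem(rules={"volunteer": +5.0, "fraud": -50.0})
    system.record_act("alice", "volunteer")             # alice gains 5 points
    system.record_act("alice", "criticize_government")  # no rule, no change
    ```

    In a full version, `update_rules` is where democratic change would enter the system, and the scores themselves would additionally decay per the rubber-band scheme above.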

    Would such a system be able to positively force people to act better? What would be the drawbacks? And how is it really different from the flawed, abstract social credit score we already have in today's free democracies?
  • Baden
    16.3k


    Nice OP with many interesting issues raised :up: Just to make the observation that the ultimate point of the social credit system in my view is not to force people to behave this way and that against their will, but to change their will so that eventually they think they're behaving of their own volition in accordance with the system. Which is what's so pernicious about it. A social subject creation mechanism in effect. So, I'm sure the Chinese will be delighted with the impression that it's just a transparent attempt at oppression that people will surreptitiously oppose by trying to game the system rather than being a system that will become so transparent it will game them.
  • Christoffer
    2.1k
    A social subject creation mechanism in effect. So, I'm sure the Chinese will be delighted with the impression that it's just a transparent attempt at oppression that people will surreptitiously oppose by trying to game the system rather than being a system that will become so transparent it will game them.
    Baden

    This is why I think the Chinese system, especially since it exists within a highly corrupt and totalitarian political system, won't work at all.

    But in a free democracy, the A.I. system might actually work. It may sound like science fiction, but if pushed for, it could be implemented within a few years once the technology is ready. The big problem, though, is that it's hard to quantify the drawbacks of having such a system within a free democracy. Where is the line between progressive disobedience and morally bad acts? The A.I. would need to somehow calculate the benefits of such disobedience and decide whether it serves something good or something bad according to morals not present in the current system. It needs, for example, to understand that disobedience promoting a racist society is harmful, while disobedience promoting a change in health care is good for the people. Adding parameters like this might be easy for an A.I. to adapt to, but it's hard for us to find the right balance.

    Outside of the A.I. example, what might a real-world implementation of a rubber-band score system look like? Is there a way to actually have this in a free democracy without it becoming totalitarian like China's system, or like the Black Mirror episode "Nosedive"? I think there might be, but it would need a serious overhaul compared to those two extremes in order to keep society free and healthy.
