Many people consider online forums and social networks unsafe, hostile places and don’t believe they can have a civil, respectful conversation there.
Online abuse is not only a problem for ordinary users; it is a genuine commercial threat for big corporate players such as Twitter, Instagram and online publishers.
They rely on people spending more and more time online, but few users, and fewer advertisers, enjoy being among furious people spoiling for a fight.
Censorship, however, risks creating bland, beige online spaces where free speech goes to die.
The idea of computers judging offensiveness does not really work outside the movies.
Algorithms that detect threatening patterns of speech struggle with sarcasm, irony and the sheer range of human annoyance.
Another approach is to build a system that nudges users towards constructive criticism and harmony rather than negativity and bullying.
In support of this idea, one forum ran an experiment: before posting a comment, users had to rate two randomly selected comments from others for quality of argument and civility, and could then rewrite their own comment if they wished.
As a result, personal attacks, name-calling and abuse disappeared.
This may not deter hardened trolls, but it evokes the sense of social inhibition we feel in real life when asked to speak before an audience.
Another example is the rating system in Uber’s app, which asks passengers and drivers to give each other star ratings.
The company doesn’t spell out how the ratings are used, yet passengers go to surprising lengths to keep a good rating without really understanding why it matters.
But invoking the sense of being watched isn’t the only way platforms subliminally encourage social behaviour.
A few years ago, Facebook managers noticed a rush of complaints from users about friends posting photos of them that they didn’t like.
These complaints were invariably rejected because no rules had been broken.
Managers tried saying, “Why don’t you just message the person?”, but people didn’t quite know what to say.
So complainants were given a template message to send to their friend, explaining how the picture made them feel and asking politely for its removal.
It is a classic example of how humans learn by imitation: don’t explain why, just show them how.
A recent plan at Facebook is to undermine jihadi propaganda through “organized niceness”, as was done with one German neo-Nazi group, which was swamped by messages of inclusivity and tolerance.
It may be hard to imagine this working, but simply shutting down offensive accounts, while necessary, is not enough.
Why not mobilize the vast majority of reasonable human beings to marginalize what is really a tiny but disproportionately noisy minority of extremists?
The question has not been studied enough, but it seems clear that if extremists seek to spread fear and shock, counterspeech might aim to make them look small and ridiculous through humour and warmth.
Interestingly, people who have just joined Twitter or Facebook usually look at what others have written, working out “What should I say?”, before posting their own comments.
The infamous “broken windows” thesis operates online as in any other environment, only much faster: a small incident quickly creates the impression that anything goes, and encourages more serious problems.