New year, new internet? Why it’s time to rethink anonymity on social media – David Babbs

January 2020 sees two significant steps towards improved UK regulation of social media companies. The government confirmed in the December Queen’s Speech that it would legislate to tackle “online harms”, and is now expected to provide further detail of how it will take this forward. Meanwhile, in the House of Lords, an Online Harm Reduction Bill was tabled on 14 January.

It’s a good way to start 2020. A decade ago it was easy to feel optimistic, complacent even, about social media improving political debate. How could technology which enabled more people to participate, and more information to be shared, not lead to a more inclusive, better informed debate? Yet by the end of the 2010s social media had become a big part of the public sphere, and online political discourse was plagued by trolling, abuse, and disinformation. And whilst most of us have experienced some level of online abuse, or felt frustrated as political conversations get derailed by insults or inaccuracy, those who are already vulnerable or marginalised have been hit hardest.

This has grave implications for diversity in politics and for freedom of expression. A 2019 House of Commons report noted that amongst MPs, the most vicious abuse is directed “particularly towards women and minority groups”. Amnesty International, in a damning report into “toxic” levels of misogynistic abuse on Twitter, found that almost a third (32%) of female users who experienced online abuse subsequently stopped posting content expressing their opinion on certain issues.

In the first decade of social media, we hesitated to hold social media companies fully responsible for the rising levels of abuse, incivility and misinformation on the platforms they built and profited from. Discussion of how to address the deterioration in online debate tended to focus on bad individual users, and individual bad pieces of content, to the detriment of considering regulatory and design-level solutions. We exhorted companies to be better at moderating individual bits of content, or tougher with suspensions or “bans” on individual users. Individual victims were encouraged to be resilient, and to report and block their abusers. Everyone else was encouraged to “not feed the trolls”.

It’s many decades since we took a similarly laissez-faire and individualistic approach to offline public spaces. An architect designing a physical space is subject to myriad regulations which, whilst imperfect and not always properly enforced, at least recognise that regulation is needed to promote positive social outcomes such as public safety and public health. The bigger and more public a space is, the more stringent the rules become.

If an urban environment becomes a no-go area, or a road junction a hotspot for accidents, of course we look to law enforcement to tackle individual perpetrators. But we also ask questions about design and context and their effects on individuals’ behaviour. Do we need traffic calming? CCTV? A path? A park? Some trees or some lighting? What can we learn from what’s worked elsewhere? Social media is a large public space with an antisocial behaviour problem. So we should be asking the same sorts of questions.

Clean Up the Internet is an independent, UK-based organisation concerned about the degradation in online political discourse, launched last year by Stephen Kinsella, a competition lawyer with a long-standing interest in human rights, digital technology, and democracy. One element of the architecture of social media which we think requires urgent consideration is the role and prevalence of anonymous, pseudonymous, and unverified users. Anonymity is a good place to start, because it contributes to both the main scourges of online discourse: abuse and misinformation.

There’s overwhelming evidence that social media users who feel unidentifiable are more likely to engage in rude and abusive behaviour. In one recent experiment, participants were randomly assigned either an anonymous Twitter account or one which identified them. Anonymous participants were far more likely to create and retweet misogynistic content.

There’s also substantial evidence that the use of inauthentic accounts is an important tool for those wishing to create and amplify misinformation – a recent study found that the dominance of the AfD, the German far-right party, on Facebook during the 2019 European elections was fuelled by a “dense network of suspect accounts”, with tens of thousands of pro-AfD accounts displaying “multiple features commonly seen in fake accounts though rarely in real ones”.

Most obviously, if a platform lacks robust identity verification, all its other rules against abuse or disinformation become less meaningful. How can a platform enforce a “lifetime ban” on a persistent abuser if they can simply create a fresh account under a new false name, using a new email address or phone number acquired in a matter of seconds?

It is not necessary or desirable to “ban” anonymity outright. It is possible instead to restrict the misuse of anonymity without unduly constraining important legitimate uses, such as by whistle-blowers or activists. This is because the usage patterns of these benign uses of anonymity differ significantly from those of accounts abusing anonymity for toxic purposes.

Regulation should require social media platforms to demonstrate how they have designed an approach to managing anonymity with a view to minimising its abuse. Key components of that would be offering all users the option of a robust means of verifying their real name and location, and offering all verified users the ability to opt in or out of hearing from users who have chosen not to verify themselves.

Users should be free to choose to continue unverified, but other, verified, users should also be free to choose whether or not they want to hear from them. All users should be able to see who is verified and who isn’t, and judge for themselves what this might mean for a user’s credibility. A whistle-blower, with a manifestly good reason for remaining anonymous, would continue to be able to build trust, and a following, through the credibility of their content.
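
To make this design concrete, here is a minimal sketch in Python of the two components just described: a verification flag that every user can see, and a per-user preference controlling whether content from unverified accounts appears in their feed. All of the names here (User, Post, filter_feed and their fields) are hypothetical illustrations, not any real platform’s data model or API.

```python
# A minimal, hypothetical sketch of the opt-in filtering model described
# above: verification status is public, and each user chooses whether
# to see content from unverified accounts.
from dataclasses import dataclass


@dataclass
class User:
    handle: str
    verified: bool = False          # has completed robust identity verification
    show_unverified: bool = True    # opt in/out of content from unverified users


@dataclass
class Post:
    author: User
    text: str


def filter_feed(viewer: User, posts: list[Post]) -> list[Post]:
    """Hide posts from unverified authors only if the viewer has opted out."""
    if viewer.show_unverified:
        return posts
    return [p for p in posts if p.author.verified]


if __name__ == "__main__":
    whistleblower = User("anon_source")                    # unverified by choice
    journalist = User("reporter", verified=True)           # sees everyone
    cautious = User("reader", verified=True, show_unverified=False)

    feed = [Post(whistleblower, "leaked memo"), Post(journalist, "analysis")]
    print([p.text for p in filter_feed(journalist, feed)])   # both posts
    print([p.text for p in filter_feed(cautious, feed)])     # verified authors only
```

Note that in this model the whistle-blower’s posts remain visible to every user who has not opted out, exactly as described above: anonymity is preserved, and each reader decides how much weight to give it.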

Regulation should set a framework, not stipulate the exact mechanics of how a specific platform’s verification process should work. Challenger banks, dating sites, car rental sites, and insurance apps have all come up with their own solutions to offer millions of users ways to verify their identity online. Twitter did once offer a more robust verification option to a very select band of its users – the “blue-tick”. This is “currently on hold”, and was only ever available to figures Twitter deemed to deserve it, through an opaque internal process. Only 0.05% of users are verified, most of them politicians, journalists and sports stars, and they have access to extra features, including a separate feed for content from other verified users. Facebook offers a more robust verification process for users running political adverts, which involves confirming a real name and address, which Facebook checks by posting the user a letter containing a verification code.

Of course, these verification processes generally require additional personal information or documentation from a user. An important additional focus for regulation and enforcement would therefore be to ensure that any additional data gathered is treated securely, used only for verification, and not retained for longer than necessary.
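
As a purely illustrative sketch of that requirement, again in Python with hypothetical names, a compliant verification flow would persist only the outcome of a check, never the submitted document itself. The check_document function below is a placeholder for whatever method a platform adopts (a posted code, a document scan, a bank login), not a real API.

```python
# Hypothetical sketch of data minimisation in a verification flow: the
# submitted document is used once, in memory, and never persisted; only
# the outcome and a timestamp are retained.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class VerificationRecord:
    user_handle: str
    verified: bool
    checked_at: datetime  # retained; the document itself is never stored


def check_document(document: bytes) -> bool:
    """Placeholder for a real identity check (posted code, document scan...)."""
    return bool(document)  # assumption: any non-empty document passes


def verify_user(handle: str, document: bytes) -> VerificationRecord:
    outcome = check_document(document)  # document used only for the check
    # Persist nothing but the result: no copy of the document is written out.
    return VerificationRecord(user_handle=handle,
                              verified=outcome,
                              checked_at=datetime.now(timezone.utc))
```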

Historically, as technologies have scaled and matured, democratic forces have come together to ensure that those technologies deliver social goods, or at least that responsibility is taken for their social harms. For social media, so far designed by private companies for their private gain, with little regulatory oversight or regard for overall social impact, such an overhaul is long overdue. Tackling the misuse of anonymity would be no magic bullet, but it would be a sensible place to start.

Clean Up the Internet has written a lengthier draft proposal setting out all this in more detail. It’s up here, comments enabled, and we would love to hear your feedback.

This post originally appeared on openDemocracy and is reproduced with permission and thanks.



from Inforrm's Blog https://inforrm.org/2020/01/31/new-year-new-internet-why-its-time-to-rethink-anonymity-on-social-media-david-babbs/