Are you being served or is Big Brother at large?


21st June 2019

This article was first written for the Western Mail, published on 20 June 2019.

With Uber set to ban passengers with poor ratings, Karl Foster, of Blake Morgan, explores the legal implications of businesses using ratings systems to ‘grade’, reward and penalise customers.

Will your Uber rating drop if you don’t tip enough, talk too much, or talk too little? Perhaps it’s something you’ve never thought about, but it might be time to consider your in-vehicle behaviour after Uber announced it will ban passengers with consistently low ratings.

In a blogpost, Uber announced that those with a “significantly below average rating” will be warned and given “several opportunities” to improve before losing access to the Uber app. This has thrown the secretive nature of the company’s rating system into the spotlight, with critics suggesting it encourages discriminatory behaviour. Passengers have reported having their ratings docked after taking rides home with same-sex partners, and after rebuffing advances from drivers. Others have complained of having no idea what they did to deserve a low rating, or whether their rating was linked to how much they tipped.

Despite these criticisms, Uber is perfectly within its rights to impose this new system; with the exception of utilities – where there is a statutory obligation to provide services – businesses are generally free to determine who they serve, provided they do so in a lawful manner. A pub landlord can refuse to serve any customer, for instance. When customers order an Uber, they are agreeing to use its community of drivers subject to certain terms and conditions of use.

A good illustration of the legal issues at play here is a recent case that went all the way to the Supreme Court: the refusal of a bakery in Northern Ireland to make a cake with a message supporting gay rights. The case captured headlines around the world, and the ruling held that the bakery was entitled to refuse because it objected to the message itself, not to the personal characteristics of the prospective purchaser; refusing on the basis of those characteristics would, of course, have fallen foul of equality legislation. The case reminds us that businesses have a great deal of autonomy in choosing who they serve, provided they are not shown to be using discriminatory practices.

In defending the new system, Uber said: “While we expect only a small number of riders to ultimately be impacted by ratings-based deactivations, it’s the right thing to do.”

Regardless of how many riders are affected, this decision from Uber gives us all food for thought. Could this be the first step towards a dystopian future in which our ability to access and benefit from tech-based services is curtailed by our own perceived failings, reflected in low ratings? How are social influence and bias to be addressed? In this context, the key will be transparency and objectivity. Uber will need to manage the reasonable expectations of both parties: a driver who values chattiness will downvote a passenger who values silence. Interestingly, Uber already allows premium passengers to select a “silent” mode to enjoy an undisturbed ride without worrying about their rating.

The power of ratings, as with big data generally, lies in the trust implicit in large volumes of feedback. We are more likely to trust a TripAdvisor rating averaged across 1,000 reviewers than across 10. Uber says there will be controls in place to limit gaming of the system and to help address unconscious prejudice or bias. This is important: if lower ratings were consistently awarded to riders wearing burqas, for instance, it is easy to imagine a scenario in which equality legislation could be breached.

Meanwhile, it isn’t just big business turning to ratings systems.

Imagine your behaviour being rated by your own government, affecting your ability to travel freely. That’s the reality for millions of Chinese people who have been blocked from buying plane or train tickets as part of the country’s controversial “social credit” system, aimed at improving the behaviour of citizens. According to the National Public Credit Information Centre, Chinese courts banned would-be travellers from buying flights 17.5 million times and from buying train tickets 5.5 million times in 2018.

The social credit system, which will eventually give every Chinese citizen a personalised score, aims to incentivise “trustworthy” behaviour through penalties and rewards. Authorities collected more than 14 million data points of “untrustworthy conduct” last year, including using expired tickets, smoking on public transport or not walking a dog on a leash.

Critics claim that authorities in China are using technology and big data to create an Orwellian state of mass surveillance and control.

While it’s a big leap from one ride-sharing app’s attempts to keep its drivers happy to curtailing the freedoms of a country of 1.42 billion people, it’s not difficult to grasp the long-term implications of awarding ratings to citizens. Neither is it hard to imagine the various ways it could become the standard way of doing business. In many respects it already is: credit scoring, for instance, is a long-established route towards securing a mortgage. Equally, police forces have trialled profiling via the Harm Assessment Risk Tool (HART), which uses credit scores as a data input when assessing risk in custody decisions.

The debate around big data and our individual right to privacy continues to rage, yet millions of us gladly hand over our details to use services like Google, Facebook and Amazon.

It’s easy to see why we might feel wary of being “scored” and treated accordingly when it comes to the services we rely on. Soon life could imitate art: anyone who has seen the Black Mirror episode “Nosedive”, in which people are rated on all of their daily interactions, will recognise the potential implications.

Any algorithm can be subject to accusations of unconscious bias, and rating systems are no exception. Proprietary AI algorithms and the technology companies behind them currently set their own agenda, which raises an important question: as this technology increasingly pervades everyday life, is there a role for the state in ensuring our data doesn’t end up being used against us?

Karl Foster, head of technology at Blake Morgan, is a commercial lawyer specialising in technology, media, telecoms and financial services.
