Image courtesy of Damian Gadal under Creative Commons licence
30th June 2019

Why you should not pay for “free” online services with your personal privacy

If you are one of “those people” who respond to any discussion around online personal data privacy concerns with the statement “Well, I don’t do anything wrong so why should I care if Big Brother is watching what I do?” this article is definitely for you.

Perhaps you are already becoming concerned after recent scandals such as Facebook’s release of personal data about its members to Cambridge Analytica – data which may have been used to influence elections on both sides of the Atlantic. Perhaps you have asked Google the question “Where was I at 10:15 on 20th June 2016, and who was I with or near?”, been horrified to find it can give you an accurate – if not always correct – answer, and are waking up to just how much of your personal life and private actions are known – and spread around across who-knows-how-many shady companies and organisations. If so, you may find this article interesting too.

Whether you bother to read it and do anything as a result is entirely up to you – after all most of us still live in free societies and are allowed to make our own choices – for now. Of course, anyone managing to read this from within China has no such choice (see the Forbes magazine article on the subject) – and soon, if democratically elected governments get their way, you will have no choice but to have every aspect and activity of your life tracked and monitored.

Let’s get a big fallacy out of the way

Most people with little to no knowledge of how computers and software work will happily take any information that comes out of a computer as fact.

This is a dangerous belief – especially when those who believe it include police, judges and politicians.

I’ll give a very simple example. Back in the 1990s I was stopped by UK police on a motorway and accused of breaching the 70 MPH speed limit by a not inconsiderable 47 MPH – in other words I was accused of driving at 117 MPH. As the car I was driving at the time was a Saab 900 whose top speed while carrying no more than a driver under ideal test track conditions was only 102 MPH – and at the time I was returning from holiday with my then young family filling every seat and the car stacked to the roof, every underfloor compartment full of a month’s worth of holiday paraphernalia – and on top of the car sat a large, very non-aerodynamic luggage box equally full of “stuff” I tried to explain that it was impossible for my car to travel at that speed – especially as at the section of road over which they said they had measured my speed I had just pulled out from behind a lorry travelling at 55 MPH after allowing faster traffic to pass.

The response from the two police officers was “Well, our computer said you were going that fast and you can’t argue with a computer so we’ll be having your licence please!”

The computer they were referring to was a device called VASCAR – a very simple computing device that calculated speed by measuring the time taken to travel between two previously entered points. Since speed = distance / time, the device gave an immediate readout of the speed of the vehicle just measured.

As the officers became increasingly agitated while I tried to explain to them that something was wrong I eventually asked them to show me the time and distance measurements used by the VASCAR device (a legal right in the UK) – which made them very angry … they accused me of “wasting police time – a criminal offence” and suggested they would “cart me off to the nick, leaving my family stranded on the safety shoulder of a busy motorway without protection“. Such was the strength of their belief that “computers can’t be wrong”.

There is a saying in computing circles that goes back to the dawn of the industry – Garbage In, Garbage Out, or GIGO for short. Simply translated, this means that if a computer is fed incorrect data it will reliably and precisely produce faulty results.

The correct procedure for using VASCAR was, at the start of each shift, first to use the device to measure a known distance and correct for errors caused by tyre pressures or wear, and then to drive between the two points to be used for speed checks, recording the actual distance between them – with the freshly calibrated vehicle – directly into the VASCAR device. These officers, as I later discovered, had done no such thing.

So, having taken photos of the time, distance and speed readouts from the VASCAR I wished the officers good afternoon and went on my way. Over the next week I drove back and forth along the stretch of motorway (it happened to be on my regular route to work), measuring each time the distance between the two bridges the police had used. Though my car’s odometer was neither calibrated nor particularly precise, my measurements consistently showed a distance approximately 40% shorter than the police VASCAR unit’s readout.

I disputed the alleged speeding offence and eventually ended up in a court to defend myself. The police were so confident the VASCAR “evidence” was irrefutable they didn’t bother to show up. Probably just as well – as armed with the photos I had taken on the day of the VASCAR readouts, some very large scale Ordnance Survey maps of that stretch of motorway and a ruler I was able to show the court that the true distance between the bridges was a staggering 45% shorter than the distance used by the VASCAR unit. One simple calculation and the truth became clear – I was actually travelling at an average speed of 64.5 MPH between the bridges – entirely consistent with the report I had given at the time. My case was immediately thrown out – followed by the quashing of fines issued to dozens of other drivers who had been pulled over and accused of speeding by the same officers on the same day – each of whom had swallowed the line that “a computer doesn’t lie – and can’t be wrong“.
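The arithmetic behind that correction is simple enough to sketch in a few lines of Python. The figures are the ones from my case; the code itself is purely illustrative:

```python
# VASCAR computes speed = distance / time, so any error in the stored
# distance passes straight through to the displayed speed: GIGO in action.

def vascar_speed(distance_miles, time_hours):
    """Speed in MPH, exactly as the device computes it."""
    return distance_miles / time_hours

displayed_speed = 117.0   # MPH shown by the unit
distance_error = 0.45     # the stored distance was ~45% longer than reality

# The true distance is only 55% of what the unit believed, and speed
# scales linearly with distance, so:
true_speed = displayed_speed * (1 - distance_error)
print(f"True speed: {true_speed:.1f} MPH")  # ~64 MPH, matching the average reported
```

Because the formula is a straight division, a 45% error in the distance produces exactly a 45% error in the speed – no mystery, no malice from the machine, just bad input.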

I eventually learned that among this particular police traffic unit its officers had decided that the calibration of the VASCAR unit followed by a new measurement between the points to be used was too much bother by far. One officer had taken it upon himself to jot down the distance readings from his car’s VASCAR unit for all the popular “speed trap” points they used – and gave copies to all the other officers. So, instead of measuring (even vaguely accurately) the distance between the two bridges the fine wielders of authority who stopped me and dozens of other motorists that sunny afternoon simply dialled-in to their VASCAR unit a distance setting read from the sheet passed around the traffic unit. Unfortunately for them, they used the number for the wrong pair of bridges along that stretch of motorway. Hence – GIGO … their VASCAR unit spent a few hours spitting out speeding tickets to entirely innocent motorists while they applauded themselves on the fine work they were doing to keep everyone safe from idiots who cannot understand that SPEED KILLS! (don’t get me started – I’ll just say that if that statement had an ounce of truth we should all spend our lives entirely stationary and live forever).

So, if a simple computer cannot be relied upon, what happens when we scale up to an AI-driven, monster-sized computing cluster?

Simple. Not only does the same GIGO principle apply to these machines and the algorithms they run, but it has proven almost impossible to “train” one without even slight prejudices in the mass of training data fed into them sending them off into quite extreme positions.

Which hasn’t stopped Google, Facebook et al deploying such machines in their never-ending pursuit of profit. As I wrote to a friend recently, the sole purpose of Google’s and Facebook’s activities is to parcel people (including you if you fall within their data hoovering clutches) into “lists” that they sell to organisations who want to sell you something, sway your political views – or target you for hate crime. Neither organisation worries itself too much over the accuracy of these lists (eg; whether an individual should actually be included or not) or what purpose they are used for – as long as someone wants to buy them.

Remember that these lists result from the private and personal data that users of these companies’ services allow them to gather after being attracted to the shiny gadgets, services and apps they provide – without bothering to read the (admittedly long, multi-page, fractured and densely legalese) contracts that say, in short (for your benefit) ALL YOUR DATA BELONG TO US.

I’ll talk about the dangers that arise from giving away your privacy in a moment. But, having established that you are paying for the services you use with your personal data, it is worth asking: is your privacy worth anything in monetary terms?

Last year Google earned $116 BILLION just in advertising revenue based on what it knows about you. Facebook reported revenue of $59 BILLION in the 12 months to March 2019 – an increase of 32% year-on-year – despite all the scandals that have rocked the company in the period.

So, there is part of the answer – your personal data produces ~$175 BILLION per year for just two companies exploiting your privacy. Add in all the shady data brokers and other personal data harvesters and trackers who mostly fly well under most people’s radar, and you have an industry fast approaching revenues of a $TRILLION each year – all from what these companies can get to know about YOU.

How do these companies collect your personal data?

The ways in which all data harvesting companies operate are pretty similar and well documented so I won’t repeat them here other than to add a few missing pieces that aren’t covered in most online articles.

The Pingdom article “How Google Collects Data About You and the Internet” and the Salon article “4 ways Google is destroying privacy and collecting your data” reveal the main ways in which your personal data is gobbled up.

If you would like to scare yourself witless, follow the instructions in the CNBC article “How to find out what Google knows about you and limit the data it collects” to discover what Google admits it knows about you. The map that shows everywhere you have ever been since you first logged into a Google service from your mobile phone is normally enough to cause most people an attack of the colly-wobbles.

The articles linked above reveal only part of the picture of how – and how deeply – you are tracked. To understand more …

  • First we must look at the apps installed on your phone. Phone operating system manufacturers (essentially Apple and Google) have been slowly forced to provide controls over the permissions individual apps have to access the sensors in your phone. These include sensors for location tracking (GPS, WiFi, Bluetooth, microphone, inertial movement etc), listening (microphone), watching (multiple cameras) and your ID through various device identifiers. If you haven’t already done so I really recommend you check why (for example) that “free” weather app you’re so fond of needs access to your device identity, microphone and contacts.
  • Shopping malls and individual retailers offer free WiFi for less than altruistic reasons. Whether you connect to it or not, your phone – if WiFi is left turned on – is constantly seeking possible connections, and in doing so exchanges one of its unique IDs (its WiFi device MAC address) with every WiFi point it comes near. These IDs are happily hoovered up by the shopping mall and used to look you up in a database (because hundreds of personal data tracking companies know who you are and the full range of IDs inside your phone), thereby revealing exactly who is in the shopping mall.
  • It gets worse. No single WiFi access point could cover an entire shopping mall so multiple access points are installed throughout the building. Altruism? Nope! Using WiFi triangulation (in short, how strong the signal from your phone is when picked up by several WiFi access points) allows the mall owner to know exactly where you are in the mall – which shop you are in or whose window you are looking at.
  • So, not only does the shopping mall know who you are it knows where you are.
  • It gets worse. Individual stores use several technologies to identify not only precisely who is visiting their store but precisely which department or counter they visit. These use WiFi triangulation in the same way as the shopping mall, plus shorter range “beacons” that use Bluetooth, ultrasound or NFC (Near Field Communication) to detect your presence at a counter or department – by “pinging” the Bluetooth receiver, the microphone (ah, so that’s the reason the weather app wants to access your device’s microphone – it can ‘hear’ frequencies well outside the human hearing range) or the NFC transceiver embedded in your phone or credit cards, if you have ever used one to purchase something in the store. And, if you do buy something, that purchase is recorded alongside your identity, credit card details and all the phone and credit card IDs the store can grab. All without your permission.
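The triangulation trick described above can be sketched in a few lines. The access point positions and signal strengths below are invented, and real systems use properly calibrated radio models – but the principle is the same: the stronger the signal, the nearer the phone is to that access point.

```python
# Toy WiFi positioning: estimate a phone's location as a weighted centroid
# of the access points that hear it, weighted by received signal strength.

def estimate_position(readings):
    """readings: list of ((x, y) AP position, RSSI in dBm). Returns (x, y)."""
    # RSSI is more negative the weaker the signal; convert to positive weights.
    weighted = [(pos, 10 ** (rssi / 10)) for pos, rssi in readings]
    total = sum(w for _, w in weighted)
    x = sum(pos[0] * w for pos, w in weighted) / total
    y = sum(pos[1] * w for pos, w in weighted) / total
    return x, y

# Three access points in a mall; the phone's probe request is heard by all three,
# loudest by the one at (0, 0).
aps = [((0, 0), -40), ((100, 0), -70), ((0, 100), -75)]
print(estimate_position(aps))  # lands very close to the (0, 0) access point
```

With a handful of access points per floor, this crude estimate is already enough to tell which shop window you are standing in front of.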

I could go on to talk about web tracking cookies, single pixel image tracking, screen grabbing scripts and all manner of other invasive and very nasty technologies used to steal your personal data (who you are, where you are, where you have been, who you are with, what you are doing … the list goes on and on).

But let’s just look at one more increasingly common and rightly scary technology – facial recognition.

Facial recognition – is it good or evil?

If you read the article linked at the very start of this piece about the dystopian combination of technologies used in China to control its population to almost “thought control” levels of behaviour, you will have seen that facial recognition is being widely deployed as part of the universal surveillance machine China’s government is attempting to construct.

Small problem. Facial recognition technology, in its most advanced form available today, doesn’t work.

So what, you might say – silly Chinese for wasting their money.

Not so fast – there’s a lesson here which gives valuable insight into the dystopian world we are sleep-walking into. Because it is not just China that has deployed facial recognition throughout its cities – they are merely the most ambitious users of the technology.

If you live in the USA or Europe and walk the streets of any major city, or pass through any large railway station or airport, those “security cameras” – so ubiquitous that you pay them no attention – are almost certainly connected to some form of “AI driven” facial recognition system. So, do these things do any good or should we be troubled?

I think we need to be troubled. At the moment, the technology is a waste of money. For example, in the UK several police forces have deployed the technology both statically in city centres and at events such as large gatherings (eg; the Notting Hill Carnival in London) and at perfectly legal civil protests.

Why is the technology currently a waste of money? Because it doesn’t work. The UK organisation Big Brother Watch recently submitted a number of Freedom of Information requests to police forces across the UK. Before revealing the responses, take a look at what London’s Metropolitan Police (“the Met”) have to say about the technology on their own website. A nice, reassuring explanation – all for our protection.

Now for the reality. The Met’s response is reported in the iNews article “Met police’s facial recognition technology ‘96% inaccurate’”, which also discusses some of the breaches of personal privacy the technology brings. It currently breaches European GDPR legislation: despite the Met’s assurance that they displayed posters wherever the technology was deployed, no attempt was made to obtain consent from a single individual to having their very personal data (their face) recorded and stored in a database for a year or longer – and one man who did verbally object to having his face recorded was arrested. But why should a police force bother about complying with the law?

The BBC headlined its coverage “Face recognition police tools ‘staggeringly inaccurate’” and went on to report that the technology’s use in London had incorrectly identified 102 people as potential suspects. The Met assured the BBC that nobody had been arrested but failed to mention that 102 entirely innocent people had been harassed and accused of crimes about which they knew precisely nothing.

In Wales, the police managed an even bigger result. Their system, deployed at an international football match, managed to falsely identify 2,000 people as wanted criminals. Showing blind stupidity (that belief that computers can’t be wrong – again!), the force blamed the poor quality of images provided by Interpol and UEFA for the high number of false positives.

GIGO – remember? But now escalated from a potential speeding ticket to potential arrest as a known football hooligan – when all you had been doing is innocently spending some leisure time attending a football match.
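A few lines of arithmetic show why figures like “96% inaccurate” are almost inevitable when you scan enormous crowds for a handful of wanted faces. The numbers below are hypothetical, but the base-rate effect they illustrate is real:

```python
# Scan a 60,000-person crowd containing 50 genuinely wanted people,
# using a system that is 99% accurate in both directions.

crowd = 60_000
wanted = 50
sensitivity = 0.99          # chance a wanted person is correctly flagged
false_positive_rate = 0.01  # chance an innocent person is wrongly flagged

true_hits = wanted * sensitivity                     # ~49.5 correct alerts
false_hits = (crowd - wanted) * false_positive_rate  # ~599.5 false alerts
share_wrong = false_hits / (true_hits + false_hits)

print(f"{share_wrong:.0%} of alerts point at innocent people")
```

Even with a (generously assumed) 99%-accurate system, the overwhelming majority of alerts are false – because almost everyone in the crowd is innocent. Feed in poor-quality Interpol mugshots and the real-world numbers get far worse.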

So, right now I’d say that facial recognition technology is a positive danger to people simply going about their legal business, and I agree with UK Information Commissioner Elizabeth Denham when she said police had to demonstrate that facial recognition was “effective” [and] that no less intrusive methods were available, going on to say “Should my concerns not be addressed I will consider what legal action is needed to ensure the right protections are in place for the public“.

The future of Facial Recognition technology

Facial recognition technology will improve with time. The Chinese are not being silly. Once the cameras are in place (and never mind China, the UK has more CCTV “security cameras” in operation than there are people in the entire country – making the UK the most closely watched population on the planet) and the recognition technology improves we can be tracked and monitored even if we choose to leave our mobile phones, connected watches, fitness trackers and all the other devices that currently secretly monitor us at home.

In shopping malls and stores, why bother with all that WiFi / Bluetooth / ultrasonic / NFC nonsense when a couple of CCTV cameras can do the job just as effectively?

You can have your own views about Google Earth – the zoomable views taken from satellite imagery covering most of the planet. Maybe you’re happy to show off your property to the world – maybe you object to your expensive car collection being shown to every crook on the planet. Whatever.

But, how do you feel about satellite technology that can watch you and recognise you the moment you step outside your home or place of work and then follow you in real time while you go about whatever it is you want to do? Fantasy? Read this article from the MIT Technology Review to understand that the technology to do just that is almost certainly in operation and that it is only government restriction that prevents the necessary level of detailed imagery being made available to commercial interests. But that will eventually happen.

So, what happens when Google starts offering “Google EarthTube” or whatever they might call a live video streaming service capable of zooming into any spot on the planet?

I live in the south of France where the climate is pleasant and it is not unheard of for people to strip off and do a bit of sunbathing in their own large gardens … (and here’s the important bit) … in full expectation that they are doing so in private. As private as if they were in their bedrooms.

What price privacy then? Will people still be prepared to behave as they wish – strip off to collect some vitamin D? Or will their behaviour change with the realisation that millions of 15-year-old boys could be watching?

Do you still not mind giving up your personal data?

If the way that personal data and a devil’s brew of personal-data-stealing technologies is currently being used in China – not just to keep its citizens safe from harm or combat terrorism, but to actively control thoughts and deeds, including such private matters as religious beliefs and sexual preferences – doesn’t scare you, and the trials of facial recognition and unannounced deployment of the technology in public spaces in democratic societies cause you no concern, then please stop reading and accept my apologies for taking up your time – happy future nightmares.

Did we ask for our privacy to be taken away?

No we did not. Nor were we informed in any realistic way that it was being removed from us.

And yet it is being removed at frightening speed, without our consent and by people who don’t even understand the basic workings and limitations of the technologies being deployed – let alone what happens when the mass of data that this combination of mass surveillance (spying) tools produces gets thrown into a heap and some “AI” is set the task of making sense of it all – because, believe me, that data pile is far too big for any group of human minds to organise, sift through and get any “results” from.

To understand the dangers, I will examine just one tiny piece of the massive data pile and ask

Why is everyone so keen to get their hands on all my contacts?

Let’s examine what can be done with nothing more than one person’s list of contacts.

The collection of contact data is conducted for numerous purposes. First, just accept that unless you have been extraordinarily careful and vigilant with the apps installed on your mobile phone, the social media sites that ask for access to your email – or just use Google services – you and all of your recorded contacts are out there, in the wild, waiting to be used for some purpose you might never consider.

Accept that all the smoke screen of “advanced AI” and “super-intelligent machine learning” peddled by tech companies is just so much hot air, and that what actually happens in their algorithms is a very crude probability matrix – as biased from the outset as the people who “programmed” it and set its parameters – and you may start to understand this:

  • Take a look through your contacts list – if like most people you have collected over time names and phone numbers of people you know only peripherally (your dentist, the guy that services your boiler, members of your sports club, business contacts …) ask yourself how an algorithm determines your relationships and “weighs up” the strength/value/nature of any given contact to you.
  • Even if it gets access to your call history (something else data brokers are very keen to get their mucky hands on) there is almost nothing of import that helps determine the nature of your relationship with any particular contact. To illustrate, some of the most valuable (to me!) contacts in my contacts list are people I went to school with and have known for over 50 years. But these days, living far apart we have no need to talk frequently to arrange get-togethers and it’s likely that I phoned my boiler guy more often last year than I phoned any of them – in fact I doubt I called them at all from my mobile.
  • So let’s look at another angle to “measure” a weighted value of a given contact. Do you appear in THEIR contacts list? Now it’s very likely that I will appear in the contact lists of friends I have known for over 50 years – and they will appear in each other’s list … a “network” is forming. But I probably get stored in the contacts list of my boiler guy – simply because I am a customer. I happen to know that I have recommended his services to several of our local friends and they have become customers. Now we have another network – all those friends are in my contacts list and in my boiler guy’s list and he in theirs. So, what’s a poor, dumb algorithm to do?
  • Dive deeper, of course! Consider yourself as the root of a tree – every single one of your contacts is a branch of your tree. Now along comes a spider and spins a web connecting all the branches where their contact list contains your details. Getting a bit complicated? Nope – we’re not even started.
  • When the algorithm looks into each of your contacts’ databases it finds a whole bunch of other contacts that may or may not be in yours – say someone as innocent as another member of a sports club you both belong to. The algorithm sets about the task of making connections – so if another sports club member appears in one of your contacts’ contact lists … and he/she was once given your contact details because he/she wanted to challenge you to a squash/golf (insert your favourite sport) competition but never got round to it – a more strongly weighted connection is, nevertheless, made between the two of you.
  • The algorithm is, of course, entirely devoid of the knowledge that you’ve never even met this third person.
  • So … here we are, only one step removed from your contacts list and, returning to the tree picture I hope you still have in your head, we are already into n-dimensional territory (hint: just imagine the web of connections between people who pop up at random in random contact lists and the picture should appear).
  • Now … here’s where it starts to get really scary …
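The bullets above can be sketched in a few lines of code. The names and lists are invented; the point is that a broker’s algorithm sees only the edges of the network, never what those edges actually mean:

```python
# Contact lists are just name -> set-of-names. A naive scoring function
# cannot tell a 50-year friendship from a boiler repairman.

contacts = {
    "you":         {"alice", "bob", "boiler_guy"},
    "alice":       {"you", "bob", "club_member"},
    "bob":         {"you", "alice"},
    "boiler_guy":  {"you", "alice"},   # alice became a customer too
    "club_member": {"alice"},
}

def connection_weight(a, b):
    """Crude score: 1 for a direct link, plus one per mutual contact."""
    direct = 1 if b in contacts.get(a, set()) or a in contacts.get(b, set()) else 0
    mutual = len(contacts.get(a, set()) & contacts.get(b, set()))
    return direct + mutual

# The algorithm scores your boiler repairman exactly as highly as an old friend:
print(connection_weight("you", "bob"))         # direct link + 1 mutual (alice)
print(connection_weight("you", "boiler_guy"))  # direct link + 1 mutual (alice)
```

Both pairs score identically, which is precisely the problem: the graph records that a relationship exists, but nothing about its nature.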

Our (actually their) poor, dumb algorithm now has a massive web of connections between people who – to the best of its witless knowledge – are somehow connected … even though it has no way of differentiating close friend from commercial service provider.

Doesn’t matter.

These companies are in the business of parcelling people into saleable groups so when an “advertiser” asks for all the people who might be interested in ex-pat financial services living in France they can sell a “list” and rake in the money. So the next task is sorting all these people and connections into groups … somehow.

So … ask yourself … “am I feeling lucky?”

Because if you ask yourself a second question – “what / how much do I really know about the people in my contacts list?” you can begin to see how the cards are stacked against you and get an inkling of an insight into the gross dangers these misuses of technology inevitably lead to. And why, whenever people stand in front of me and say “privacy? huh! I never do anything wrong so I don’t care if someone is tracking me everywhere I go, watching everything I do and monitoring everyone I know or speak to” I get an overwhelming urge to (metaphorically – I don’t have a violent bone in my body) beat them about the head until their common sense wakes up.

Danger – your contacts meet your contacts contacts!

In short, there is a very large likelihood that amongst your contacts there will be people with a criminal past. Very probably unknown to you – but certainly known to the data brokers. Equally, you will have contacts in your list with all manner of secret perversions and interests of the kind they wouldn’t want their mothers knowing about.

So … all this data gets fed into an AI algorithm (if it wasn’t so serious I’d laugh – instead I find myself crying!) programmed with all sorts of assumptions and biases that affect the outcome even before it has looked at its first byte of data … and it is tasked with drawing together “probabilities” (aka “likelihoods”, or more simply “stabbing a guess”) at how you are connected to other people in your contacts list – and they to others in their contacts lists – and the n-dimensional networks that then form.

The actual task, remember, is to place you in a group that forms a list that can be sold.

So we return to the poor, dumb algorithm – which sees all these connections but has no way of weighing them up.

Bigger danger – here comes more personal data collected about you

An “AI” (actually just a bigger, more complicated algorithm) mashes up the n-dimensional contacts database with the even bigger data set containing everywhere each person (lest we forget, we are talking about living individuals here) has ever been, everything they have done, every web page they visited, every phone call they made and to whom those calls were made. From this a “profile” (itself n-dimensional) is constructed, allegedly “describing” them and their supposed (guessed-at) interests and activities in great detail – the better to form the biggest possible (this is multi-billion-$ commerce, remember) list an arbitrary advertiser – or other enquirer – might be willing to pay for.

How does the AI generate the profile?

Let’s imagine that your contacts list contains (I hope unknown to you!) a paedophile. That person and their perversion is unknown to the police, their family and the community they live in. But their activity and every perverted step they take on-line is watched over by the data gatherers.

So … go back to the start. The first algorithm (which built the n-dimensional contacts list but had no idea how people were connected to one another) is asked by the “AI” ‘who else does this pervert know?’ … and your name pops up. The question actually returns an n-dimensional list (simple analogy: a 3-dimensional web – but actually in many more dimensions) which now throws up a significant number of individuals with alleged paedophile interests. It is highly likely that you will appear in the contacts lists of, or have some other connection to, a significant proportion of this group of people. It “follows” (see? inference = ‘proof’) that because you have connections to so many people in the artificial network known to have paedophile interests, you are likely to have paedophile interests too.
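That inference step amounts to little more than counting overlaps. A deliberately crude sketch, with invented names and an invented threshold:

```python
# A broker holds a list of "flagged" identities; the "AI" asks what fraction
# of your network appears on it. Cross an arbitrary threshold and you are
# flagged too - no human ever checks whether you've even met these people.

flagged = {"pervert", "stranger_a", "stranger_b"}   # the broker's watch list

your_network = {"pervert", "old_friend", "boiler_guy",
                "stranger_a", "stranger_b", "club_member"}

overlap = len(your_network & flagged) / len(your_network)  # 3 of 6 = 0.5
THRESHOLD = 0.3   # entirely arbitrary - which is exactly the problem

if overlap >= THRESHOLD:
    print("Added to the 'persons of interest' list")
```

Note that two of the three “flagged” connections are people you have never met – they arrived in your network through the n-dimensional web-spinning described above. Yet they count just the same.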

Don’t believe me? Cast your mind back a few years to when the game of “6 degrees of separation” was the fad of the day. Stated simply, a connection can be made between any two arbitrary individuals among the entire planet’s human population in 6 hops or fewer. Essentially Jim knows Sally who knows Ben … who is best mates with the President of North Korea. Fascinating game when played that way round.
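In computing terms, “six degrees” is nothing more than a breadth-first search over a contact graph – which is exactly what these systems run at planetary scale. A toy version, using the invented chain from the paragraph above:

```python
# Breadth-first search: count the hops between two people in a contact graph.
from collections import deque

graph = {
    "jim": ["sally"],
    "sally": ["jim", "ben"],
    "ben": ["sally", "president"],
    "president": ["ben"],
}

def degrees_between(graph, start, goal):
    """Minimum hop count between two people, or None if unconnected."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, hops = queue.popleft()
        if person == goal:
            return hops
        for friend in graph.get(person, []):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, hops + 1))
    return None

print(degrees_between(graph, "jim", "president"))  # 3 hops: jim-sally-ben-president
```

Run over billions of harvested contact lists instead of four names, the same handful of lines connects everyone to everyone – including you to people you would run a mile from.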

Scarily harmful when used by idiots (sometimes given the name “AI machines”, sometimes known as “policemen”) – see the UK’s Operation Ore.

Start at Wikipedia and dig away until you find out what really went on … and, sad to say, still goes on to this day. On your journey take note of the 33 innocent men who were driven to suicide and the hundreds of others whose lives and livelihoods were ruined from the minute the Met Police broke their doors down in the middle of the night – men who lost their families, saw their children snatched into care, lost jobs, homes, professions or licences to work. They were largely blokes like you and me … innocent working men or professionals whose only “crime” had been to pay for an entirely innocent (I’m talking Popular Mechanics – not even Playboy) magazine subscription on-line.

In fact, just the sort of people you would expect to be ‘net-savvy and wish to read a broad range of international journals. Good upright citizens who never “did anything wrong” and would never wish to (metaphorically) harm a fly.

But bias and misrepresentation of data turned them all into paedophiles.


Of course, quite apart from the publicly judged and proven errors committed by the Met that directly led to all the suicides, broken families and ruined lives the biggest factor of all never gets mentioned.

Not a single Plod thought to apply a reasonableness test to the “Gold Mine” of data handed to them nor even question its provenance or reliability. After all, a computer had produced it so it must be right. Right?

No, WRONG! We’re almost back where we started.

The data had been found on the computer of a gang of crooks operating a money-skimming scam, and ONE idiot young Texas cop decided that, as the skim involved charging a small amount of money through an Internet portal gateway – on the other side of which lay some very dodgy web sites – all the names identified by the skimmed credit card numbers must belong to people searching for child pornography … therefore he had a list of paedophiles! And this is how a list of over 7,000 (mostly) entirely innocent British men was handed to the Metropolitan Police – as a list of more than 7,000 British paedophiles. The Met reacted as if all its Christmases had come at once – especially as the list contained numerous well-known public figures including entertainers, lawyers, medical professionals, politicians and even a few High Court Judges.

So … computer-generated data (in this case just stored credit card numbers) was turned into false information – not by a computer or an algorithm, but by one stoopid young Texas cop with a surplus of time and imagination and a complete absence of common sense.

Back to the AI and its profile building. Though you (I hope!) have not a single paedophile bone or thought in your body, the data brokers will happily include you in a list they will sell to anyone (government or blackmailer) who comes knocking, waving a wad of cash and asking for a list of paedophiles. More names on a list = more money in their coffers.

Doubt me for one moment and you really haven’t dug deep enough into Operation Ore.

There is more, much more. But as we head inevitably toward a Big Brother state it’s only going to get worse.

The only difference between the actions of the Nazis and the “opposite side” Stasi who followed them, and an “AI”, is that the AI reaches the wrong conclusions a million times faster.

But, hey, what’s the fuss about? The Americans have been doing this stuff for years. As long as we’re not beardy terrorists and aren’t engaged in criminal or anti-social activities we have nothing to fear. Right?

Er … what happens when someone changes the definition of “criminal” or “anti-social” (like the Chinese have done) or treats the haul of data (available to a filing clerk inside a town hall near you) in a similar manner to the trivial (by current standards) haul of “gold” handed to Operation Ore?

Did someone just mention a slippery slope?


We all do things that we consider private – things that we may take to the grave with us. It doesn’t matter if it’s as innocent as going to a specialist store to buy a present you don’t want the recipient to know about until the big day comes round or you phoned in sick and played a round of golf instead of doing the day’s work. It’s between you and your conscience.

Now, imagine a world without privacy.

How will your behaviour change when everything you do is being watched by somebody? Everything you do monitored and turned into a guesswork profile that, according to the prejudices of whoever looks at it, could brand you a criminal or sex offender – or just someone who isn’t going to get that job or promotion you want?

What will your life be like when your government follows the Chinese model and starts issuing “social scores” to rank your value as a citizen – in the process treating you as you might train a pet: someone who gets a bonus for doing or thinking whatever those in charge approve of, and faces stiff penalties for expressing the wrong view or just for being facially recognised because you happened to walk close to a random protest rally? How will you celebrate when your livelihood, home and family are taken away from you because of one drunken post on social media?

How will you feel when your actions and thoughts are constrained by whoever has control of the big surveillance machine and so gets to decide what you can think or express and what is deemed unacceptable?

The level of power and control offered by mass surveillance that robs everyone of their privacy is every politician’s wet dream come true. And – as is the nature of the beast – once started on the slippery slope, the addictive drug of control will inevitably lead to ever more stringent definitions of “right” and “wrong”. A political party wants to stay in power? Set the machine to reduce the social score of anybody who expresses a view not in line with the party’s thinking.

Fantasy? Wake up – it’s happening today in China – the world’s most populous country.

“I do nothing wrong so I have nothing to fear.”

If you still believe that then you deserve all that’s coming your way.
