DiEM25 Official Forums

 
User avatar
aral
Topic Author
Posts: 38
Joined: 19 Jun 2017 17:20

Introducing the problem

17 Jul 2017 06:38

A democratic, sovereign Europe requires democratic, sovereign technology. We must move beyond Surveillance Capitalism to create a decentralised, interoperable, free and open, sustainable Internet of People to guarantee individual sovereignty and a healthy commons for EU citizens.

The neoliberal Silicon Valley model of mainstream digital technology is founded on the interplay between traditional capitalism and surveillance. We call this model Surveillance Capitalism. In this model, huge amounts of venture capital are used to fund startups that share a very specific business model: the surveillance-based monetisation and exploitation of the general public. 

Under Surveillance Capitalism, accrued capital funds systems that, in turn, result in the accrual of information (data) and insight within the same narrow set of hands. This subsequently leads to an even greater accrual of capital in those same hands – a feedback loop that accelerates with the exponential rate of change of technology.

While most venture-capital-funded startups fail, and others are swallowed up by existing, larger players, the few that succeed become the platform monopolies that own and control the digital infrastructure of modern life. They become the Googles, the Facebooks, and the Palantirs of the world.

Surveillance Capitalism has left us with a world of systemic inequality in which eight men – four of whom are founders of four of the largest of these technology companies: Microsoft, Facebook, Amazon, and Oracle – have between them as much wealth as the poorer half of the world’s population combined (3.5 billion people). Needless to say, systemic inequality isn’t sustainable. Nor is this feudalistic system compatible with human rights, human welfare, or democracy. Furthermore, the imperialistic ambitions of Silicon Valley are a threat not only to our individual sovereignty as human beings but also to the sovereignty of the nations within Europe and of the EU as a whole.

We call the business model of surveillance-based monetisation of the general public people farming. When a people farmer like Google or Facebook designs its products, it does so for two very different audiences – their users and their customers – and with two very different sets of goals.

The goal of a people farmer is to attract, addict, farm, exploit, and manipulate their users for the profit and political motives of both itself and its customers. The users usually receive the machinery that farms them for free (or at a subsidised price) whereas the customers are the entities – usually corporate – that actually pay the people farmer for its services. In this model, the general public is reduced to the role of livestock. If this is truly the Fourth Industrial Revolution, then human beings are today what trees were in the first and second industrial revolutions: raw materials to be exploited.

We must stress, again, that any system that treats human beings as natural resources to be mined and farmed is incompatible with human rights, human welfare, and democracy. As the United States and much of the rest of the world have accepted this status quo as a fait accompli, it is up to us, starting in Europe, to resist, contain, and create an ethical alternative to this toxic Silicon Valley model of human exploitation. Instead of an Internet of Things, we must create an Internet of People: decentralised, interoperable, free and open, sustainable technological infrastructure that affords the citizens of Europe (and beyond) individual sovereignty and a healthy digital commons.

We cannot democratise Europe without democratic technological infrastructure. In the words of Audre Lorde, ‘the master’s tools will never dismantle the master’s house’. It is precisely due to this inextricable relationship between the topology of society and the topology of technology in the digital age that this progressive technology policy is the 7th pillar of the DiEM25 European New Deal as well as a cross-cutting concern across the other six pillars.
 
SebastianEis
Posts: 16
Joined: 09 Apr 2017 12:37
Location: Berlin

Re: Introducing the problem

17 Jul 2017 22:37

Thanx a lot Aral for your concise texts and for your recommendations! I go along with you on most points, but I have a question regarding the metaphor of "people farming" and the comparison to the exploitation of trees in the industrial age. How are my data exploited and what is done with them exactly? I think we have to be very precise and concrete on that to be able to convince people of the importance of this question. As you know, most people just don't care what happens to their data. Why is that? Considering that, I think it is not helpful to use drastic metaphors that most people can't relate to without showing in detail how, and for what purposes, the data are used.
I know a little bit of how recommender systems work and how they use Big Data to get to more personalized recommendations and advertising. Another point would be how bots try to influence decisions in elections. Are there more examples?
I hope I have made my question clear.
Thanx again and carpe DiEM!
Sebastian
 
User avatar
aral
Topic Author
Posts: 38
Joined: 19 Jun 2017 17:20

Re: Introducing the problem

23 Jul 2017 10:02

Hey Sebastian,

SebastianEis wrote:
I have a question regarding the metaphor of "people farming" and the comparison to the exploitation of trees in the industrial age. How are my data exploited and what is done with them exactly?

The metaphor of people farming isn’t so much a metaphor as a description of the core business model of almost all mainstream technology that is funded by venture capital and follows the Silicon Valley model. These businesses are literally based on collecting as much data about you as possible. In other words, farming you for data. In order to do this, they need to attract you to be farmed willingly. This is where the features of their apps come into play. They are designed to be useful to you (Gmail provides you with “free” email) so that you will volunteer information about yourself.

Of course, even this isn’t the whole picture – these same companies also purchase information about you from all sorts of data brokers and even from your own municipality (see Amsterdam giving real-time licence-plate information to Google in exchange for a “free” service that shows people whether there are free parking spaces in town) and your government (see Italy selling IBM the full medical records of its population).

These services are also designed to be addictive (see the bestselling book by Nir Eyal called Hooked: How to Build Habit-Forming Products – this is not a warning, it’s an instruction manual) so that you can be farmed as robustly as possible. And, again, this is no longer necessarily a choice either, with workplaces – for example – setting the use of fitness trackers as a prerequisite for insurance coverage.

As for what all this data can be (and is) used for, there is – of course – the default capitalist answer: it is used to exploit your behaviour to benefit the bottom lines of these corporations. While this is true, it is only half of the answer. These multibillion-dollar corporations are not politically neutral – they have political needs. Most specifically, they need to be free of all regulation. So they can and do use their intimate insight into your behaviour to influence your political decisions. One firm, Cambridge Analytica, has been hugely successful in influencing the outcomes of both Brexit and the US presidential election. Carole Cadwalladr has written in-depth and informative articles in The Guardian on that specific matter.

SebastianEis wrote:
I think we have to be very precise and concrete on that to be able to convince people of the importance of this question. As you know, most people just don't care what happens to their data. Why is that?

A number of reasons:

1. It’s not that they don’t care; they are being lied to with billion-dollar budgets. When Google and Facebook engage in their PR and marketing efforts, they don’t do so to inform the public about how they actually make money. They focus on the features they provide – on the illusion of the interface – not the reality of their business model.

2. It’s a position of privilege: gay people in countries where they could be deprived of their freedom (or their lives) for being gay do care that Facebook knows they’re gay from their likes alone (and that this insight could be accessed by their government). It’s usually middle-class white people in the West who “don’t care” and “have nothing to hide”. At least, that is, until they next apply for insurance and their premiums are sky-high because their smart fridge has been snitching on their dietary habits.

SebastianEis wrote:
Considering that, I think it is not helpful to use drastic metaphors that most people can't relate to


Again, “people farming” is a descriptive label. It might have been considered drastic when I first started speaking about it four years ago, but today it has been adopted as the editorial stance of a major mainstream newspaper (The Guardian) towards the Silicon Valley data giants. So we’re beyond that.

That said…

SebastianEis wrote:
without showing in detail how, and for what purposes, the data are used.


This is hugely important and we must, of course, document those instances where we have this knowledge.

Unfortunately, one of our challenges is that we don’t know the full extent to which the data are used, because the algorithms of companies like Google and Facebook are secret. This is why one of our core policy areas must be to demand algorithmic transparency. (Read Weapons of Math Destruction by Cathy O’Neil for many real-world examples of algorithmic bias.) All we know today is that they can use the data for any purpose they can conceive of, both today and in the future.

SebastianEis wrote:
Thanx again and carpe DiEM!


Thank you, Sebastian – this is exactly the sort of feedback we need. I hope I’ve been able to expand a little on the subject and we will continue to do so both here and in the policy document itself.

Carpe DiEM :)
 
User avatar
AlistairConnor
Posts: 4
Joined: 01 Aug 2016 15:25
Location: Lyon (Rhone Alpes Auvergne)
Contact:

Re: Introducing the problem

26 Jul 2017 11:19

Hi Sebastian
Personally, I'm finding that people (ordinary, non-tech, even non-political) are starting to feel concerned about privacy and the internet. The key meme – "if it's free, then you are the product" – really resonates with many. Many others will respond to a bit of prompting and formulate latent concerns that they already have.
A useful analogy is retailers' loyalty cards. These are like physical cookies: they track your buying behaviour, and they often earn you junk mail in your letterbox. But they are also supposed to win you certain rewards. And unlike a cookie, you are aware that you have one, and you chose to have it.

Sure, there are people who absolutely don't give a thought to the privacy/data issue. Others have thought about it, and don't care. But most people have thought about it, more or less, and decided, at least implicitly, that they are prepared to accept giving away their data in order to get free stuff.

Our policy has to point out that there is no such thing as a free lunch. If we want these services that we can no longer live without, how are we going to pay for them?
It has to be about changing the economic model so that these services can be provided as public infrastructure, paid for by our taxes (and by taxes on the big Silicon Valley outfits – that would be fitting).