Data for the People – Intro/Chapter 1

Data for the People by Andreas Weigend opens with a personal story of the Stasi, political prisoners, and the lengths to which a government once had to go to gather data on a single person. Weigend presents this backdrop as a counterpoint to today's world, where passive surveillance and data collection are ever-present and we exist as mere catalog entries, anonymized or not, in the databases of data refineries. Our data is taken and there is no escape; it is a grim reality. Weigend offers a different perspective: we must be willing to leverage our social data to gain value from it and to shift the balance of power back into our own hands.

Weigend identifies two key principles required to do this within our unbalanced legal structures. The first is transparency: the right to know about our data and its uses. The second is agency: the right to act on our data by creating data useful to us and accessing data on our own terms. Together these principles translate into six basic data actions that can change our current relationship with our data: accessing, inspecting, amending, blurring, experimenting with, and porting our data.

A central skill for rebalancing the data power structure is data literacy. We must be able to navigate the world as presented to us by data refineries such as Amazon, Acxiom, and Google. A basic understanding is required to derive meaningful value from our data and from the relational data computed from our digital traces. Data literacy begins with a sense of what is possible: distinguishing the plausible, the implausible, and the impossible uses of our data. Its techniques then give us nuanced ways to operate in the world of data refineries. First, we must understand whether our data can actually be deleted or whether it has already been diffused into aggregated data. Weigend also argues against finance and micro-transactions as a means of deriving value from our data, because individual payments would be orders of magnitude too small, the opt-in/opt-out choice presents a paradox, and the refineries' outputs are refined products derived from our raw data rather than the raw data itself.
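
To make the deletion-versus-diffusion point concrete, here is a minimal sketch (not from the book; the user names and numbers are invented) of why deletion loses its force once data has been aggregated: removing a user's raw record does not retract the influence that record already had on a published statistic.

```python
# Illustrative only: invented users and scores, not data from the book.
users = {"alice": 4.0, "bob": 2.0, "carol": 3.0}

# A refinery publishes an aggregate computed from raw data.
published_average = sum(users.values()) / len(users)
print(f"Published average: {published_average:.2f}")  # 3.00

# Alice later deletes her raw record ...
del users["alice"]

# ... but the already-published aggregate still reflects her data.
recomputed_average = sum(users.values()) / len(users)
print(f"Recomputed average: {recomputed_average:.2f}")   # 2.50
print(f"Published figure still embeds Alice: {published_average:.2f}")
```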

Data refineries refine our raw data and make tradeoffs between user time and user effort, balancing exploration and exploitation of that data; we should expect transparency into a refinery's settings and the agency to change them. This brings us to the next data literacy technique: we must understand that recommendations are merely likelihoods of what we might want, and we must be able to analyze the history, patterns, and trends behind a refinery's recommendations.
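
Weigend does not give an algorithm here, but the exploration/exploitation tradeoff he describes is commonly illustrated with an epsilon-greedy recommender: most of the time the system exploits the item with the best observed click rate, and occasionally it explores a random alternative. A minimal sketch, with invented item names:

```python
import random

# Illustrative epsilon-greedy recommender; item names are invented.
items = ["item_a", "item_b", "item_c"]
clicks = {i: 0 for i in items}  # observed clicks per item
shows = {i: 0 for i in items}   # times each item has been shown
EPSILON = 0.1                   # fraction of traffic spent exploring

def click_rate(item):
    """Observed click-through rate; 0.0 until the item has been shown."""
    return clicks[item] / shows[item] if shows[item] else 0.0

def recommend():
    """Explore a random item with probability EPSILON, else exploit."""
    if random.random() < EPSILON:
        return random.choice(items)        # explore
    return max(items, key=click_rate)      # exploit the best-known item

def record_feedback(item, clicked):
    """Update counts after the user responds to the recommendation."""
    shows[item] += 1
    clicks[item] += int(clicked)
```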

Understanding these recommendations requires users to grasp how data presents the past (description), how the past is extrapolated into the present and future (prediction), and how a desired outcome is created (prescription). The next data literacy technique is the ability to accurately identify the assumptions within descriptions, recognize the uncertainty in predictions, and understand the feedback required for prescriptions.
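
The book does not formalize these three stages, but a toy sketch (all numbers invented) can make the distinction concrete: description summarizes past data, prediction extrapolates it, and prescription recommends an action that steers the predicted outcome.

```python
# Toy weekly step counts (invented) to separate the three stages.
steps = [4200, 4500, 4800, 5100, 5400]

# Description: summarize what the past data says.
average = sum(steps) / len(steps)

# Prediction: extrapolate the observed trend one week ahead.
weekly_gain = (steps[-1] - steps[0]) / (len(steps) - 1)
predicted_next = steps[-1] + weekly_gain

# Prescription: recommend an action that changes the predicted outcome.
goal = 6000
shortfall = max(0, goal - predicted_next)
print(f"Average so far: {average:.0f} steps/week")
print(f"Predicted next week: {predicted_next:.0f} steps")
print(f"To hit {goal}, add about {shortfall:.0f} more steps next week")
```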

Finally, Weigend advises us to understand how we are experimented on via A/B tests designed to drive us toward a particular prescription. Here we are presented with a categorization of our social data: clicks, connections, and context.
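
As a hedged illustration of the kind of A/B test Weigend refers to (the variant names and rates below are invented, not from the book), a refinery randomly splits users into groups, shows each group a different variant, and compares conversion rates:

```python
import random

# Illustrative A/B test; variants and simulated rates are invented.
random.seed(0)

def assign_variant(user_id):
    """Deterministically split users into two groups by hashing their id."""
    return "A" if hash(user_id) % 2 == 0 else "B"

# Simulate outcomes: suppose variant B truly converts slightly better.
true_rate = {"A": 0.10, "B": 0.12}
conversions = {"A": 0, "B": 0}
visitors = {"A": 0, "B": 0}

for uid in range(10_000):
    v = assign_variant(uid)
    visitors[v] += 1
    conversions[v] += random.random() < true_rate[v]

for v in ("A", "B"):
    print(f"Variant {v}: {conversions[v] / visitors[v]:.3%} conversion")
```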
