Is the Happiness Industry Creating Algorithmic Selves?

In a recent episode of the podcast “Thinking Allowed,” host Laurie Taylor covered two fascinating books: The Wellness Syndrome and The Happiness Industry. One author discussed a hedge fund that’s now managing what it calls “biorisk” by correlating traders’ eating, drinking, and sleeping habits with their earnings for the firm. Will Davies, author of The Happiness Industry, discussed less intrusive, but more pervasive, efforts to ensure that workers are fitter, happier, and therefore more productive. As he argues in the book,

[M]ood-tracking technologies, sentiment analysis algorithms and stress-busting meditation techniques are put to work in the service of certain political and economic interests. They are not simply gifted to us for our own Aristotelian flourishing. Positive psychology, which repeats the mantra that happiness is a personal ‘choice’, is as a result largely unable to provide the exit from consumerism and egocentricity that its gurus sense many people are seeking.

But this is only one element in the critique to be developed here. One of the ways in which happiness science operates ideologically is to present itself as radically new, ushering in a fresh start, through which the pains, politics and contradictions of the past can be overcome. In the early twenty-first century, the vehicle for this promise is the brain. ‘In the past, we had no clue about what made people happy – but now we know’, is how the offer is made. A hard science of subjective affect is available to us, which we would be crazy not to put to work via management, medicine, self-help, marketing and behaviour change policies.

The happiness industry thrives in a culture premised on an algorithmic model of the self. People (or “econs”) are seen as a bundle of inputs (data collection), algorithmic processes (data analysis), and outputs (data use). Since the demands of affect can only be extirpated in robots, the challenge for the happiness industry is to optimize some quantum of satisfaction for its human subjects, compatible with their maximum productivity. Objectively, the algorithmic self is no more (nor less) than the goods and services it uses and creates; subjectively, it strives to convert inputs of resources into outputs of joy, contentment, or whatever positive affect you care to name. As “human resources,” it is simply raw material to be deployed to its most profitable use.
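To make the metaphor concrete, here is a minimal, purely illustrative sketch in Python. Nothing in it comes from Davies’s book; every variable, coefficient, and threshold is hypothetical. It simply writes down the logic of the paragraph above: collect inputs about a person, run them through a formula, and pick whichever configuration maximizes productivity while keeping measured “satisfaction” above a floor.

    # Purely illustrative: the "algorithmic self" as inputs -> process -> outputs.
    # Every name, coefficient, and threshold below is hypothetical.
    from dataclasses import dataclass


    @dataclass
    class AlgorithmicSelf:
        sleep_hours: float         # inputs: data collected about the person
        meditation_minutes: float

        def outputs(self):
            # process: a made-up formula converting inputs into scores
            satisfaction = 0.6 * self.sleep_hours + 0.02 * self.meditation_minutes
            productivity = 1.2 * self.sleep_hours - 0.01 * self.meditation_minutes
            return satisfaction, productivity


    # The employer's problem as framed above: choose the configuration that
    # maximizes productivity while keeping measured satisfaction above a floor.
    candidates = [AlgorithmicSelf(s, m) for s in (6, 7, 8) for m in (0, 20, 40)]
    feasible = [c for c in candidates if c.outputs()[0] >= 4.5]
    print(max(feasible, key=lambda c: c.outputs()[1]))

The point of the critique, of course, is that people are not reducible to such a formula; writing the model down only shows how thin it is.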

Audit culture, quantification (e.g., the quantified self), commensuration, and cost-benefit analysis all reflect and reinforce algorithmic selfhood. Both the Templeton Foundation and the Social Brain Centre in Britain are developing some intriguingly countercultural alternatives to big-data-driven behaviorism. Davies deserves great credit for highlighting the need for such alternatives, and for exposing the political economy behind corporate appropriations of positive psychology.

 

