The speaker's notes for a 5-minute talk given at the Data Justice Lab in Cardiff, June 7th 2019.
1. not transformation but intensification
The introduction of process automation and predictive analytics via machine learning
is not a transformation,
it's an intensification.
Machine learning and bureaucracy are both generalisable modes of rational ordering
based on abstraction and deriving authority from claims to neutrality and objectivity.
The justification for bureaucratic rationality is efficiency.
Machine learning adds inferential governance in the name of optimisation.
Efficiency is already questionable
as it's only calculable after a reductive rendering of its social objects,
while optimisation overrides social complexity through its objective function.
2. it's about risk instead of changing things
Seeing society as categories of actuarial risk
is to filter people's lives through the epistemology of insurance and instrumentalism.
Machine learning achieves its insights through discriminating between its classes
via an abstract distance in data space; it's a logic of statistical segregation.
Applied to social welfare it becomes calculative Victorianism,
assigning morality via metrics of 'deservingness'.
It's an ethics of triage via the computerisation of stigma.
Machine learning extends bureaucracy into the future;
or rather, it bureaucratises a probabilistic future and actualises it in the present.
Risk is remodelled as a dynamic phenomenon open to 'nudging',
yet correlations are not causation and tell us little about the best ways to intervene.
By only selecting for features that differentiate between individuals,
we are bracketing out the problems people have in common.
The goal is targeting instead of raising up whole populations.
3. the collateral damage
The collateral damage of this intensification includes
- an erosion of due process through opacity
- an amplification of thoughtlessness, in the sense that Hannah Arendt meant it
- the production of epistemic injustice, where calculations count more than testimony
- an asymmetric focus on those about whom civic data is already most plentiful
- the multiplication of categories that increase potential moments for administrative violence
and new opportunities for institutional gaming.
4. reforms are no solution
A human-in-the-loop is not a humanistic pushback
as that human is themselves subsumed by the institution-in-the-loop.
Meanwhile, in the private sector and across government,
ethics washing has become a form of institutional hydropower.
Privacy is hard to enforce when you've built a proxying machine
and data sharing is the dominant mode of value extraction,
so people are finally starting to call for regulation and law,
although this often seems to assume that society
is a level playing field that simply needs better fences.
In any case, more regulation
means more bureaucracy, or more machine learning to monitor it,
making the zweckrational, as Weber called it, recursive.
5. people's councils
People’s councils, on the other hand, are face-to-face democratic assemblies;
a horizontally organised refusal to be rendered as data dividuals.
They are a collective questioning
of the decisions that define the way the machines will make decisions,
by applying critical pedagogy and situated knowledge.
They constitute a different subjectivity -
iterative deliberation towards consensus, done right,
is an antidote to bureaucracy and to the calculative iterations of machine learning.
People's councils apply Bergson's critique of ready-made problems,
reversing statistical reductiveness through
a commitment to the possible over the probable.
Like Ivan Illich, they value a convivial technology
and are prepared to apply limits.
We need to develop a different order of ordering.
Instead of ways of organising that allow everyone to evade responsibility,
we need to reclaim our own agency through self-organisation.
Only under these conditions will we discover whether a re-imagined machine learning
can become a people's technology.
6. AI Realism
Mark Fisher coined the term Capitalist Realism
to describe the entrenched belief that despite the global financial crash
there is no alternative.
What we're seeing now is AI realism.
While the reform of AI is endlessly discussed,
there is no attempt to seriously question whether we should be using it at all.
But rather than a sci-fi future,
we are to be left behind in computationally-optimised deprivation.
We need to think collectively about ways out of this mess,
learning from and with each other rather than relying on machine learning,
countering thoughtlessness with practices of collective care.
We can't uninvent either AI or bureaucracy,
but we can choose to radically change both our modes of organisation
and our approach to computational learning.
7. agile populism
As a warning footnote,
the simplification of social problems to optimisation
based on reductive reasoning and innate characteristics
is the politics of populism.
What institutional machine learning risks creating by default is
a machine for the agile construction of populist targets.
AI realism is only one step from the analytics of the far right.