Algorithmic Humanitarianism

The Manifesto on Algorithmic Humanitarianism was presented at the symposium on 'Reimagining Digital Humanitarianism', Goldsmiths, University of London, Feb 16th 2018.

Download at SocArXiv

intro

  1. humanitarian organisations will adopt ai because it seems able to answer questions at the heart of humanitarianism
  2. such as 'who should we save?' and 'how can we be effective at scale?'
  3. it resonates strongly with existing modes of humanitarian thinking and doing
  4. in particular the principles of neutrality and universality

  5. the way machine learning consumes big data and produces predictions

  6. suggests it can both grasp the enormity of the humanitarian challenge and provide a data-driven response
  7. but the nature of machine learning operations means they will actually deepen some humanitarian problematics
  8. and introduce new ones of their own
  9. thinking about how to avoid this raises wider questions about emancipatory technics
  10. and what else needs to be in place to produce machine learning for the people

maths

  1. there is no intelligence in artificial intelligence
  2. nor does it really learn, even though its technical name is machine learning
  3. it is simply mathematical minimisation
  4. like at school, fitting a straight line to a set of points (a minimal sketch follows this list)

  5. you pick the line that minimises the differences overall

  6. machine learning does the same for complex patterns
  7. it fits input features to known outcomes by minimising a cost function
  8. the fit is a model that can be applied to new data to predict the outcome
  9. the most influential class of machine learning algorithms are neural networks
  10. which is what startups call 'deep learning'
  11. they use backpropagation: a minimisation algorithm that produces weights in different layers of neurons
  12. anything that can be reduced to numbers and tagged with an outcome can be used to train a model
  13. the equations don't know or care if the numbers represent amazon sales or earthquake victims
  14. this banality of machine learning is also its power
  15. it's a generalised numerical compression of questions that matter
  16. there is no comprehension within the computation
  17. the patterns are correlation not causation
  18. the only intelligence comes in the same sense as military intelligence; that is, targeting
  19. but models produced by machine learning can be hard to reverse into human reasoning
  20. why did it pick this person as a bad parole risk? what does that pattern of weights in the 3rd layer represent? we can't necessarily say.
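
a minimal sketch, with invented numbers rather than anything real, of what this minimisation means in practice: fitting a straight line to a handful of points by gradient descent on a squared-error cost

```python
# toy illustration with invented numbers: 'learning' a line y = w*x + b
# by repeatedly nudging w and b downhill on a squared-error cost function
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.3, 6.2, 8.1, 9.9]   # known outcomes, roughly y = 2x

w, b = 0.0, 0.0                  # start from an arbitrary line
lr = 0.01                        # step size for the descent

for step in range(2000):
    # gradients of the mean squared error with respect to w and b
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * dw
    b -= lr * db

print(f"fitted model: y = {w:.2f}x + {b:.2f}")   # close to y = 2x
```

scaled up to millions of weights arranged in layers, the same descent on a cost function is all that 'learning' names here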

reasoning

  1. machine learning doesn't just make decisions without giving reasons, it modifies our very idea of reason
  2. that is, it changes what is knowable and what is understood as real
  3. it operationalises the two-world metaphysics of neoplatonism
  4. that behind the world of the sensible is the world of the form or the idea.
  5. a belief in a hidden layer of reality which is ontologically superior,
  6. expressed mathematically and apprehended by going against direct experience.
  7. machine learning is not just a method but a machinic philosophy

  8. what might this mean for the future field of humanitarian ai?

  9. it makes machine learning prone to what miranda fricker calls epistemic injustice
  10. she meant the social prejudice that undermines a speaker's word
  11. but in this case it's the calculations of data science that can end up counting more than testimony
  12. the production of opaque predictions with calculative authority
  13. will deepen the self-referential nature of the humanitarian field
  14. while providing a gloss of grounded and testable interventions
  15. testing against unused data will produce hard numbers for accuracy and error
  16. while making the reasoning behind them inaccessible to debate or questioning
  17. using neural networks will align with the output driven focus of the logframe
  18. while deepening the disconnect between outputs and wider values
  19. hannah arendt said many years ago that cycles of social reproduction have the character of automatism.
  20. the general threat of ai, in humanitarianism and elsewhere, is not the substitution of humans by machines but the computational extension of existing social automatism

production

  1. of course the humanitarian field is not naive about the perils of datafication
  2. we all know machine learning could propagate discrimination because it learns from social data
  3. humanitarian institutions will be more careful than most to ensure all possible safeguards against biased training data
  4. but the deeper effect of machine learning is to produce new subjects and to act on them
  5. machine learning is performative, in the sense that reiterative statements produce the phenomena they regulate

  6. humanitarian ai will optimise the impact of limited resources applied to nearly limitless need

  7. by constructing populations that fit the needs of humanitarian organisations
  8. this is machine learning as biopower
  9. its predictive power will hold out the promise of saving lives
  10. producing a shift to preemption
  11. but this is effect without cause
  12. the foreclosure of futures on the basis of correlation rather than causation
  13. it constructs risk in the same way that twitter determines trending topics
  14. the result will be algorithmic states of exception
  15. according to agamben, the signature of a state of exception is ‘force-of’
  16. actions that have the force of law even when not of the law
  17. logistic regression and neural networks generate mathematical boundaries (a toy sketch follows this list)
  18. but cybernetic exclusions will have effective force by allocating and withholding resources
  19. a process that can't be humanised by having a humanitarian-in-the-loop
  20. because it is already a technics, a co-constituting of the human and the technical
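
a toy sketch of that boundary-drawing, with invented numbers and a deliberately crude rule: a logistic regression fitted to past outcomes then allocates or withholds for new cases purely on the score it produces

```python
import math

# invented data: one 'risk score' per case, and whether aid was allocated before
xs = [0.2, 0.5, 1.1, 1.8, 2.5, 3.0, 3.6, 4.2]
ys = [0, 0, 0, 0, 1, 1, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b, lr = 0.0, 0.0, 0.1
for _ in range(5000):                 # minimise the log-loss cost function
    for x, y in zip(xs, ys):
        p = sigmoid(w * x + b)
        w -= lr * (p - y) * x
        b -= lr * (p - y)

# the fitted boundary now has 'force-of' for anyone it is applied to:
# above 0.5 the resource is allocated, below 0.5 it is withheld
for x in [1.5, 2.2, 3.1]:
    p = sigmoid(w * x + b)
    print(x, "allocate" if p > 0.5 else "withhold", round(p, 2))
```

the point is not that anyone would deploy something this crude, but that the decision is nothing more than a weighted sum pushed through a curve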

decolonial

  1. the capture, model and preempt cycle of machine learning will amplify the colonial aspects of humanitarianism
  2. unless we can develop a decolonial approach to its assertions of objectivity, neutrality and universality
  3. we can look to standpoint theory, a feminist and post-colonial approach to science
  4. which suggests that positions of social and political disadvantage can become sites of analytical advantage
  5. this is where our thinking about machine learning & ai should start from

  6. but i don't mean by soliciting feedback from humanitarian beneficiaries

  7. participation and feedback is already a form of socialising subjects
  8. and with algorithmic humanitarianism every client interaction will be subsumed into training data
  9. they used to say 'if the product is free, you are the product'
  10. but now, if the product is free, you are the training data
  11. training for humanitarian ai and for the wider cybernetic governance of resilient populations
  12. machine learning can break out of this spiral through situated knowledge
  13. as proposed by donna haraway as a counterweight to the scientific ‘view from nowhere’,
  14. a situated approach whose commitment to a particular context is not optional
  15. how does machine learning look from the standpoint of Haiti's post-earthquake rubble, or from an IDP camp?
  16. no refugee in a freezing factory near the serbian border with croatia is going to be signing up for andrew ng's mooc on machine learning any time soon
  17. how can democratic technics be grounded in the humanitarian context?

people's councils

  1. it may seem obvious that if machine learning can optimise ocado deliveries then it can help with humanitarian aid
  2. but the politics of machine learning are processes operating at the level of the pre-social
  3. one way to counter this is through popular assemblies and people's councils

  4. bottom-up, confederated structures that implement direct democracy

  5. replacing the absence of a subject in the algorithms with face-to-face presence
  6. contesting the opacity of parallel computation with open argument
  7. and the environmentality of algorithms with direct action
  8. the role of people's councils is not to debate for its own sake
  9. but to create alternative structures, in the spirit of gustav landauer's structural renewal
  10. an emancipatory technics is one that co-constitutes active agents and their infrastructures
  11. as Landauer said, people must 'grow into a framework, a sense of belonging, a body with countless organs and sections'
  12. as evidenced in calais, where people collectively organised warehouse space, van deliveries and cauldrons to cook for hundreds, while regularly tasting tear gas
  13. i suggest that solidarity is an ontological category prior to subject formation
  14. collective activity is the line of flight from a technological capture that extends market relations to our intentions
  15. it is a politics of becoming - a means without end to counter ai's effect without cause

close

  1. in conclusion
  2. as things stand, machine learning and so-called ai will not be any kind of salvation for humanitarianism
  3. but will deepen the neocolonial and neoliberal dynamics of humanitarian institutions
  4. but no apparatus is a closed teleological system; the impact of machine learning is contingent and can be changed
  5. it's not a question of people versus machines but of a humanitarian technics of mutual aid
  6. in my opinion this requires a rupture with current instantiations of machine learning
  7. a break with the established order of things of the kind that badiou refers to as an event
  8. the unpredictable point of excess that makes a new truth discernible

  9. and constitutes the subjects that can pursue that new truth procedure

  10. the prerequisites will be to have a standpoint, to be situated, and to be committed
  11. it will be as different to the operations of google as the balkan aid convoys of the 1990s were to the work of the icrc
  12. on the other hand, if an alternative technics is not mobilised,
  13. the next generation of humanitarian scandals will be driven by ai

Citizen Science as Political Consciousness

What is the relationship between citizen science and change? Engaging communities in the practice of science, especially through crowdsourcing at scale, is seen as a step towards tackling problems like global warming. The narrative is that collective measurement leads to collective awareness, which leads to collective action. However, the scientistic culture encouraged in most citizen science undermines the political determination needed to make changes in the world. It would be more productive to see citizen science as also a practice of political consciousness.

The act of measurement doesn't change anything by itself. We can ask whether it increases the agency of the actors making the measurements, in a way that carries over to change-making. One problem is that many citizen science projects treat participants as simple data collectors, with the broader project being defined by professional scientists. This clearly doesn't empower the participants. However, even citizen science projects which are genuinely bottom up and community oriented are prone to being assimilated by the status quo. The bigger problem here is the character of science itself.

In our societies, science has become part of what Foucault would call 'a regime of truth'. Truths are not things that sit outside the world but are produced within the historical process. They are shaped by the distribution of power in the society that produces them, and in turn help to channel how power can be exercised.

"Truth is a thing of this world: it is produced only by virtue of multiple forms of constraint. And it induces regular effects of power. Each society has its regime of truth, its “general politics” of truth: that is, the types of discourse which it accepts and makes function as true; the mechanisms and instances which enable one to distinguish true and false statements, the means by which each is sanctioned; the techniques and procedures accorded value in the acquisition of truth; the status of those who are charged with saying what counts as true". Michel Foucault, Truth and Power

Of course, what makes science particular is that it is not wholly determined by social discourse or political prejudice. It is a set of empirical practices deployed against a material world which has regular and reproducible patterns. Many scientists would see this as sufficient to make science a neutral truth machine. However, it is well understood by those viewing science from a feminist or post-colonial perspective that this hasn't saved science from being enrolled in the dominant world view.

Yes, the scientific method provides a mechanism for guarding against individual whim. But in the choice of questions it asks, and the metaphors it deploys to understand the world, it is not immune to political and cultural influence. Scientific culture encourages a questioning, empirical approach to the world but is at the same time part of what Gramsci called the 'cultural hegemony'. That is, the set of beliefs, perceptions, and values that lead us to experience the fundamental tenets of the status quo as natural and inevitable. If citizen science is to make an authentic break with this, it needs to incorporate a political consciousness.

The concept of political consciousness starts from the idea that our understanding of power, wealth and self are shaped by ideological forces which hinder a full understanding of the forces at work in society. To open up the possibility of change means no longer perceiving these beliefs as natural or inevitable but as social constructs that must be investigated to reveal their role in perpetuating forms of domination. As a means of cracking open the cultural hegemony, it has been vital to feminism and the black liberation movements.

I think that citizen science can be seen to act in a similar way. By investigating the state of the world around us, it can be part of revealing that seemingly natural and inevitable parts of our lives are in fact constructed by wider processes and can therefore be reconstructed. Take, for example, community-driven investigations of air quality. The process of measuring and understanding the presence of particulate matter connects us with the material-social politics of the air. We develop an understanding of why the different forms of PM2.5 are present, what effects they are having on us, how they are connected to the wider political economy, and what it would take to change that.

The problem is that this consciousness can be hindered by science as much as it is enabled by it. The dominant tendency is for citizen science to define itself and justify itself in relation to mainstream science. Yes, citizen science is happy to claim the mantle of community participation, but it sees orthodox science as the final arbiter of the 'mechanisms which enable one to distinguish true and false statements'. This hobbles the ability of citizen science to also be a political consciousness because it carries the assumption that existing science is both neutral and objective. The fear in citizen science, I believe, is that challenging this would betray the scientific aspect of the enterprise and turn citizen science into just another form of political advocacy.

Luckily, there are already positions to adopt that promote empirical investigation without forcing an allegiance to the cultural hegemony masked by science's claim to absolute neutrality. These have been articulated by people like Sandra Harding under the heading of 'standpoint epistemology'. We can also draw on the writings of Donna Haraway, who contests the idea that authentic objectivity comes from a process of disembodiment, of removing the perspective of those doing the measuring. They would argue that objectivity comes from acknowledging the position from which things are measured.

This is not social constructivism, it is not claiming that a specific empirical measurement would be different for two people. It is not denying the existence of an intransigent material reality. Rather it is clarifying that all attempts to bring forth wider meanings cannot be separated from the embodied perspective of the actors constructing those meanings. Standpoint epistemology suggests positions of political disadvantage can be turned into sites of analytical advantage because they can critique the hegemonic assumptions missed by prevailing forms of objectivity.

If citizen science is going to lead to changes in the world it needs to step out of the shadow of mainstream science and affirm its potential as both science and political consciousness. Science long ago dropped any claim to have a say in how society should be ordered. It outsourced this to politics in exchange for an elevated status in society, as a producer of pure truth in an otherwise impure world. Unfortunately, like most cases of 'not taking sides' this had the net effect of siding with the powerful. Citizen science can rightfully celebrate a practice of science that takes sides. It's not about overthrowing science as such, but practising science with a standpoint.

"Its not a matter of emancipating truth from every system of power (which would be a chimera, for truth is already power), but of detaching the power of truth from the forms of hegemony, social, economic and cultural within which it operates at the present time." Michel Foucault, Truth and Power

"A standpoint is not the same as a viewpoint or a perspective, for it requires both science and a political struggle." Sandra Harding, Is Science Multicultural?: Postcolonialisms, Feminisms, and Epistemologies


Science for Change Kosovo Year 1

[This reflection on the first year of Science for Change Kosovo was commissioned by Datashift and released under a CC-BY-SA license.]

Science for Change Kosovo (SfCK) is a radical citizen science project. I will try to explain what our kind of citizen science is, why it's important, and why we decided to do citizen science in Kosovo. I will also talk about our preliminary results, our plans for the next year or so, and what I think the wider implications are for data, communities and democracy.

what is citizen science?

There are many forms of citizen science; in most, the participants are simply collecting data or completing tasks for an experiment designed by and for scientists. I say that we're a radical project because we believe people from the communities should be involved at every stage, from framing the research questions to designing the data collection, analysing the data and interpreting the results. This makes us less like mainstream citizen science and more like the Public Lab's idea of civic science (‘Public Lab’). We are also inspired by the idea of environmental justice; the recognition that the impact of pollution is often worse for people who are already disadvantaged by society, which goes with a commitment to supporting them to do something about it. An environmental justice project which acts as a model for Science for Change Kosovo is Global Community Monitor (‘Global Community Monitor’), who train and support disempowered "fenceline" communities harmed by serious air pollution from industrial sources and whose concerns are ignored by agencies and the corporations responsible. Our ethos, like Community Based Auditing (Tattersall) in Tasmania, is to be an experiential way for citizens to undertake their own disciplined inquiry into environmental issues affecting them, so that they can assert their rights and obligations as generators of valid knowledge and as agents of change.

why does it matter?

But why should anyone take any notice of bottom-up citizen science? How can it compete in any way with the sophisticated equipment of professional scientists, not to mention their years of training? In practical terms, citizen science can fill critical gaps in knowledge. Official air quality data is often sparse, coming from a limited number of fixed monitoring sites, and has to use mathematical models to fill in the gaps. Statutory data gathering is all about averages; it can't get down to the level of everyday lives and doesn't record the variable exposures of different people, the effects of their choices, or the related impacts on their health. Institutional science also lacks citizen participation and accountability and struggles to work with local knowledge and soft data. By relying on objectivity and distance it loses out on useful insights; a stance which gave science its authority years ago but which increasingly leads to mistrust. At a higher level, the concept of post-normal science (Funtowicz and Ravetz) suggests that orthodox scientific method is badly adapted to situations that combine high risk with high uncertainty (such as climate change) and proposes it be enhanced by an 'extended peer review' that includes all those affected by an issue. As we'll see below, Science for Change Kosovo has already started to fill some of the local gaps in data, and is trying to put into practice the idea of a participatory peer review.

why Kosovo?

Why, though, did we decide to try citizen science in Kosovo? There are certainly easier places to start, especially in terms of available resources, and Kosovo is faced by many other equally pressing challenges. One reason is that we were already in Kosovo, doing social innovation hackathons with young people based on the Social Innovation Camp (‘Social Innovation Camp Kosovo’) model. These camps took back-of-the-envelope ideas for digital social change and turned them into working prototypes over the course of a weekend. The camps were hosted by the UNICEF Innovations Lab, the young participants came from the local Peer Educators Network, and many of the coders were part of Free Libre Open Source Software Kosovo. To understand Science for Change Kosovo it's important to know that we had already built credibility on the ground, brought young people into contact with social empowerment through DIY tools, and had working partnerships with these key local groups.

Another reason for starting citizen science in Kosovo is that it's a very polluted country. The ageing lignite power plants are a major source of NO2 (nitrogen dioxide), SO2 (sulphur dioxide) and particulates (dust), and one of the power stations blew up just as we started the project (Bytyci). Some daily pollutant levels exceed EU and World Health Organisation limits by several times. Kosovo's own environmental protection agency says that current data on air quality levels is poor and incomplete, and there's a lack of capacity for environmental protection at a local level. In a poor country, it's hard even for statutory agencies to get the budget for maintenance and training, and all agencies have to deal with basic problems like power cuts. Air pollution in Kosovo causes hundreds of premature deaths and thousands of emergency hospital visits each year due to respiratory tract infections. Among the countries of the region, Kosovo has the worst health outcomes, in some cases dramatically, on indicators such as life expectancy, maternal death rates or infant and child mortality.

The constitution recognizes environmental protection as one of the principles on which the Republic of Kosovo is based. The Law on Air Protection (no. 2004/30) assigns responsibility for air quality and emissions indicators and sets obligations for protection. So there's a legal framework for accountability which we work with. Kosovo also has a long-standing political aspiration to join the EU; making substantial efforts to tackle pollution will be a condition for accession. Signing up to EU principles also means signing up to measures like the Aarhus Convention which sets out rights to environmental information and to participate in environmental decisions.

The focus of Science for Change Kosovo is young people. Although half the population is under 25, their current participation in decision-making at all levels is limited. The project appeals to those young people who are hungry for change and who are enthusiastic about the potential for participatory tech innovation. Because of our environmental justice principles we are also trying to involve marginalised communities, which in Kosovo includes Roma, Ashkali and Egyptians. There's a Roma community living in Plemetina, right at the base of the Kosova A and B power stations which are the core of Kosovo's air pollution, and one of our key contacts for year 1 of Science for Change was a Roma youth activist living in this community.

year 1

Our project began in June 2014 with a weekend co-design event at the UNICEF Innovations Lab in Prishtina. Participants included young people from several parts of Kosova that had experienced severe environmental issues, including Plemetina (under the polluting power stations), Prishtina (the capital city, downwind of the power stations and with heavy traffic pollution) and Drenas (near the Ferronickel plant). The participants shared experiences of pollution and quizzed experts on an environmental health panel. There were sessions on methods for air quality measurement, such as diffusion tubes, and we had a member of the Smart Citizen Kit team who introduced their Arduino-based citizen sensing device (‘Smart Citizen’) and trained the young people on how to install it and how to connect it to the online data platform. We discussed the way that science doesn't always have the kind of certainty about environmental impacts that it claims in public, that there are a lot of disagreements inside science and a lot of arguments about what data is valid and what isn't. Participants planned for campaigning, drawing on global examples like the Arab uprisings and local examples like the student direct action that removed a corrupt Rector from the University of Prishtina. By the end of the weekend the action groups had agreed on a plan for air quality monitoring in Prishtina, Plemetina and Drenas.

One key thing we'd understood from researching other DIY air quality sensing projects is the importance of calibration. While sounding like a pretty unexciting notion, this is one of the main discontinuities between the smoothness of data projects and the awkwardness of material reality. So much of what passes as data journalism and data visualisation takes its data from existing sources, whereas a citizen science project is generating data from interactions with the physical world. And if this data is to be at all meaningful there should be a way to tie it to the material; to set a baseline that's been verified in the lab and in the field. Although the Smart Citizen Kits represented the positive maker-movement trend to open hardware sensing, they were a concern for us because they came without any calibration. So we decided to co-locate our kits with diffusion tubes, and use the fact that both measured NO2 to calibrate SCK data against the tube analysis from the lab.
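
A minimal sketch of what that co-location calibration amounts to, using invented numbers rather than our field data: fit a linear correction from averaged raw SCK NO2 readings to the lab-analysed tube values, then apply it to readings from sites without a tube.

```python
import numpy as np

# invented example values, not our field data: monthly-averaged raw SCK NO2
# signal at each co-location site, alongside the lab result for the diffusion
# tube hanging next to the kit (µg/m3)
sck_raw  = np.array([112.0, 145.0, 160.0, 190.0, 230.0])
tube_no2 = np.array([28.0, 35.0, 39.0, 47.0, 58.0])

# least-squares fit of tube = a * raw + b gives a simple calibration line
a, b = np.polyfit(sck_raw, tube_no2, 1)
print(f"calibration: NO2 ≈ {a:.3f} * raw + {b:.1f}")

# apply the correction to an averaged reading from a site with no tube
new_raw = 175.0
print(f"estimated NO2: {a * new_raw + b:.1f} µg/m3")
```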

The field mobilisation of Science for Change Kosovo was, frankly, impressive. Kosovo can be a very frustrating place, where post-Communist institutional inefficiency overlaps with entrepreneurial corruption, and it's hard to get things done if you don't know the right people. By contrast, the young people in the project self-organised with the help of the UNICEF lab, installing Smart Citizen Kits in houses in each location (‘Deployment of Smart Citizen Kits’) and installing and collecting diffusion tubes (‘Deployment of Tubes in Drenas’) over three monthly cycles of data gathering. This included a large group of young Roma who installed and monitored the collection devices in Plemetina (‘Deployment of Tubes in Plemetina’).

Unfortunately, the data from the Smart Citizen Kits was difficult to use. While the datasheets for the sensors on the boards showed a linear log-log correlation between gas levels and the signal, in practice there were severe spikes in the data which blew holes in our ability to compare an averaged reading with the tubes. This was a real shame because the kits were our route to live data; emitting sensor readings every second, they were connected to the net and held out the prospect of a live pollution map, not to mention ideas around live campaigning (e.g. triggering tweets to members of parliament each time the pollution exceeded EU levels). On the other hand we had significant readings from some of the NO2 tubes in the capital city, Prishtina. Through the dedicated field work of the volunteers, it looks like we've identified local hotspots which were missed by the government's data and which exceed statutory limits by a large margin. We will be following these up in the next phase of data gathering.
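
For illustration, the kind of clean-up the spiky kit data would have needed looks roughly like this (the readings are invented): discard values more than a few median absolute deviations from the median before averaging, so that a handful of spikes can't dominate the comparison with the tubes.

```python
import numpy as np

# invented one-second readings with the kind of spikes that wrecked our averages
readings = np.array([41, 43, 40, 42, 900, 44, 41, 39, 850, 42], dtype=float)

median = np.median(readings)
mad = np.median(np.abs(readings - median))    # median absolute deviation
keep = np.abs(readings - median) < 5 * mad    # drop gross outliers

print("raw mean:     ", readings.mean())      # dominated by the two spikes
print("despiked mean:", readings[keep].mean())
```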

next steps

Our next goal is to expand our measurement activities to particulates, i.e. very small dust particles, which are categorised as PM10 (under 10 microns diameter) and PM2.5 (under 2.5 microns diameter). Particulates are a form of pollution where the link to serious and deadly health problems is absolutely unambiguous. We have acquired a semi-professional portable detector called the TSI Sidepak which will enable us to take readings at different locations and also on the move. In this way we'll be able to compare the exposure of different activities e.g. driving / walking / cycling, and the way this varies over the course of daily life for different groups of people.
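
As a sketch of the kind of comparison we have in mind (the values below are invented, not Sidepak output), journey-based readings labelled by activity can simply be averaged per activity:

```python
import pandas as pd

# invented example of journey-based PM2.5 readings (µg/m3) labelled by activity
log = pd.DataFrame({
    "activity": ["walking", "walking", "driving", "driving", "cycling", "cycling"],
    "pm25": [18.0, 22.0, 55.0, 61.0, 35.0, 31.0],
})

# mean exposure per activity: the basis for comparing, say, a back-street
# walking route against sitting in traffic
print(log.groupby("activity")["pm25"].mean())
```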

Working with the Sidepak detector to make localised and journey-based PM measurements will enable us to test ameliorative measures, such as alternative walking routes that can reduce people's exposure. In doing this we're following the Breathe London project and the work of the air quality & health team at King's College London, who have been piloting alternative back-street walking routes to school for children in areas of central London.

From December 2015 we'll be running junior citizen science workshops in high schools. Rather than the traditionalist pedagogy that's customary in Kosovo, these will be non-formal, experiential and practice-based workshops. The students will also get to experiment with the Smart Citizen Kits, as a way to learn about open tech for environmental monitoring. They will deploy the Smart Citizen Kits for indoor air quality measurements in the schools.

data & change

This doesn't mean we're ignoring the need for wider campaigning. The current PM measurements in each locality are tied to holding 'Town Hall' meetings shortly afterwards as a way to inform and engage local people. We are exploring how our citizen science measurements can best be used for advocacy and campaigning, but this needs to be alert to the complexities of the local context. The data we generated from Drenas has already been used in reporting to the Parliamentary Commission, and the Ministry of Environment of Kosovo has initiated a court case against the heavy metal plant “Ferronikeli” in Drenas. However, these developments are heavily enmeshed in party political battles, which generates a certain amount of cynicism in everyone else. All institutions are perceived to be captured by narrow political interests, and NGOs are often seen as internationalised 'do nothings' who complain endlessly from the sidelines. We're hoping to learn from the EcoGuerilla campaign (‘Lëvizja Eco Guerilla’) from neighbouring Macedonia, where leaked information about pollution from a power plant led to mass protests against PM levels.

We are trying to avoid the attribution of agency to data, or an assumption that participation is the same as empowerment. Many data and open data projects in the wealthy West seem to assume that action will inevitably flow from aggregating data and visualising transparency. Other citizen sensing projects assume that participation of communities in gathering data will increase people's sense of responsibility and lead to the generation of solutions. But the idea that collective measurement leads to collective action seems questionable. In fact there may be a tendency for forms of governmentality like the smart city to re-constitute populations as having a duty to measure their environments, while at the same time producing a society that is, overall, less democratic.

It's the relationship between air quality and democracy which underlies Science for Change. Kosovo is democratically challenged, with different forms of corruption alongside political interference in knowledge production. Orthodox political processes are completely captured by elites and oligarchs. The older generation have a hold on power and it's very hard for young people with a more open, socially progressive outlook to make headway. The emphasis of Science for Change Kosovo is on practices, not simply on what is produced; on the construction of data and what that means about subjectivity and agency, not just on the data as such. The interesting thing about air quality is that it is also politics by other means. We know the air is political, that "the air’s chemical composition reveals a history and a politics in itself" (Nieuwenhuis). Most of the time we do not feel this with an intensity that leads to action. What is it in citizen science that causes an 'affective' response, which 'so amplifies our awareness of the injury which activates it that we are forced to be concerned, and concerned immediately'? (Tomkins and Demos) This is a question we hope that Science for Change Kosovo will help to answer over the next two years.

References:

Bytyci, Fatos. ‘At Least One Killed in Kosovo Power Plant Blast, Supplies Hit’. Yahoo News. N.p., 6 June 2014. Web. 26 Nov. 2015. http://news.yahoo.com/explosion-hits-kosovo-coal-fired-power-plant-injuries-095441081.html.

‘Deployment of Smart Citizen Kits | Facebook’. N.p., 2014. Web. 26 Nov. 2015. https://www.facebook.com/media/set/?set=a.1451488458465993.1073741832.1423629027918603&type=3.

‘Deployment of Tubes in Drenas | Facebook’. N.p., 2014. Web. 26 Nov. 2015. https://www.facebook.com/media/set/?set=a.1448248975456608.1073741831.1423629027918603&type=3.

‘Deployment of Tubes in Plemetina | Facebook’. N.p., 2014. Web. 26 Nov. 2015. https://www.facebook.com/media/set/?set=a.1440501616231344.1073741830.1423629027918603&type=3.

Funtowicz, Silvio O., and Jerome R. Ravetz. ‘Science for the Post-Normal Age’. Futures 25.7 (1993): 739–755. ScienceDirect. Web.

‘Global Community Monitor’. 2015. Web. 26 Nov. 2015. http://www.gcmonitor.org/.

‘Lëvizja Eco Guerilla’. 2015. Web. 26 Nov. 2015. http://ecoguerilla.mk/.

Nieuwenhuis, Marijn. ‘Atemwende, or How to Breathe Differently’. Dialogues in Human Geography March (2015): 90–94. Print.

‘Public Lab: About Public Lab’. N.p., 2015. Web. 26 Nov. 2015. https://publiclab.org/about.

‘Smart Citizen’. N.p., 2015. Web. 26 Nov. 2015. https://smartcitizen.me/kits/.

‘Social Innovation Camp Kosovo’. 2013. Web. 17 Aug. 2014. http://sicampkosovo.org/.

Tattersall, Philip J. ‘What Is Community Based Auditing and How Does It Work?’ Futures 42.5 (2010): 466–474. ScienceDirect. Web.

Tomkins, Silvan S., and E. Virginia Demos. Exploring Affect: The Selected Writings of Silvan S Tomkins. Cambridge University Press, 1995. Print.


Ghosts in the Algorithmic Resilience Machine

A talk for the panel on 'Resilience and the Future of Democracy in the Smart City' at the 25th anniversary conference of the Centre for the Study of Democracy, University of Westminster, 7th Nov 2015


I want to start by looking at what resilience and the smart city have in common. The idea of resilience comes from Holling's original 1973 paper on ecological systems. He was looking at the balance of predator and prey, and replaced the simple idea of dynamic equilibrium with abstract concepts drawn from systems theory & cybernetics. Complex systems have multiple equilibria, and movement between these is not a collapse of the system but rather an adaptive cycle. So the population of antelope dropping by 80% is not necessarily a catastrophe, but an adaptive shift. The system persists, although in a changed form.


What does this have to do with the smart city? In its current incarnation, the smart city appears as pervasive computation in the urban fabric, driven by the twin goals of efficiency and environmental sustainability. It posits continuous adaptation through a cycle of sensing-computation-actuation. Heterogeneous data streams from sensors are processed into a dashboard of metrics that triggers automated changes; so, for example, speed limits and traffic lights are manipulated to modify car emissions in near real-time. The new model of the smart city explicitly includes the participation of citizens as sensing nodes. Continuous adaptations are made to optimise flows with respect to the higher parameters of smoothness and greenness. The smart city is a multi-dimensional complex system constantly moving between temporary states of equilibrium. It is a manifestation of high-frequency resilience.
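
As a deliberately simplified sketch of that sensing-computation-actuation cycle (the sensor feed, thresholds and actuation hook here are all hypothetical): sensor streams are reduced to a dashboard metric, and crossing a threshold triggers an automated adaptation such as lowering a speed limit.

```python
import random
import time

def read_emissions_sensors():
    # hypothetical stand-in for a heterogeneous stream of roadside NO2 sensors
    return [random.uniform(30.0, 90.0) for _ in range(5)]

def set_speed_limit(kmh):
    # hypothetical actuation hook; a deployment would drive variable signage
    print(f"speed limit set to {kmh} km/h")

NORMAL_LIMIT, REDUCED_LIMIT = 50, 30   # invented values
EMISSIONS_THRESHOLD = 60.0             # invented value

for _ in range(3):                     # in practice the loop never stops
    metric = sum(read_emissions_sensors()) / 5   # the dashboard metric
    set_speed_limit(REDUCED_LIMIT if metric > EMISSIONS_THRESHOLD else NORMAL_LIMIT)
    time.sleep(1)
```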



But resilience means more than systems ecology. It has outgrown its origins to become a governing idea in a time of permanent crisis. As a form of governmentality it constitutes us as resilient populations and demands adaptation to emergencies of whatever kind, whether it's finance, environment or security. In practice, the main engine of resilience is the accelerated conversion of everything to Hayek's self-organising complexity of markets, with military intervention at the peripheries where this is resisted.


If resilience is the mode of crisis governance and the smart city is a form of high-frequency resilience, what does the smart city mean for democracy? To understand the implications for the future of democracy, I want to look at the emerging mode of production through which both wider resilience and specifically the smart city are being produced; that is, through the algorithmic production of preemption.


We're all becoming familiar with the idea that contemporary life generates streams of big data that are drawn through the analytic sieve of data mining and machine learning. Meaning is assigned through finding clusters, correlations and anomalies that can be used to make predictions. While its original commercial application was to predict the next set of supermarket purchases, the potential for prediction has become addictive for sectors whose main focus is risk. While algorithmic preemption drives both high-frequency trading and drone strikes, it has also spread to the more mundane areas of everyday life.


In the same way that airline websites use your online data profile to tweak the ticket prices that you see, algorithmic prediction leads to preemptive interventions in social processes. One example is human resources departments, where it's used to predict which employees will be the next to leave. Or in company health insurance, where staff wear Fitbits and pay insurance premiums based on predicted future health. In New Zealand, the government commissioned algorithms to predict which families are likely to abuse their children, based on data available in the first week after birth. And in some US states police stop and search is targeted by prediction software like PredPol.


This preemption forecloses possible futures in favour of the preferred outcome. The smart city will be a concentrated vessel for algorithmic preemption and, because of this, it will be a machine for disassembling due process.


This year in the UK there's been a big fuss about the 800th anniversary of the signing of the Magna Carta ('the Great Charter'). The principle of due process in law is expressed in Clause 39 of the Magna Carta: "No free man shall be seized or imprisoned, or stripped of his rights or possessions, or outlawed or exiled, or deprived of his standing in any way, nor will we proceed with force against him, or send others to do so, except by the lawful judgment of his equals or by the law of the land."


But so much of this is potentially shredded by the smart city; the constant contact with algorithmic systems that can influence the friction or direction of our experience opens the space for prejudicial and discriminatory actions that escape oversight.



The characteristics of algorithmic preemption that disassemble due process include the high frequency and often invisible nature of the resilience adaptations. They also include the fact that, unlike science, algorithmic preemption makes no claim to causal explanation. It simply predicts through patterns, and the derivation of those patterns through abstraction and parallel calculation at scale is opaque to human reasoning. Therefore the preemptions of big data are neither understandable as intent nor accountable to 'the judgement of peers'.


Algorithmic productive force avoids causality, evades accountability, and restricts agency to participation and adaptation. To be honest, things are not looking good...


But general computation doesn't predetermine the kinds of patterns that are produced. The network protocols are open, and the ability to take advantage of code is not limited to the powerful. The question is, if there are other possibilities, how can we envision them? If enthusiastic communities participating in bottom-up citizen sensing using accessible tech can be assimilated into the resilience of the smart city, as they can, where do we look for forms of social recomposition that combine community and computation for a real alternative?


I think this is where the ghost of Gustav Landauer arises to guide us. His most famous dictum was first published in “Schwache Staatsmänner, Schwächeres Volk!” in 1910: “The State is a condition, a certain relationship between human beings, a mode of behaviour; we destroy it by contracting other relationships, by behaving differently toward one another… We are the State and we shall continue to be the State until we have created the institutions that form a real community.” You can't smash the state as an external thing, it is this networked relational form.


But the smart city is also a networked relational form. The relations span people, devices and infrastructures, with patterns of relationships modulated by algorithms. Can we use algorithms to contract other forms of relationship? Here, another distinctive aspect of Landauer’s politics becomes applicable. He said that rather than toppling the state, you have to overcome capital by leaving the current order. This is precisely the possibility raised by some current experiments in political prototyping through technology.


The one I want to look at is the blockchain, which is the technology behind Bitcoin. Bitcoin itself dispenses with the need for a central bank through having a distributed ledger of transactions. These transactions can be trusted because of an algorithmic mechanism called 'proof of work' which is basically incorruptible because it's implemented through a cryptographic hashing function. The underlying mechanism is distributed, trustable records that don't require a centralised authority.
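
As a rough illustration of 'proof of work' (a toy version, nothing like Bitcoin's real parameters): keep incrementing a nonce until the hash of the record meets an arbitrary difficulty target. Producing the nonce takes computational work, but anyone can verify it with a single hash, which is what lets a distributed ledger be trusted without a central authority.

```python
import hashlib

def proof_of_work(record: str, difficulty: int = 4) -> int:
    """Toy proof of work: find a nonce so that sha256(record + nonce)
    starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{record}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

record = "alice pays bob 5"
nonce = proof_of_work(record)
print("nonce:", nonce)
# verification is a single cheap hash; no central bank needs to vouch for it
print(hashlib.sha256(f"{record}{nonce}".encode()).hexdigest())
```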


Many people are now looking at the role that distributed, trustable records could play beyond cryptocurrencies, through forms of so-called smart contracts. This is where the blockchain could become a protocol for parallel structures.


Smart contracts enable, for example, decentralized autonomous organizations (DAOs). A DAO involves people collaborating with each other via processes recorded incorruptibly on the blockchain. While a lot of the speculation around smart contracts is libertarian, I agree with David Bollier's assessment that they also hold out the prospect for commons-based systems. A smart contract would straight away deal with issues such as the free rider problem, a.k.a. the tragedy of the commons. As the well-known hacker Jaromil, who works on a fork of bitcoin called Freecoin, says: "Bitcoin is not really about the loss of power of a few governments, but about the possibility for many more people to experiment with the building of new constituencies." It seems there could be prefigurative politics in these protocols.


One project implementing Freecoin is the Helsinki Urban Co-operative Farm. This is a community-supported agriculture project, where people collectively hire a grower but where participants can also volunteer to work in the fields. The agreement is that each member does at least 10 hours of work per year, and there are lots of other admin & logistical tasks that have to be done. The complex transaction types and numbers are becoming an issue for the collective, and the plan is for Freecoin to be a decentralized & transparent way to track & reward contributions, maintaining self-governance and avoiding the need to create a centralised institution.


Although this is only one small example of the application of the blockchain to common-pool resources, it is an eerie echo of Landauer, whose practical politics focused on communes for the collective production of food and other necessities. Overall, I'm suggesting that through technologies like the blockchain, Landauer's approach of leaving rather than confronting, reconstituting sets of relationships, and concentrating on common production, could be the Other of the Smart City.


Let me finish by returning to the topic of this panel: resilience and the future of democracy in the smart city. I think the current direction of travel, based on algorithmic preemption, is towards the post-democratic forms of neoliberal resilience. But it may be that the consequent creation of highly computational infrastructures is also an opening for decentralised autonomous organisation, enabling us to 'occupy' computation and implement a kind of exodus (in the spirit of Gustav Landauer) to more federal-communitarian forms supported by protocols of commonality.


Data Science and Phrenology

I propose we look at data science through the historical lens of phrenology. This is not to denigrate data science but to take it seriously in its claim to be a science, and examine its parallels with the methodological and social trajectories of phrenology as a scientific discourse. My aim is not to dismiss data science as pseudo-science but to explore the interplay of empirical and social factors in both phrenology and data science, as ways of making meaning about the world. By staying close to the practical techniques at the same time as reading them within their historical contexts, I attempt some grounded speculations about the political choices facing data science & machine learning.

In contrast to the philosophy and anatomy of the early nineteenth century, phrenology offered a plausible account of the connection between the mind and the brain by asserting that 'the brain is the organ of the mind'. Phrenologists believed that the brain is made up of a number of separate organs, each related to a distinct mental function, and the size of each organ is a measure of the power of its associated faculty. There were understood to be thirty-seven faculties including Amativeness, Philoprogenitiveness, Veneration and Wit. The operations of phrenology were based on assessing the correlation between the topology of the skull and the underlying faculties, whose influence corresponded to size and therefore the specific shape of the head. It was used as a predictive empirical tool, for example to assist in the choice of a servant.

The data science that is emerging in the second decade of the twenty-first century offers a plausible connection between the flood of big data and models that can say something meaningful about the world. The most widely used methods in data science can be grouped under the broad label of machine learning. In machine learning, algorithmic clustering and correlation are used to find patterns in the data that are 'interesting' in that they are both novel and potentially useful [1]. This discovery of a functional fit to existing data, involving an arbitrary number of variables, enables the predictive work that data science is doing in the world. While data mining was originally used to predict patterns of supermarket purchases, the potential to pre-empt risk factors is leading to the wide application of data science across areas such as health, social policy and anti-terrorism.
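
To keep the comparison close to the practical techniques, here is a small illustration of the kind of pattern-finding involved, using invented numbers and scikit-learn's off-the-shelf k-means: clusters and correlations are reported as 'interesting' without any account of why the pattern holds.

```python
import numpy as np
from sklearn.cluster import KMeans

# invented records: two measured variables per individual
X = np.array([[1.0, 2.0], [1.2, 1.8], [0.9, 2.2],
              [8.0, 9.5], [8.3, 9.0], [7.9, 9.8]])

# clustering groups the records into 'novel and potentially useful' patterns
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster labels:", labels)

# correlation between the two variables: a functional fit, not an explanation
print("correlation:", round(float(np.corrcoef(X[:, 0], X[:, 1])[0, 1]), 3))
```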

The newly developed technique of phrenology was most actively studied in Britain in the years 1810-1840. One of the factors that made it popular was the accessibility of the method to non-experts. For leading exponents such as George Combe it was a key principle that people were able to learn the methods and test them in practice: 'observe nature for yourselves, and prove by your own repeated observations the truth or falsehood of phrenology'. Some historians, such as Steven Shapin, have interpreted British phrenology as a social challenge to the elitist control of knowledge generation, with a corresponding commitment to broadening the base of participation [2]. Shapin saw this as evidence that social factors as well as intrinsic intellectual factors help explain the work done by early phrenology, which 'enabled the participation in scientific culture of previously excluded social groups'.

A stronghold of historical phrenology in Britain was Edinburgh, where it was strongly associated with a social reformist agenda. Phrenologists there believed that the assessment of character from the shape of the skull was not the final word but a starting point for self and social improvement, because environmental influences could be 'brought to bear to stir one faculty into greater activity or offset the undesirable hyper-development of another. Not just the size but the tone of the organ was responsible for the degree to which its possessor manifested that behaviour' [3]. Advocates of phrenology such as Mackenzie asserted that 'until mental philosophy improves, society will not improve' and many felt that their science should influence policies on broad social issues such as penal reform and the education of the working classes.

As it stands now, data science is a highly specialised activity restricted to a narrow group of participants. The fact that data science is seen as a strategic expertise, combined with the small number of trained practitioners, has led to the demand far outstripping the supply of data scientists and its identification by the Harvard Business Review as 'the sexiest job of the 21st Century'. Most data scientists outside of academia are employed either by large corporations and financial institutions or by entrepreneurial start-ups. In terms of its social and cultural positioning, data science as we know it is a hegemonic activity.

Using the predictions of data science to drive pre-emptive interventions is also seen as having a social role. However, the form of these social interventions is shaped by the actors who are in a position to deploy data science. The characterisation of data science as a tool of the powerful derives not only from the algorithmic determination of parole conditions or predictive policing, but from its embedding within a hegemonic world view. The forms of algorithmic regulation promoted by people like Tim O'Reilly have become algorithmic governance. Predictive filtering dovetails with the 'fast policy' of behavioural insight teams, as they craft policy changes to the choice architecture of everyday life.

In the 1840s phrenology ran into problems, with increasingly successful empirical challenges to its validity. In particular, critics questioned whether the external surface of the skull faithfully represented the shape of the brain underneath. If not, as came to be accepted, phrenology could no longer claim a correspondence between observations of the skull and the faculties of the individual. Supporters continued to defend phrenology on the basis of its utility rather than using measurement as a criterion: 'we have often said that Phrenology is either the most practically useful of sciences or it is not true'. But by the mid-19th century both specific objections and the general advance of the scientific method left phrenology discredited.

Unfortunately, phrenology underwent a revival in the late C19th and early C20th as part of a broad set of ideas known as scientific racism. This field of activity used scientific techniques such as craniometry (volumetric measurements of the skull) to support a belief in racial superiority; 'proposing anthropologic typologies supporting the classification of human populations into physically discrete human races, that might be asserted to be superior or inferior'. It was used in justifying racism and other narratives of racial difference in the service of European colonialism; for example, during the 1930s Belgian colonial authorities in Rwanda used phrenology to explain the so-called superiority of Tutsis over Hutus.

In 1950 the UNESCO statement on race formally denounced scientific racism, saying "For all practical social purposes 'race' is not so much a biological phenomenon as a social myth. The myth of 'race' has created an enormous amount of human and social damage." However, the concept of race has been re-mobilised inside genomics, one of the crucibles of data science. Rather than the Human Genome Project closing the door on the idea of race having a biological foundation, as many had hoped, some studies suggest that 'racial population difference became a key variable in studying the existence and meaning of difference and variation at the genetic level'.

The jury is still out on the long term validity of data science as an empirical method of understanding the world. Certainly there is a growing critique, largely based on privacy and ethics but also on the substitution of correlation for causation and the over-arching idea that metrics can be a proxy for meaning. I have written elsewhere about the potential already immanent in algorithmic governance to produce multiple states of exception [4]. However, my purpose here is a different one; to see the unfolding path of data science as propelled by both methodological and social factors and to use the completed trajectory of phrenology as a heuristic comparison.

Instead of being disheartened that, despite the bigness of data and the sophistication of machine learning algorithms, empirical activity is still imbricated with social values, we should recognise this as a continuing historical dynamic. This can be mobilised explicitly to offer a more hopeful future for data science and machine learning than one that derives only from the financial or governmental hegemony. Like the phrenologists of nineteenth-century Edinburgh, we can choose to see in the methodologies of machine learning the opportunity to increase participation and social fairness. This can be imagined, for example, through the application of participatory action research to the process of data science. As Mackenzie wrote about phrenology, "the most effectual method" (of error checking) was "to multiply, as far as possible, the number of those who can observe and judge". It is as yet a largely unexplored research question to ask how data science can be democratic, and how we can develop a machine learning for the people.

[1] Han, Jiawei, Micheline Kamber, and Jian Pei. Data mining: concepts and techniques: concepts and techniques. Elsevier, 2011.

[2] Shapin, Steven. "Phrenological knowledge and the social structure of early nineteenth-century Edinburgh." Annals of Science 32.3 (1975): 219-243.

[3] Cantor, Geoffrey N. "The Edinburgh phrenology debate: 1803–1828." Annals of Science 32.3 (1975): 195-218.

[4] McQuillan, Dan. ‘Algorithmic States of Exception’. European Journal of Cultural Studies 18.4-5 (2015): 564–576. ecs.sagepub.com.


Hannah Arendt and Algorithmic Thoughtlessness

Presented at the London Conference on Critical Thought in June 2015

I want to warn of the possibility that algorithmic prediction will lead to the production of thoughtlessness, as characterised by Hannah Arendt.

This will come from the key characteristics of the algorithmic prediction produced by data science, such as the nature of machine learning and the role of correlation as opposed to causation. An important feature for this paper is that applying machine learning algorithms to big data can produce results that are opaque and not reversible to human reason. Nevertheless their predictions are being applied in ever-wider spheres of society leading inexorably to a rise in preemptive actions.

The many-dimensional character of the 'fit' that machine learning makes between the present and the future, using categories that are not static or coded by humans, has the potential to produce forms of algorithmic discrimination or redlining that can escape regulation. I give various examples of predictive algorithms at work in society, from employment through social services to predictive policing, and link this to an emerging governmentality that I have described elsewhere as 'algorithmic states of exception' [1].
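
As a hedged illustration of how such redlining can evade rules about protected attributes, consider this toy example (entirely synthetic data and hypothetical variable names): the protected attribute is never given to the model, yet a correlated proxy reproduces the disparity.

```python
# Sketch of proxy 'redlining' on invented synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, n)                 # protected attribute (never a model input)
postcode = group + rng.normal(0, 0.3, n)      # feature strongly correlated with group
income = rng.normal(0, 1, n)
# Historical outcomes already skewed against one group:
y = ((income - 0.8 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

X = np.column_stack([postcode, income])       # the model sees only 'neutral' features
model = LogisticRegression().fit(X, y)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: approval rate {pred[group == g].mean():.2f}")
# The rates differ although 'group' was never an input: the postcode proxy
# reproduces the discrimination, and nothing in the code names it as such.
```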

These changes have led to a rapid rise in discourse on the implications of predictive algorithms for ethics and accountability [2]. In this paper I consider in particular the concept of 'intent' that is central to most modern legal systems. Intent to do wrong is necessary for the commission of a crime and where this is absent, for whatever reason, we feel no crime has been committed. I draw on the work of Hannah Arendt and in particular her response to witnessing the trial of Adolf Eichmann in Jerusalem in 1961 [3] to illuminate the impact of algorithms on intent.

Arendt's efforts to comprehend her encounter with Eichmann led to her formulation of 'thoughtlessness' to characterise the ability of functionaries in the bureaucratic machine to participate in a genocidal process. I am concerned with assemblages of algorithmic prediction operating in everyday life and not with a regime intent on mass murder. However, I suggest that thoughtlessness, which is not a simple lack of awareness, is also a useful way to assess the operation of algorithmic governance with respect to the people enrolled in its activities.

One effect of this is to remove accountability for the actions of these algorithmic systems. Drawing on analysis of Arendt's work [4] I argue that the ability to judge is a necessary condition of justice; that legal judgement is founded on the fact that the sentence pronounced is one the accused would pass upon herself if she were prepared to view the matter from the perspective of the community of which she is a member. As we are unable to understand the judgement of the algorithms, which are opaque to us, the potential for accountability is excised. I also draw on recent scholarship to suggest that, due to the nature of algorithmic categorisation, critique of this situation is itself a challenge [5]. Taken together, these echo Arendt's conclusion that what she had witnessed had "brought to light the ruin of our categories of thought and standards of judgement".

However, Arendt's thought also offers a way to clamber out of this predicament through the action of unlearning. Her encounter with Eichmann was a shock; she expected to find a monster and instead found thoughtlessness. Faced with this she felt the need to start again, to think differently. A recent book by Marie Luise Knott describes this as unlearning: "breaking open and salvaging a traditional figure of thought and concluding that it has quite new and different things to say to us today" [6].

We need to unlearn machine learning. A practical way to do this is through the application of participatory action research to the 'feature engineering' at the core of data science. I give analogous examples to support this approach and the overall claim that it is possible to radically transform the work that advanced computing does in the world.
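
As a rough sketch of what this could look like in practice (hypothetical feature names and an invented 'approved' mechanism, offered only as an illustration), feature engineering can be written as a set of explicit, documented choices that a participatory process could review, amend or veto:

```python
# Sketch: feature engineering as explicit, contestable decisions.
# The feature names below are hypothetical; the point is that each one
# is a human choice that could be opened to participatory review.
import pandas as pd

def engineer_features(df: pd.DataFrame, approved: set) -> pd.DataFrame:
    """Build only the features that a participatory review has approved."""
    candidates = {
        "rent_to_income": lambda d: d["rent"] / d["income"],
        "missed_payments": lambda d: d["missed_payments"],
        "postcode_risk": lambda d: d["postcode_score"],  # contested proxy, may be vetoed
    }
    return pd.DataFrame({name: fn(df) for name, fn in candidates.items()
                         if name in approved})

# Usage: the 'approved' set is decided with, not for, the people the model affects.
raw = pd.DataFrame({"rent": [600, 900], "income": [1500, 1800],
                    "missed_payments": [0, 2], "postcode_score": [0.7, 0.2]})
features = engineer_features(raw, approved={"rent_to_income", "missed_payments"})
print(features)
```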

[1] McQuillan, Daniel. 2015. Algorithmic States of Exception. European Journal of Cultural Studies, 18(4/5), ISSN 1367-5494

[2] Algorithms and Accountability Conference, Information Law Institute, New York University School of Law, February 28th, 2015.

[3] Arendt, Hannah. Eichmann in Jerusalem: A Report on the Banality of Evil. 1 edition. New York, N.Y: Penguin Classics, 2006.

[4] Menke, C. & Walker, N. (2014). At the Brink of Law: Hannah Arendt's Revision of the Judgement on Eichmann. Social Research: An International Quarterly 81(3), 585-611. The Johns Hopkins University Press.

[5] Antoinette Rouvroy. "The end(s) of critique: data-behaviourism vs. due-process." In Privacy, Due Process and the Computational Turn. Ed. Mireille Hildebrandt, Ekatarina De Vries, Routledge, 2012.

[6] Knott, Marie Luise, 2014. Unlearning with Hannah Arendt, New York: Other Press.


Data Luddism

I propose Data Luddism as a radical response to the productive power of big data and predictive algorithms. My starting point is not the Romantic neo-Luddism of Kirkpatrick Sale but the historical Luddism of 1811-1816, and the Luddites' own rhetoric regarding their resistance to 'obnoxious machines' [1].

The Luddites' opposition to steam-powered machines of production was based on the new social relations of power they produced, which parallels the present emergence of data-powered algorithmic machines. As discussed in my paper on Algorithmic States of Exception [2], the operations of machine learning and data mining and the production of predictive knowledge are leading to the irruption of preemption across the social field, from employment to social services and policing. The consequent loss of agency, and the establishment of new powers unbalanced by effective rights, can be fruitfully compared to the effect of new machinery on the nineteenth-century woollen and silk industries. Based on this I examine key aspects of Luddite resistance for their contemporary relevance.

I compare the adoption of a collective name ('General Ludd'), and the evolution of Luddism as it expanded from the customary communities of Nottinghamshire through metropolitan Manchester and the radicalised West Riding, to the trajectory of the contemporary hacktivist movement Anonymous. It is critical to recall the political sophistication of the Luddites and the way machine breaking was situated in a cycle of negotiation, parliamentary petition and combination, and to ask what this means for a contemporary resistance to data power that restricts itself to issues of privacy and ethics.

Most importantly, the Luddites had an alternative social vision of self-governance and community commons; I argue that we, too, should posit a positive vision against the encroachment of algorithmic states of exception. However, I ask whether (in contrast to the Luddites) we can use the new machines to bring these different possibilities into being. The Luddites saw themselves as a self-governing socius, which we can compare to recent experiments in technology-enabled self-organisation such as 'liquid democracy' software.

Beyond this, we should focus on the Luddites' call to 'put down all Machinery hurtful to Commonality' and ask if we can adapt the machines to support the commons. An example is the recent proposal that the blockchain (the technology behind bitcoin) can enable distributed collaborative organizations and tackle traditional issues related to shared common-pool resources, such as the free rider problem [3]. If we are serious about resisting the injustices that could come from data-driven algorithmic preemption, we have a lot to learn from the historical Luddites; we also have the opportunity to 'hack' the machines in the service of a positive social vision.
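
By way of illustration only (a toy Python sketch, not a blockchain implementation and not the design of the cited proposal), the common-pool logic at stake might look like a shared ledger in which withdrawals are conditional on recorded contributions:

```python
# Toy sketch of contribution-conditional access to a common pool.
# Not a blockchain; just an illustration of one rule a commons might agree on.
from dataclasses import dataclass, field

@dataclass
class CommonsLedger:
    balances: dict = field(default_factory=dict)  # member -> net contribution
    pool: float = 0.0

    def contribute(self, member: str, amount: float) -> None:
        self.balances[member] = self.balances.get(member, 0.0) + amount
        self.pool += amount

    def withdraw(self, member: str, amount: float) -> bool:
        # Rule agreed by the commons: you may not draw out more than you put in.
        if self.balances.get(member, 0.0) >= amount and self.pool >= amount:
            self.balances[member] -= amount
            self.pool -= amount
            return True
        return False

ledger = CommonsLedger()
ledger.contribute("ana", 10)
print(ledger.withdraw("ana", 5))         # True: backed by a recorded contribution
print(ledger.withdraw("free_rider", 5))  # False: no contribution on record
```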

[1] Binfield, K. ed., 2004. Writings of the Luddites, Baltimore: Johns Hopkins University Press.

[2] McQuillan, D., 2015. Algorithmic states of exception. European Journal of Cultural Studies, 18(4-5), pp.564–576. Available at: http://ecs.sagepub.com/content/18/4-5/564

[3] David Bollier, 2015. The Blockchain: A Promising New Infrastructure for Online Commons. Available at: http://www.bollier.org/blog/blockchain-promising-new-infrastructure-online-commons


Seventeen Theses on DIY Science

An opening provocation to the 'DIY Science: the challenges of quality' workshop at the European Commission Joint Research Centre, Ispra, 16-5-15

  1. diy science doesn't happen inside walls with armed guards [note: this refers to the workshop venue]
  2. the question of quality is really a question of objectivity
  3. leaving the scientific hegemony doesn't mean pure relativism; that is scare tactics
  4. use donna haraway's situated knowledge: objectivity is about particular embodiment, not the god trick
  5. "only partial perspective promises objective vision" (haraway)
  6. the question of DIY science is a question of self-governance: learn from other struggles e.g. luddites
  7. 'open' alone won't save science; learn from open data that does bad as well as good
  8. cultivate disrespect for scientific authority (not dismissal, based on historical contingency of knowledge)
  9. book proposal: "the joy of empirical discovery" modelled on alex comfort's 1972 book "the joy of sex"
  10. science is weak, so it's a good time to be pushing
  11. DIY science is based on social justice - DIY science should always 'punch up'
  12. beware recuperation: we don't do deconstruction to benefit the religious rightwing #1980s   
  13. the hack-fab complex could be an opening for neoliberalism c.f. squatting. beware assimilation!
  14. DIY science & repression: when you start to make a difference there will be arrests. what are you willing to risk?
  15. DIY science should seek social movements
  16. DIY science is transformative: coming to know should change the knower. find affinity with indigenous communities
  17. the universe is a trickster. "Feminist objectivity makes room for ironies at the heart of knowledge production"