Machine Learning Generated Imagery


Machine Bias

I generated portraits of imaginary future prisoners by training a machine learning model (a generative adversarial network, or GAN) on thousands of photographs of US inmates, as a metaphor for, and visualization of, predictive policing systems.

In more and more countries, predictive policing and pretrial risk assessment decisions are handed over to machines and algorithms. How does a machine decide in which district crime is likely to happen, or which convict is likely to reoffend? Any machine learning decision rests on massive amounts of data transformed into probabilities – but is that data only creating more of the same?
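The "more of the same" question can be illustrated with a toy sketch (all district names and counts here are hypothetical, pure Python, and model no real system): when patrols are allocated in proportion to predicted risk, and recorded arrests rise in proportion to patrols, the historically over-policed district keeps its dominant share of the data indefinitely.

```python
# Toy illustration of a predictive-policing feedback loop.
# All numbers are hypothetical; this models no real system.

def crime_probabilities(arrests):
    """Turn raw arrest counts per district into 'risk' probabilities."""
    total = sum(arrests.values())
    return {d: n / total for d, n in arrests.items()}

# Historically skewed data: district A was patrolled more heavily.
arrests = {"A": 80, "B": 20}

for year in range(3):
    risk = crime_probabilities(arrests)
    # Patrols are allocated proportionally to predicted risk,
    # and more patrols produce proportionally more recorded arrests.
    new_arrests = {d: round(100 * risk[d]) for d in arrests}
    arrests = {d: arrests[d] + new_arrests[d] for d in arrests}

# District A's share of the data never changes: the model's output
# reproduces the bias of its input.
print(crime_probabilities(arrests))
```

The point of the sketch is that nothing in the loop ever questions the initial skew: the probabilities faithfully summarize the data, and the data faithfully records where the model sent the patrols.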
