I’m Nushin, an Interaction Designer, Design Thinker & AI Ethics Researcher

currently freelancing for companies like IDEO, Technologiestiftung Berlin and Retune, and working for the Education Innovation Lab Berlin.

An Exploration of Bias in Machine Learning – Bachelor Thesis Book


Let’s get ethical!

The rapid rise of algorithm-based processes and systems across society and our everyday lives demands that we critically consider their social impacts at all levels, and early on. Products and services driven by AI technology offer the opportunity to support and enhance human capabilities and to relieve us of tedious, repetitive work. Yet we created artificially intelligent machines to make fair, automated decisions about humans – only to find that their decisions perpetuate our societies’ structural discrimination, oppression and injustice.

If we want to use algorithm-based technologies, we must attempt to uncover their latent social impacts early on, rather than after the foundations have been laid. Many design decisions made by software engineers are very hard to undo or take back. Once a system has reached critical mass and other systems are built on top of it, its design choices often remain embedded in future systems.


Bachelor Thesis Project – 
Design Futures Workshop 


Feminist AI Ethics Toolkit

How do we create a framework for the creation process of algorithmic decision-making tools that adequately considers ethical concerns, consequences, and causalities?

This design futures workshop, part of and inspired by the Feminist AI Ethics approach, aims to raise awareness and spark conversations about the direct and secondary consequences of AI tools. Working from utopian and dystopian scenarios and prompts about what might happen with AI technologies in possible futures, participants reflect together and jointly develop concepts for actionable counter-mechanisms.


Machine Learning Generated Imagery


Machine Bias

I generated portraits of fictional future prisoners by training a machine learning model (a GAN) on thousands of photographs of US inmates, as a metaphor for and visualization of predictive policing systems.

In more and more countries, predictive policing and pretrial risk assessment decisions are being handed over to machines and algorithms. How does a machine decide in which district crime is likely to occur, or which convict is likely to reoffend? Any machine learning decision is based on massive amounts of data transformed into probabilities – but does that data only create more of the same?


Communication Design &

With the Eyes of the Machine

As part of the »Lange Nacht der Wissenschaften 2018«, I researched and designed a set of postcards for the Technologiestiftung Berlin that portrayed various scenarios around the application of artificial intelligence technology.

Some of these examples show how the application of AI can indeed create meaningful, tangible benefits for society, while others reveal the serious ethical dilemmas behind the use of this technology.


“May your coffee, pelvic floor, intuition and self-appreciation be strong.”