
What is what?

What do we care about?

Matrix of Domination

"Matrix of domination is a theoretical approach that explores the interlocking systems of oppression in terms of race, gender, class, and other social categories faced by marginalized or othered people. It theorizes power in four domains: structural, disciplinary, hegemonic, and interpersonal. Originally used by Patricia Hill Collins in relation to the discrimination and subsequent struggle for equality of black American women, the theory is grounded in black feminist epistemology. It privileges the voices and experiences of those in the margins."


Data Feminism

“Data Feminism is a way of thinking about data that’s informed by direct experience, commitment to action, and intersectional feminist thought. This definition acknowledges that there is an unequal distribution of power in the world.”

Data Colonialism

“An emerging order for the appropriation of human life so that data can be continuously extracted from it for profit. The historic appropriation of land, bodies, and natural resources is mirrored today in this new era of pervasive datafication.” -Ulises A. Mejias & Nick Couldry

Big Dick Data

“A term coined to denote big data projects that are characterized by patriarchal, cis-masculinist, totalizing fantasies of world domination as enacted through data capture and analysis. Big Dick Data projects ignore context, fetishize size, and inflate their technical and scientific capabilities.” -Catherine D'Ignazio & Lauren F. Klein

In which context?

Artificial Intelligence
The simulation of human intelligence processes by machines, especially computer systems.


Bias

A disproportionate weight in favor of or against an idea or thing, usually in a way that is closed-minded, prejudicial, or unfair.


Algorithmic Bias
Unjust, unfair, or prejudicial treatment of people related to race, income, sexual orientation, religion, gender, and other characteristics historically associated with discrimination and marginalization, when and where it manifests in algorithmic systems or algorithmically aided decision-making.


Bias, which?

Historical Bias
Historical bias arises when the world as it is, or as it was, leads to a model that produces harmful outcomes, even if the data are perfectly measured and sampled. Such a system, even when it accurately reflects the world, can still harm a population. Considering historical bias often involves evaluating the representational harm (such as the reinforcement of a stereotype) to a particular group.

Representation Bias

Representation bias can arise in several ways: when the defined target population does not reflect the use population; when the target population contains under-represented groups; or when the population of interest has changed over time or is distinct from the population used during model training.
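
As a minimal sketch (the group names and counts below are invented for illustration), one simple check for representation bias is to compare group proportions in the training data against the population the model is meant to serve:

```python
from collections import Counter

# Hypothetical group labels: the population the model will serve ("use population")
# versus the sample actually collected for training. All numbers are invented.
use_population = ["group_a"] * 500 + ["group_b"] * 300 + ["group_c"] * 200
training_sample = ["group_a"] * 450 + ["group_b"] * 40 + ["group_c"] * 10

def proportions(labels):
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

pop_share = proportions(use_population)
train_share = proportions(training_sample)

for group in sorted(pop_share):
    print(f"{group}: population {pop_share[group]:.0%} vs. training data {train_share.get(group, 0):.0%}")
# group_b and group_c are heavily under-represented in the training data,
# so a model trained on it may generalize poorly for them.
```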

Measurement Bias

Measurement bias occurs when choosing, collecting, or computing the features and labels used in a prediction problem. It can arise in several ways: the granularity of the data varies across groups; the quality of the data varies across groups; or the defined classification task is an oversimplification.
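
A toy sketch of one such mechanism, using invented numbers: when an outcome is recorded through a proxy whose quality differs across groups, the measured rates diverge even though the underlying rates are identical.

```python
import random

random.seed(0)

# True outcome we actually care about (1 = event occurred), identical rate in both groups.
true_rate = 0.3
n = 10_000

def measured_rate(detection_rate):
    """Proxy label: the event is only *recorded* if it is detected.
    A lower detection rate means noisier, lower-quality labels."""
    recorded = 0
    for _ in range(n):
        truth = random.random() < true_rate
        if truth and random.random() < detection_rate:
            recorded += 1
    return recorded / n

# Hypothetical: group A's outcomes are detected 90% of the time, group B's only 50%.
print("group A measured rate:", measured_rate(0.9))  # close to 0.27
print("group B measured rate:", measured_rate(0.5))  # close to 0.15
# Both groups have the same true rate (0.3), but the proxy makes them look different.
```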

Aggregation Bias

Aggregation bias arises when a one-size-fits-all model is used for data in which there are underlying groups that should be considered differently.
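
A minimal sketch on synthetic data (the groups, slopes, and noise are all invented): when two groups follow different trends, a single pooled model fits both of them worse than models that account for the groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the feature-outcome relationship differs by group.
x_a = rng.uniform(0, 10, 200)
y_a = 2.0 * x_a + rng.normal(0, 1, 200)        # group A: positive slope
x_b = rng.uniform(0, 10, 200)
y_b = -1.5 * x_b + 20 + rng.normal(0, 1, 200)  # group B: negative slope

# One-size-fits-all model: a single line fit to the pooled data.
x_all = np.concatenate([x_a, x_b])
y_all = np.concatenate([y_a, y_b])
pooled_slope, pooled_intercept = np.polyfit(x_all, y_all, 1)

# Per-group models.
slope_a, intercept_a = np.polyfit(x_a, y_a, 1)
slope_b, intercept_b = np.polyfit(x_b, y_b, 1)

def mse(x, y, slope, intercept):
    return np.mean((y - (slope * x + intercept)) ** 2)

print("pooled model, error on A:", mse(x_a, y_a, pooled_slope, pooled_intercept))
print("pooled model, error on B:", mse(x_b, y_b, pooled_slope, pooled_intercept))
print("per-group model, error on A:", mse(x_a, y_a, slope_a, intercept_a))
print("per-group model, error on B:", mse(x_b, y_b, slope_b, intercept_b))
# The pooled model's error is far larger for both groups than the per-group fits.
```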


Learning Bias

Learning bias arises when modelling choices amplify performance disparities across different examples in the data.


Evaluation Bias

Evaluation bias arises when the benchmark data do not represent the use population. A model is optimized on its training data, but its quality is often measured against benchmarks. A misrepresentative benchmark encourages the development and deployment of models that perform well only on the subset of the data that the benchmark represents.
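
A small sketch with invented figures: a benchmark dominated by one group can report a flattering overall score that hides much worse performance for an under-represented group.

```python
# Hypothetical per-group results of a model on a benchmark.
# The benchmark contains far more examples from group A than from group B.
benchmark = {
    "group_a": {"n_examples": 950, "n_correct": 931},  # 98% accuracy
    "group_b": {"n_examples": 50,  "n_correct": 30},   # 60% accuracy
}

total_examples = sum(g["n_examples"] for g in benchmark.values())
total_correct = sum(g["n_correct"] for g in benchmark.values())

print(f"overall benchmark accuracy: {total_correct / total_examples:.1%}")  # ~96%
for group, stats in benchmark.items():
    print(f"{group} accuracy: {stats['n_correct'] / stats['n_examples']:.1%}")
# The headline number (~96%) says little about how the model behaves
# for group_b, which the benchmark barely represents.
```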

Deployment Bias

Deployment bias arises when there is a mismatch between the problem a model is intended to solve and the way in which it is actually used.

“Although neural networks might be said to write their own programs, they do so towards goals set by humans, using data collected for human purposes. If the data is skewed, even by accident, the computers will amplify injustice.”

-The Guardian

What else to read/see/check?

[Image gallery of linked resources: book covers and project logos, including Algorithms of Oppression (Safiya Umoja Noble), the Algorithmic Justice League, the Tactical Technology Collective, and the Data Justice Lab.]