Break Through Tech AI Fellows are tackling bias in crime prediction models as part of their AI Studio Challenge Project with ABT Global.
In a world increasingly driven by data, artificial intelligence (AI) holds significant potential to enhance public safety and inform criminal justice policies. However, when left unchecked, biases within crime data can lead to skewed predictions that disproportionately affect certain communities, reinforcing existing inequities. This is the complex issue Break Through Tech AI fellows are addressing through their 2024-2025 AI Studio Challenge Project in collaboration with ABT Global, a global research and consulting firm dedicated to social impact. Together, they are developing AI methodologies that can identify, analyze, and reduce bias in crime prediction models.
Why Tackling Bias in Crime Prediction Matters
Crime prediction models rely heavily on historical data, yet the historical context of crime reporting often includes biases related to factors like race, socioeconomic status, and geographic location. These biases can shape AI predictions, leading to potential disparities in how law enforcement resources are allocated. By scrutinizing the data and methods underlying predictive policing tools, Break Through Tech AI fellows are working to ensure fairer outcomes in the justice system—an important mission for our world today.
The Approach: Ethical AI in Action
To ensure fair and accurate crime prediction, Break Through Tech AI fellows are using the following approach:
- Exploratory Data Analysis (EDA): Fellows begin by analyzing FBI Crime Data to identify patterns that may indicate underlying biases, using unsupervised learning techniques to detect anomalies (a sketch of this step follows the list).
- Model Training and Evaluation: Supervised learning methods are applied to crime prediction models to assess how the identified biases affect model performance (see the second sketch below).
- Bias Mitigation: Fairness-aware algorithms are explored to mitigate bias. Fellows use AI fairness tools such as Fairlearn and AIF360 to systematically address biases within the data, testing how various approaches improve fairness in predictions (see the third sketch below).
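To make the EDA step concrete, here is a minimal sketch of what an anomaly-detection pass over crime data could look like. The file name and columns (`fbi_crime_data.csv`, `region`, `population`, `arrests`, `reported_incidents`) are hypothetical placeholders, and Isolation Forest is just one common unsupervised choice, not necessarily the technique the fellows selected.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical export of FBI crime data; the column names are assumptions.
df = pd.read_csv("fbi_crime_data.csv")  # columns: region, population, arrests, reported_incidents

# Derived rates put regions of different sizes on a comparable footing.
df["arrest_rate"] = df["arrests"] / df["population"]
df["report_rate"] = df["reported_incidents"] / df["population"]

# Unsupervised anomaly detection: flag regions whose rate profile deviates
# sharply from the rest, which can hint at over- or under-reporting.
features = df[["arrest_rate", "report_rate"]]
detector = IsolationForest(contamination=0.05, random_state=0)
df["anomaly"] = detector.fit_predict(features)  # -1 marks an outlier

print(df.loc[df["anomaly"] == -1, ["region", "arrest_rate", "report_rate"]])
```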
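For the model training and evaluation step, the sketch below shows one way to break model performance down by group using Fairlearn's `MetricFrame`. The article names Fairlearn; the data file, the target column `high_risk`, and the sensitive attribute `neighborhood_group` are illustrative assumptions, not the team's actual schema.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from fairlearn.metrics import MetricFrame, selection_rate

# Hypothetical feature table; all non-target columns are assumed numeric.
df = pd.read_csv("crime_training_data.csv")
X = df.drop(columns=["high_risk", "neighborhood_group"])
y = df["high_risk"]
group = df["neighborhood_group"]

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

# A plain supervised baseline trained on the historical labels.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = model.predict(X_test)

# Break accuracy and selection rate down by group to see whether the
# biases surfaced during EDA show up in the model's behavior.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_test,
    y_pred=pred,
    sensitive_features=g_test,
)
print(frame.by_group)
print("largest gap per metric:", frame.difference())
```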
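And for the bias mitigation step, one possible fairness-aware training setup using Fairlearn's reductions API. The choice of `ExponentiatedGradient` with a demographic-parity constraint is an assumption made for illustration; the fellows may test other constraints, or AIF360's pre- and post-processing tools, instead.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import DemographicParity, ExponentiatedGradient

# Same hypothetical table and sensitive attribute as in the previous sketch.
df = pd.read_csv("crime_training_data.csv")
X = df.drop(columns=["high_risk", "neighborhood_group"])
y = df["high_risk"]
group = df["neighborhood_group"]

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

# Fairness-aware training: the reduction constrains the base learner so that
# selection rates stay comparable across groups (demographic parity).
mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(max_iter=1000),
    constraints=DemographicParity(),
)
mitigator.fit(X_train, y_train, sensitive_features=g_train)
pred = mitigator.predict(X_test)

# Re-check per-group selection rates to see whether the gap shrank.
frame = MetricFrame(
    metrics={"selection_rate": selection_rate},
    y_true=y_test,
    y_pred=pred,
    sensitive_features=g_test,
)
print(frame.by_group)
```

Comparing per-group metrics before and after mitigation, as in the two sketches above, is one straightforward way to test whether a given approach actually improves fairness in predictions.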
Building Trust and Ensuring Fairness
Ultimately, this project goes beyond technical refinement; it represents a step toward greater social equity in AI applications. Fair and transparent crime prediction models can build public trust and foster more effective, data-driven crime prevention strategies that better serve all communities. Over the course of the four-month program, Break Through Tech AI fellows are putting their machine learning training to use and gaining real-world experience in ethical AI practices.
ABT Global’s commitment to improving the quality of life and economic well-being of people worldwide amplifies the often interdisciplinary work our fellows take on as they train to become the technologists of tomorrow. This partnership embodies our shared mission of creating a more equitable world as we prepare, immerse, and propel female tech talent into the fields defining the future, and it shows how experiential learning grounded in industry collaboration can drive meaningful change.
Learn more about Break Through Tech’s AI Program and how your organization can participate.