We live in a world where algorithms quietly make decisions that affect our lives. They determine which résumés are read by employers, which neighborhoods receive loans, which patients get flagged for extra medical care. These systems are built on data, and often the data is not neutral. It carries the history of our choices, our exclusions, and our prejudices. This is what makes hidden bias, also known as unconscious or implicit bias, so dangerous. It lurks within the numbers, shaping outcomes in ways that seem objective but often perpetuate injustice.
The Problem of Hidden Bias
Artificial intelligence (AI) is only as good as the data it consumes. If past hiring practices undervalued women or people of color, those patterns live on in the datasets used to train new systems. That is exactly what happened when Amazon tested an AI recruiting tool that downgraded résumés signaling that a candidate was a woman, such as those containing the word “women’s,” because the system had been trained on male-dominated historical data (Dastin, “Amazon Scraps Secret AI Recruiting Tool That Showed Bias against Women,” Reuters, 2018).
In the criminal justice system, the COMPAS risk-assessment tool provides another example. Designed to predict recidivism, it disproportionately flagged Black defendants as “high risk” compared with white defendants who had similar records (Angwin et al., “Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased Against Blacks,” ProPublica, 2016). The algorithm appeared neutral, but the data it learned from reflected a long history of racial disparity in policing and sentencing.
Even when the intent is fairness, the outcome can reinforce inequity. Hidden bias is especially dangerous because it operates under a cloak of neutrality. Algorithms do not announce, “I am discriminating.” They produce numbers, rankings, and risk scores that appear impartial. Yet the harm is real: people are denied jobs, mortgages, or health care because historical prejudice is disguised as data.
Overcoming Bias Through Equity, Diversity, and Inclusion
If the problem begins with biased data, the solution begins with careful attention to the data we use. Developers must ask hard questions: Who is missing from this dataset? Whose voices, experiences, or histories are underrepresented? How might the way we collect or label data reinforce systemic inequality?
Equity demands more than equal treatment; it requires compensating for uneven starting points. For example, a hiring algorithm must not only avoid excluding underrepresented groups but also ensure that those groups are fairly represented in the training data. Diversity and inclusion are not afterthoughts; they are safeguards against hidden bias. In practice, this means building teams with varied backgrounds to design, test, and audit AI systems, so that blind spots do not go unnoticed.
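To make that auditing concrete, here is a minimal sketch in Python, using hypothetical column names such as "gender" and "hired," of the kind of check a team might run before training a hiring model: how well is each group represented in the data, and how do historical selection rates compare? Groups whose rate falls below roughly 80 percent of the best-treated group’s (the familiar “four-fifths” heuristic) are flagged for closer review.

```python
# Illustrative only: audit a hiring dataset for representation and
# historical selection-rate gaps before training a model on it.
# The column names ("gender", "hired") are hypothetical.
import pandas as pd

def audit_training_data(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Report each group's share of the data and its historical selection rate."""
    summary = df.groupby(group_col)[outcome_col].agg(
        count="size",            # number of records carrying this group label
        selection_rate="mean",   # fraction with a positive historical outcome
    )
    summary["share_of_data"] = summary["count"] / len(df)
    # Four-fifths heuristic: flag groups whose historical selection rate
    # is below 80% of the best-treated group's rate.
    best_rate = summary["selection_rate"].max()
    summary["impact_ratio"] = summary["selection_rate"] / best_rate
    summary["flagged"] = summary["impact_ratio"] < 0.8
    return summary

# Toy example: eight past hiring decisions.
past_hiring = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "hired":  [0,   0,   1,   1,   1,   0,   1,   1],
})
print(audit_training_data(past_hiring, "gender", "hired"))
```

An audit like this does not remove bias by itself, but it surfaces the uneven starting points that equity asks us to address before a model ever makes a decision.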
The Positive Role AI Can Play
The irony is that AI, when properly directed, can help expose the very biases it risks perpetuating. Researchers have used machine learning to analyze pay gaps across industries, uncovering hidden patterns of discrimination (Barocas, Hardt, and Narayanan, Fairness and Machine Learning, 2019, 180, 207). In housing, investigative data analysis has identified continuing disparities in mortgage-approval algorithms (Sankin, Martinez, and Kirchner, “The Secret Bias Hidden in Mortgage-Approval Algorithms,” The Markup, 2021). By sifting through massive datasets, algorithms can highlight where injustice is embedded, giving advocates and policymakers tools to correct it.
Here, AI becomes not just a mirror of past bias but a lever for greater fairness. The key is intentionality: systems must be designed and evaluated with equity and inclusion at the center, not left to profit-driven motives or assumptions of neutrality.
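As a rough illustration of how that kind of disparity analysis can work, the sketch below is a simplified example rather than the methodology of the studies cited above. It compares loan-approval rates across applicant groups within the same income bracket, so that a gap cannot be explained away by income alone; the column names ("income", "group", "approved") are hypothetical.

```python
# Simplified illustration of disparity analysis, not the cited studies'
# exact methodology. Column names ("income", "group", "approved") are
# hypothetical. Approval rates are compared within income brackets so a
# gap between groups cannot be attributed to income alone.
import pandas as pd

def approval_rates_by_income(df: pd.DataFrame) -> pd.DataFrame:
    """Approval rate for each (income bracket, group) cell."""
    df = df.copy()
    df["income_bracket"] = pd.cut(
        df["income"],
        bins=[0, 50_000, 100_000, float("inf")],
        labels=["low", "middle", "high"],
    )
    return df.pivot_table(
        index="income_bracket", columns="group",
        values="approved", aggfunc="mean", observed=True,
    )

# Toy example: rows are loan applications with a 0/1 approval outcome.
applications = pd.DataFrame({
    "income":   [40_000, 45_000, 42_000, 43_000, 90_000, 95_000, 88_000, 91_000],
    "group":    ["A",    "B",    "B",    "A",    "A",    "A",    "B",    "B"],
    "approved": [1,      0,      0,      1,      1,      1,      1,      0],
})
print(approval_rates_by_income(applications))
```

A table like this will not settle why the gaps exist, but it makes them visible, which is precisely the point: the same tools that can encode injustice can also be turned to expose it.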
Catholic Social Teaching and the Common Good
Catholic Social Teaching provides a moral framework for this work. Pope Francis in Fratelli Tutti reminds us that societies fractured by exclusion and inequality cannot claim justice (2020, nos. 20, 29). Solidarity demands that we stand with those who suffer from systemic bias, even when that bias is hidden within data. The common good requires that technology serve everyone, not just the privileged few.
CST also calls us to honor the dignity of each person. This means resisting any reduction of human beings to statistics, categories, or probabilities. People are not data points; they are children of God, each with unique worth.
A Path Toward Just Algorithms
The future of AI depends on how we address hidden bias today. If left unexamined, algorithms will quietly deepen inequities, embedding past injustice into future systems. If approached with humility, vigilance, and a commitment to equity, AI can instead become a force for inclusion, fairness, and solidarity.
Justice in the algorithm is possible. It requires us to see hidden bias for what it is, to challenge it with diverse perspectives and equitable practices, and to orient technology toward the common good. In this way, algorithms can become not instruments of exclusion but tools of what Pope Francis calls “social friendship,” where all have a place at the table (Fratelli Tutti, 2020).
If you would like to make a comment or ask a question, I can be contacted at dtheroux@smcvt.edu. Let’s talk!
