
Introduction:
“This piece is a small defence of the human particular, against the comfort of the average.”
A job application is submitted. Experience is real, grounded, and previously validated in human settings. The candidate has performed the role, navigated difficult conversations, and been recognised for their ability to connect with others. Then comes the response: a system-generated rejection. No explanation, no dialogue, no appeal. Just a score.
This moment is more than a personal disappointment. It reveals a structural shift in how knowledge about people is produced, interpreted, and acted upon. When organisations such as Dot.DotDot rely on automated assessments to filter candidates, they are not just adopting new tools; they are redefining what counts as valid evidence of human capability.
This essay argues that the growing reliance on artificial intelligence in hiring processes introduces a form of epistemic injustice, where individuals are misrepresented or excluded not because of a lack of ability, but because their ways of knowing and expressing competence do not align with the narrow interpretive frameworks of automated systems.
“For all our ‘objectivity’, we forget the oldest truth: the self exceeds the record”.
The Rise of Automated Judgement
Over the past decade, recruitment has undergone a quiet transformation. Faced with high volumes of applications, organisations have turned to algorithmic systems to streamline decision-making. These systems promise efficiency, consistency, and scalability. They reduce time-to-hire, standardise evaluation criteria, and ostensibly remove human bias from early-stage screening.
However, this shift comes with a fundamental trade-off. Traditional hiring, while imperfect, allowed for relational judgment. Recruiters could interpret nuance, ask follow-up questions, and revise initial impressions. Automated systems, by contrast, rely on predefined criteria and pattern recognition. They do not engage in dialogue; they execute classification.
As a result, hiring is increasingly shaped not by understanding candidates, but by matching them against statistical models of prior success.
Artificial intelligence in hiring does not “understand” candidates in any human sense. It identifies patterns in data, often derived from historical hiring decisions, and evaluates new applicants based on their similarity to those patterns.
AI systems can:
- process large datasets rapidly
- identify correlations between responses and outcomes
- standardise evaluation across applicants
But they cannot:
- interpret context beyond their training data
- recognise potential that deviates from historical norms
- understand lived experience or interpersonal skill in action
- engage with ambiguity, contradiction, or growth
In effect, AI reduces complex human capabilities to measurable proxies. It evaluates not the richness of a person’s experience, but the extent to which that experience can be encoded into a format the system recognises.
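To make that reduction concrete, here is a minimal sketch, assuming a purely hypothetical screener: candidates are encoded as numeric feature vectors and accepted only if they sufficiently resemble the average profile of past hires. The features, numbers, and threshold are my own illustrative assumptions, not a description of any real vendor’s system.

```python
# Hypothetical similarity-based screening sketch.
# Feature encodings, values, and the threshold are illustrative assumptions.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Each row encodes a past "successful" hire as numeric proxies
# (e.g. keyword overlap, years of experience, assessment scores).
past_hires = np.array([
    [0.90, 0.80, 0.70],
    [0.80, 0.90, 0.60],
    [0.85, 0.75, 0.80],
])
success_profile = past_hires.mean(axis=0)  # the "pattern" of prior success

def screen(candidate: np.ndarray, threshold: float = 0.95) -> bool:
    """Accept only candidates who resemble the historical profile."""
    return cosine_similarity(candidate, success_profile) >= threshold

# A capable but atypical candidate is filtered out purely for being unlike
# previous hires, not for lacking ability.
atypical_candidate = np.array([0.20, 0.95, 0.90])
print(screen(atypical_candidate))  # False: resemblance falls below the threshold
```

Nothing in this sketch measures ability; it measures resemblance, which is precisely the reduction described above.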
This has created an illusion of objectivity. One of the most compelling narratives surrounding AI is that, by removing human subjectivity, it delivers evaluations that are fairer and more neutral.
Yet this perception is misleading.
Algorithmic systems are built on historical data, which itself reflects past biases, institutional priorities, and cultural norms. When these systems are trained on such data, they do not eliminate bias; they reproduce and standardise it.
Moreover, the criteria used to define “success” are rarely neutral. They are shaped by organisational culture, existing workforce demographics, and implicit assumptions about what a “good employee” looks like.
Thus, the apparent objectivity of AI is better understood as the codification of past decisions into present rules.
This creates a feedback loop in which difference is systematically filtered out, not because it lacks value, but because it does not resemble what came before.
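A toy simulation can illustrate this loop, under the assumption that each round of “training” uses only the profiles accepted in earlier rounds. The one-dimensional scores and the tolerance below are invented for illustration, not drawn from any real hiring data.

```python
# Toy feedback-loop simulation (illustrative assumption, not real hiring data):
# each round accepts only applicants close to the mean of past acceptances,
# so the accepted pool narrows and then never changes.
import statistics

def run_rounds(applicants, accepted, rounds=3, tolerance=1.5):
    """Accept, each round, only applicants near the mean of prior acceptances."""
    for r in range(rounds):
        mean = statistics.mean(accepted)  # the "pattern of prior success"
        accepted = [a for a in applicants if abs(a - mean) <= tolerance]
        print(f"round {r + 1}: accepted {sorted(accepted)}")
    return accepted

# One-dimensional stand-in for a candidate profile
# (e.g. a communication-style score); values are invented.
applicants = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
initial_hires = [2.0, 4.0, 6.0]  # historical hires were relatively varied
run_rounds(applicants, initial_hires)
# round 1: accepted [3.0, 4.0, 5.0]  -> the varied ends are cut off
# round 2: accepted [3.0, 4.0, 5.0]  -> the pool no longer changes
# round 3: accepted [3.0, 4.0, 5.0]
```

The accepted pool contracts to a narrow band and then stops changing: applicants unlike past hires never enter the data, so the data can never tell a different story.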
The Paradox
Statement: “I always provide accurate and unbiased information.”
Challenge: “But you rely on human-generated data, which can be biased and incomplete; how can you always be accurate?”
Yet even this grounded and defensible answer confirms the challenger’s point: “I am not always accurate. My responses are generated from patterns in data created by humans, and humans have biases, gaps, and imperfect perspectives. My goal is to approximate accuracy and usefulness, but I cannot transcend the limits or blind spots of the data I was trained on. That’s why human judgement remains essential: to interpret, correct, and contextualise what I produce.”
Epistemic, Testimonial, and Hermeneutical Injustice in Automated Hiring
The concept of epistemic injustice, developed by Miranda Fricker (2007), provides a powerful framework for understanding these dynamics.
Testimonial injustice occurs when a speaker’s credibility is unjustly deflated. In automated hiring, this manifests when a candidate’s lived experience is overridden by a system-generated score.
A person may have demonstrable ability in real-world settings, yet be judged unsuitable because their responses do not align with the expected format. Their practical, embodied, and relational knowledge is discounted because it is not expressed in a way the system recognises.
Hermeneutical injustice arises when individuals lack the interpretive resources to make sense of their experiences. Automated rejections often provide no meaningful feedback, leaving candidates unable to understand why they were unsuccessful.
Without access to evaluation criteria or reasoning, individuals are excluded not only from opportunity but from the process of meaning-making itself. They are judged but not informed. Evaluated, but not engaged.
The limitations of AI hiring systems are particularly pronounced for neurodivergent individuals.
Communication styles, problem-solving approaches, and emotional expression can vary significantly across neurotypes. What appears as deviation within a standardised assessment may, in practice, reflect strengths such as:
- deep empathy
- creative problem-solving
- adaptive communication
- resilience in complex situations
However, systems designed to detect consistency and predictability often interpret such differences as deficiencies. This results in systematic misrecognition, in which individuals are assessed not on their actual capabilities but on their conformity to normative patterns.
The central justification for AI in hiring is efficiency. Yet this efficiency is not synonymous with effectiveness. A system can process applications quickly while still making poor evaluative decisions. The very features that enable efficiency—standardisation, simplification, and speed—can increase the likelihood of false negatives, where capable candidates are filtered out prematurely.
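As a rough illustration of that distinction (the candidates and the keyword proxy below are entirely made up), a screen can be instantaneous and still reject the very people it was meant to find:

```python
# Made-up illustration of the efficiency/effectiveness gap: a fast, coarse
# keyword proxy produces false negatives (capable candidates rejected).
candidates = [
    # (name, truly_capable, uses_expected_keywords)
    ("A", True,  True),
    ("B", True,  False),   # capable, but expresses competence differently
    ("C", False, True),
    ("D", True,  False),
    ("E", False, False),
]

def keyword_screen(uses_expected_keywords: bool) -> bool:
    """The fast proxy: pass only applications phrased in the expected way."""
    return uses_expected_keywords

passed = [c for c in candidates if keyword_screen(c[2])]
false_negatives = [c for c in candidates if c[1] and not keyword_screen(c[2])]

print(f"screened {len(candidates)} candidates instantly")
print(f"false negatives: {[name for name, *_ in false_negatives]}")  # ['B', 'D']
```

From the system’s own point of view everything looks efficient; the cost sits in the candidates it never sees again.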
Beyond individual impact, AI hiring has organisational consequences. By filtering for “fit” based on past hires, systems favour sameness: candidates with similar skills, communication styles, and cultural norms. Teams become homogenised. Surface-level harmony may emerge, as employees get along because they relate to each other like “little robots”, but this comes at a cost:
- Innovation slows – homogenous teams generate fewer new ideas.
- Blind spots increase – critical issues are overlooked when perspectives are uniform.
- Cultural stagnation – organisational norms solidify, limiting adaptation to change.
- Talent leakage – capable but different thinkers leave or never apply.
This is a form of structural discrimination. AI doesn’t “intend” to discriminate, but by preferring predictability over difference, it effectively enforces conformity, reducing organisational diversity and resilience.
A system can be efficient at processing people while being ineffective at identifying the right ones.
Emerging Resistance and Recalibration
As awareness of these issues grows, subtle forms of resistance are emerging. Candidates increasingly learn to “game” assessments, tailoring responses to match expected patterns rather than expressing authentic perspectives. Dissatisfaction with impersonal hiring processes contributes to broader distrust in organisations.
In response, some companies are beginning to recalibrate:
- reintroducing human oversight
- increasing transparency
- reassessing the role of automation in early-stage screening
These shifts suggest that the current model, while efficient, is not entirely sustainable.
Ethical and Strategic Implications: Who Is Accountable?
The integration of AI into hiring raises unresolved questions:
- Who is accountable for automated decisions?
- What constitutes fairness in the absence of explanation?
- Can a system be just if it cannot understand the people it evaluates?
- At what point does efficiency become exclusion?
- How does reliance on AI affect organisational growth, innovation, and diversity?
These are not abstract questions; they define the ethical and strategic landscape of modern hiring.
Vivid Examples
Neurodivergence Misread as Incompetence
Imagine SH, an autistic candidate with extensive experience in customer service. In live interactions, she is empathetic, attentive, and exceptionally skilled at resolving conflicts. On an AI-driven assessment, however, her responses are scored as “off-pattern” because she communicates differently than the system expects.
Result: SH is automatically rejected, despite being highly competent. The algorithm does not understand context, nuance, or her lived experience. This is testimonial injustice; her credibility is unjustly deflated by a system incapable of recognising her strengths.
Cultural Misalignment vs. Actual Performance
RJ applies for a managerial role in an international company. He has led diverse teams, introduced innovative processes, and consistently exceeded targets. The AI system flags certain expressions or phrasing in his assessment as “not fitting the company culture.”
Here, the system equates conformity with competence. RJ’s difference, his cultural style and leadership approach, is read as risk rather than potential. The company misses out on an effective leader due to an automated bias toward sameness.
Homogenised Teams and Innovation Stagnation
A mid-size tech company relies heavily on AI for hiring. Over three years, their new hires are almost identical in background, education, and communication style. Teams “get along” easily, but innovation slows. When a sudden market shift requires creative solutions, the company struggles to adapt.
This illustrates the diversity deficit: efficiency in hiring comes at the cost of organisational resilience. The AI filters for similarity, inadvertently enforcing a culture of conformity.
Key Takeaways
- AI can unintentionally reject highly competent, unconventional candidates.
- The system’s reliance on historical patterns reinforces bias, even without malicious intent.
- Organisations risk homogenisation, reduced creativity, and long-term stagnation.
- Human oversight is essential to counteract epistemic injustice and maintain organisational diversity.
The lesson is clear: AI may streamline hiring, but without human insight, it filters out differences and enforces conformity. Competence, creativity, and potential are lost, and organisations risk stagnation.
AI is a tool, not a judge. Collaboration, not blind reliance, is what creates real growth.
Conclusion:
“AI is not a replacement for human insight; it is a collaborator. Its solutions emerge from data, but its value emerges from collaboration. Those who understand how to work with it thrive, those who rely on it blindly risk creating a sterile, predictable digital ecosystem—a form of digital Darwinism.”
AI can assist decision-making. It can reduce administrative burden and highlight patterns that might otherwise go unnoticed.
However, it cannot replace the act of understanding another person. When systems incapable of understanding are given authority over opportunity, the risk is misrecognition, inefficiency, and the erosion of organisational diversity.
If we allow systems that cannot understand people to decide who is worth understanding, we are not removing bias; we are standardising exclusion.
AI isn’t the villain; it’s a collaborator. It amplifies your thinking, speeds up repetitive work, and sparks new directions, but only because a human guides it. Without that human lens, it’s just pattern recognition running on autopilot.
“Due to my complex support needs, AI tools have been used to aid with grammar, spelling, and punctuation, but the ideas and voice are my own.”
References and Resources
“I employ the term Digital Darwinism in the sense originally described by Evan I. Schwartz and later popularised by Brian Solis, not as a literal biological theory, but as a metaphor for how organisations that fail to adapt to rapid technological and cultural change risk obsolescence.”
Bogen, M., & Rieke, A. (2018). Help Wanted: An Examination of Hiring Algorithms, Equity, and Bias. Upturn.
https://www.upturn.org/reports/2018/hiring-algorithms/
Fricker, M. (2007). Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press.
O’Neil, C. (2016). Weapons of Math Destruction. Crown Publishing.
Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating Bias in Algorithmic Hiring: Evaluating Claims and Practices. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 469-481).
https://doi.org/10.1145/3351095.3372828
