
Neutral – Detecting Subconscious Bias in Recruitment
Neutral was a hackathon project aimed at addressing subconscious bias in recruitment processes. Recruiters — like all of us — can carry unintentional biases that seep into decision-making. The idea was to build a tool that could objectively assess hiring decisions and flag potential bias based on the language used in CVs and rejection/acceptance letters.
Users would upload a candidate's CV alongside a decision document (e.g., rejection letter). Our model, built using basic ML pipelines and Streamlit for the interface, would parse the content, extract relevant features, and then offer a judgment — either supporting or questioning the fairness of the decision.
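Under the hood there was nothing exotic: a standard text-classification pipeline did the heavy lifting. Here is a minimal sketch of that idea, assuming a scikit-learn stack and a small labelled set of CV/decision pairs (the names and labels are illustrative, not the actual hackathon code):

```python
# Illustrative sketch only. Assumes scikit-learn and a labelled dataset of
# (CV + decision letter text, "fair"/"biased") examples; not the original code.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline


def build_bias_classifier() -> Pipeline:
    """TF-IDF features over the combined documents, fed to logistic regression."""
    return Pipeline([
        ("tfidf", TfidfVectorizer(stop_words="english", ngram_range=(1, 2))),
        ("clf", LogisticRegression(max_iter=1000)),
    ])


def assess_decision(model: Pipeline, cv_text: str, decision_text: str) -> str:
    """Concatenate CV and decision letter, then return the fitted model's judgment."""
    combined = cv_text + "\n\n" + decision_text
    label = model.predict([combined])[0]
    return "decision looks fair" if label == "fair" else "possible bias flagged"
```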
Key Learnings & Honest Reflection:
- The project aligned with UN Sustainable Development Goals related to gender equality, decent work, and reduced inequalities.
- Developed a basic ML model to analyze hiring decisions from textual inputs.
- Built a clean and functional Streamlit frontend for quick deployment and testing (a rough sketch of the app follows this list).
- Placed 6th out of 57 teams in the hackathon, despite realizing along the way that the approach was fundamentally flawed.
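The Streamlit side was equally minimal: two file uploads and a single judgment printed back. A rough sketch of that flow, assuming plain-text uploads and the classifier sketched above (the `assess_decision` call and the trained model are placeholders, not the exact app):

```python
# Rough sketch of the upload-and-judge flow. Assumes plain .txt uploads and a
# pre-trained pipeline like the one above; names are illustrative.
import streamlit as st

st.title("Neutral: is this hiring decision fair?")

cv_file = st.file_uploader("Upload the candidate's CV", type=["txt"])
decision_file = st.file_uploader("Upload the decision letter", type=["txt"])

if cv_file and decision_file:
    cv_text = cv_file.read().decode("utf-8", errors="ignore")
    decision_text = decision_file.read().decode("utf-8", errors="ignore")
    # verdict = assess_decision(trained_model, cv_text, decision_text)
    # st.write(verdict)
    st.write("Both documents received; the model's judgment would appear here.")
```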
While the idea sounded noble, in execution it was overly simplistic and rested on too many assumptions. Subconscious bias isn't something ML can easily quantify, and attempting to "approve" or "reject" a human decision from a few text cues quickly became... well, dumb. But hey, we learned a lot.
I reflected more deeply on this experience in an article titled “Exploring Bias in Recruitment, an Attempt to Solve it – and Why It Was the Worst Idea Ever.” If you’re curious about the pitfalls of applying AI to complex human problems, it’s worth a read.
Technologies Used:
- Python
- Streamlit
- Basic ML classification pipelines