Public administration, like many sectors, struggles with bias in hiring. While AI promises more objective decisions, it’s not infallible—and humans often distrust or blindly follow it. The new CUBE project led by Filipa Almeida proposes a way to strike the right balance: building trust in AI systems and embedding nudges that promote critical thinking. The goal? Fairer, smarter, and more inclusive hiring in government institutions.

 

The Hiring Bias Problem: Human Judgment Under Pressure

Government institutions are tasked with representing and serving diverse populations. Despite intentions to be impartial, decision-makers often rely, sometimes unconsciously, on stereotypes, penalizing candidates on the basis of gender, race, or other irrelevant factors. In Portugal, such patterns are well documented and continue to hinder progress toward workplace equity. Nor is this a uniquely Portuguese problem: technology giants have faced similar issues. In 2018, Amazon scrapped its AI-powered recruitment tool after discovering that it systematically downgraded CVs that included the word "women's". The AI had learned from historical hiring data and reproduced the gender biases of the past. This underscores a key truth: even the most advanced AI is only as fair as the data it learns from.

Artificial Intelligence (AI) offers a compelling promise: it can analyze candidate data objectively, free from many of the biases that plague human judgment. But the use of AI in hiring comes with its own psychological hurdles. Research shows that public sector managers often distrust AI, particularly in subjective, people-centered tasks like hiring. This phenomenon, known as "algorithm aversion," leads to the rejection of helpful AI advice, even when it clearly improves decision quality.

Paradoxically, when people do trust AI, they may rely on it too much. Once perceived as trustworthy, AI can lead people to accept its advice without thinking, even when it’s wrong. This overreliance is no less dangerous than aversion. It undermines human oversight and can allow subtle algorithmic biases to slip through unchecked.

The new research project Debias Your Decisions with AI Insights: Balancing trust in Artificial Intelligence with Critical Thinking for Optimal Decision-making and Discrimination Eradication, led by Filipa Almeida at CATÓLICA-LISBON's CUBE and funded under the EU's Recovery and Resilience Plan (PRR) through FCT's "Artificial Intelligence, Data Science and Cybersecurity" call, seeks to redesign how AI tools are used in public sector hiring. This interdisciplinary project combines expertise from Public Administration, Organizational Behaviour, Human-Computer Interaction, and Social Psychology to study the psychology of AI reliance: what makes people trust AI, when they trust it too much, and how to foster balanced use.

“The goal of this project is to explore how artificial intelligence can help reduce discrimination in public sector recruitment. We approach this challenge in two main ways: first, by conducting experimental studies to identify effective strategies; and second, by developing an AI-based tool grounded in those findings. This tool will then be refined through scenario-based experiments and tested in more realistic settings.” - Filipa Almeida

The aim is to understand how to cultivate appropriate reliance on AI in public hiring decisions: not too little, not too much. The researchers will examine how trust in AI can be influenced, and how overreliance can be prevented through the strategic use of behavioral nudges.

Key Innovations:

Certification Labels: The project builds on the finding that certification labels act like quality seals, signaling that an AI tool has been audited and meets standards for fairness and transparency. Almeida's previous work shows that such labels increase users' willingness to rely on AI, particularly in domains where they are normally skeptical. This is particularly timely, as the European Commission recently proposed a framework for developing trustworthy AI systems. However, while such frameworks set important standards, they do not clearly address how to communicate trustworthiness to users. And while increasing trust is crucial, it also carries the risk of encouraging passive acceptance, which is why the project goes a step further.

Critical Thinking Nudges: But trust alone isn’t enough. To avoid overreliance, the team proposes embedding nudges into the AI experience—simple prompts that encourage users to pause, reflect, and critically evaluate AI suggestions before accepting them.

This dual approach, combining trust-building and cognitive activation, is designed to encourage appropriate reliance on AI, ensuring that public service managers use these tools wisely and effectively.

To test their model, the researchers conduct controlled experiments simulating hiring decisions in government settings. Participants are presented with hiring scenarios involving both human and AI-generated advice, some of which is intentionally flawed. These simulations help the team explore:

  • When users defer to AI versus human judgment
  • How certification labels shift trust and behavior
  • Whether nudges increase scrutiny of AI advice

 

From Lab to Government: Building an AI Hiring Advisor Tool

The project proposes to develop a digital tool that public service institutions can use to support hiring decisions. Informed by the experimental data, this tool will incorporate certified AI algorithms and built-in nudges to help hiring managers make better, fairer decisions. Large, high-powered samples and pre-registration of the experiments ensure scientific rigor and practical relevance. The AI hiring advisor tool is designed to be scalable, transparent, and aligned with emerging European AI governance frameworks.

Implications for the Public Sector

At stake is not just individual hiring decisions, but the future of government institutions.

  • For individuals, the project promises a more inclusive system where candidates are judged on merit, not stereotypes.
  • For organizations, better hiring means stronger teams, lower turnover, and fewer discrimination claims.
  • For society, equitable hiring practices can restore public trust and make institutions more representative of the communities they serve.

 

By joining AI with human-centered design and behavioral science, this project offers a roadmap for governments navigating the ethical use of technology in high-stakes decisions.

AI will not eliminate bias on its own. But with thoughtful design, clear communication, and a deep understanding of human psychology, it can help us make better decisions. The future of equitable hiring lies not in choosing between AI and human judgment, but in combining them thoughtfully.