Why a Little Imperfection is Just What We Need
AI’s output alone isn’t what really matters; the real value lies in how we humans choose to use it. AI can churn out answers, recommendations, and solutions, but those outputs are only as good as the judgment we bring to them. Tim Harford describes an experiment in which recruiters used AI to help screen resumes. Some recruiters had a “good AI” tool that delivered highly accurate, spot-on recommendations. Others had a “bad AI” tool, a less accurate, imperfect version. Surprisingly, those with “bad AI” made better hiring decisions because they engaged more deeply with each resume, actively weighing the AI’s input against their own judgment. Those with “good AI”? They essentially went on autopilot, blindly trusting the tool without thinking critically.
AI is often designed with the ideal, rational user in mind, but real life isn’t so straightforward. When we get too comfortable, we start rubber-stamping AI’s decisions instead of using our own judgment. That’s exactly what happened with the recruiters. When the AI seemed highly accurate, they stopped questioning it, trusting its recommendations instead of staying engaged. But the ones using “bad AI” knew it was unreliable, so they remained more alert, catching mistakes and making better decisions.
This is why design thinking, a people-centered approach to creating products, is vital in AI development. Rather than creating an AI that works in perfect conditions for ideal users, design thinking emphasizes building AI that aligns with real human behavior. AI should be designed to solve specific problems for specific users. And sometimes, that means introducing just enough imperfection to keep us involved in the process. As the resume-screening study showed, imperfect AI encouraged critical thinking, which is exactly what you want when making decisions as consequential as hiring.
Sometimes, we need AI to be a little less perfect so we stay actively involved. Perfect AI can lull us into a “tick and flick” mentality, where we mindlessly approve whatever the algorithm spits out. Imperfect AI keeps us on our toes, prompting us to evaluate each suggestion and apply our own judgment rather than slipping into autopilot.
In a world of fast-evolving AI, let’s appreciate the benefits of a little imperfection. Building AI that doesn’t just give perfect answers but also nudges us to think critically and stay engaged can help us avoid the traps of over-reliance. “Bad AI” might be exactly what we need to keep our minds sharp, our decisions thoughtful, and our reliance on technology balanced.