#4/6. The Alignment Problem: When Smart Systems Go Off Track
AI may optimize for the wrong thing if we don’t align it with human values. In global health, misalignment risks reinforcing inequity and exclusion.
In global health, we often assume that smarter tools will lead to better outcomes. But what if intelligence isn’t enough? The AI alignment problem, originally raised by researchers concerned with advanced AI systems, asks a simple but crucial question: How do we ensure AI does what we want it to do, not just what we tell it to do? If the goals we give an AI system aren’t carefully designed to reflect real-world human values, it may optimize in ways that are technically correct but ethically or socially disastrous.
That’s not theoretical. In global health, misalignment is already happening.
· An algorithm designed to “maximize efficiency” might divert resources from hard-to-reach communities or chronic diseases because they’re deemed less cost-effective (see the sketch after this list).
· A chatbot offering HIV advice might provide technically accurate guidance that endangers a user in a criminalized setting.
· A predictive model trained in Europe might produce biased results when deployed in Nairobi, Manila, or rural Bangladesh.
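To make the first example above concrete, here is a minimal sketch of how a pure cost-effectiveness objective splits a fixed budget. The districts, costs, case counts, and the “coverage floor” rule are all invented for illustration; this is not a real allocation method, just a toy showing the behavior of the objective.

```python
# Hypothetical illustration: how "maximize cases averted per dollar" can shut
# out hard-to-reach districts entirely. All names and numbers are invented.

from dataclasses import dataclass

@dataclass
class District:
    name: str
    cost_per_case: float   # cost to avert one case; higher where access is hard
    unmet_need: int        # cases that could be averted with full coverage

DISTRICTS = [
    District("peri-urban A", cost_per_case=40.0,  unmet_need=5_000),
    District("peri-urban B", cost_per_case=55.0,  unmet_need=4_000),
    District("remote C",     cost_per_case=220.0, unmet_need=3_000),
    District("remote D",     cost_per_case=300.0, unmet_need=2_500),
]

def allocate(budget: float, min_coverage: float = 0.0) -> dict[str, float]:
    """Greedy 'efficiency' allocation, optionally with a per-district coverage floor.

    min_coverage is an illustrative equity rule (an assumption, not a standard
    method): every district first gets funding for that share of its unmet need.
    """
    spend = {d.name: 0.0 for d in DISTRICTS}
    # Equity floor first, if any.
    for d in DISTRICTS:
        floor_cost = min(budget, d.cost_per_case * d.unmet_need * min_coverage)
        spend[d.name] += floor_cost
        budget -= floor_cost
    # Then pour whatever is left into the cheapest cases first.
    for d in sorted(DISTRICTS, key=lambda d: d.cost_per_case):
        remaining_cost = max(d.cost_per_case * d.unmet_need - spend[d.name], 0.0)
        extra = min(budget, remaining_cost)
        spend[d.name] += extra
        budget -= extra
    return spend

print("Efficiency only: ", allocate(budget=400_000))            # remote C and D get nothing
print("With a 20% floor:", allocate(budget=400_000, min_coverage=0.2))
```

The particular rule doesn’t matter; the point is that whatever “equity” means in a given context has to be written into the objective, because the optimizer will not supply it on its own.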
And if those systems are built by outside actors, with little local involvement or oversight? Then the problem isn’t just misalignment; it shades into digital colonialism. The takeaway is clear: AI will not automatically serve global health goals unless we design it to.

Alignment requires more than technical tuning. It demands local co-design, governance capacity, transparent evaluation, and maybe even incentives that prioritize equity over efficiency (I’d love your thoughts on what those incentives might be). I worry that if we get this wrong, the very tools meant to expand access and improve care could entrench harm. But if we get it right, AI can be an extraordinary force for justice.