When I was a kid, I always used to tell my sister that our grandma was biased towards her, and I hated that. The same applies at work: we don't like it when a manager favors certain individuals (especially when that favorite isn't you :)). That's just human nature. But what about the biases exhibited by powerful frontier AI models? I was recently listening to the Hard Fork podcast episode on the 'AI Action Plan' and was intrigued by the discussion around how we tackle the problem of bias in AI models. For instance, a model low-balling salary expectations for women and minorities.

If we accept that these AI models will keep exhibiting bias in some form or another, can we trust them to make decisions for us? Yet that is exactly what AI agents do (or at least what they are positioned to do). To carry out some automated task, an agent consults a (potentially biased) model and, based on the response, makes a decision and takes action on your behalf. Does that make you a biased person?

For instance, would you trust an AI agent to negotiate your salary for you? Or suppose you move to a new city and ask an AI agent to find you a doctor and make an appointment: would it consistently prioritize doctors of a certain race or gender?

One argument could be that biases are inherent to human nature, so why shouldn't they be reflected in AI models? But in my opinion, as humans we try to be conscious of our biases and overcome them when making important decisions. How do we ensure AI models do the same?