This weekend, I tried out DALL·E Mini, hosted on Hugging Face. It's an AI model that generates images from any text prompt. Every image it generated for "expert," "data scientist," and "computer scientist" showed some distorted version of a white male.
This highlights one of the many issues with AI systems: they replicate and amplify the biases embedded in their training datasets. We have a lot of work ahead of us to correct the biases embedded in these systems, but we also need to change our society's stereotypical notions of who counts as an expert in AI and other technical fields.