Artificial intelligence (AI) offers numerous benefits, but robust training data is essential to developing quality AI systems. Our underwater database supports recognition systems for maritime search and rescue operations, our dental database helps train new dentists to diagnose dental diseases accurately, and our facial recognition database assists law enforcement in preventing human trafficking and recovering its victims. AI has the power to save lives and transform our world.
But such power also has its limits. Too often, we use AI without questioning whether it is impartial or accurate. Through my work, I’ve learned that AI’s limitations are real, and that they must be monitored, corrected, and actively changed.
Most of today’s facial recognition software is trained by examining thousands of thermal or RGB images from security cameras. As my team set out to create a database to recognize criminals and victims of human trafficking, I discovered that the existing databases contained primarily men and people of colour.
Facial recognition software used for security and threat assessment identifies people of colour as risks more often than any other group. This is because, over time, AI software generalizes the traits of the people in its training database and applies them to new images; when that database lacks diversity, so do its generalizations. As a result, men of colour are more likely than anyone else to be singled out and interrogated.
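To see how that generalization goes wrong, consider a toy sketch, with entirely synthetic numbers and a deliberately naive model rather than any real system: a screener that estimates risk straight from a skewed database ends up treating group membership itself as the risk signal.

```python
# Toy sketch (synthetic data, not any real system): a "risk" database that
# skews toward one demographic group teaches a naive model to treat group
# membership itself as a risk signal.
from collections import Counter

# Each record: (demographic_group, flagged_as_risk). The labeled "risks"
# are 90% group A -- the database lacks diversity, as described above.
train = [("A", 1)] * 90 + [("B", 1)] * 10          # labeled risks
train += [("A", 0)] * 100 + [("B", 0)] * 300        # general population

# "Training": estimate P(risk | group) directly from the database.
counts = Counter((group, label) for group, label in train)

def p_risk(group):
    flagged = counts[(group, 1)]
    total = flagged + counts[(group, 0)]
    return flagged / total

for group in ("A", "B"):
    print(f"group {group}: P(risk) = {p_risk(group):.2f}")
# group A: P(risk) = 0.47   -> group A is flagged over ten times as often
# group B: P(risk) = 0.03      as group B, purely because of who happened
#                              to be in the database.
```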
AI software also commonly shows bias when screening resumes. The software learns what to look for by examining a database of past hires. If that database is not diverse, future hires will look exactly like those hired before. In many cases, women and job candidates of colour are eliminated immediately because the AI was trained on a database composed mainly of candidates from the same cohort of brand-name institutions, most of whom happened to be male.
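Here is a minimal sketch of that screening loop, with invented attributes and a made-up frequency-based score; the point is only that a model rewarding resemblance to past hires never reads anyone who doesn’t resemble them.

```python
# Toy sketch of the resume-screening failure mode described above. All
# attributes and the scoring rule are invented for illustration.
from collections import Counter

past_hires = [
    {"school": "brand_name_u", "gender": "M"},
    {"school": "brand_name_u", "gender": "M"},
    {"school": "brand_name_u", "gender": "M"},
    {"school": "state_u",      "gender": "M"},
]

# "Training": count how common each attribute value is among past hires,
# then score candidates by how closely they match those frequencies.
freq = Counter()
for hire in past_hires:
    for attr, value in hire.items():
        freq[(attr, value)] += 1

def score(candidate):
    n = len(past_hires)
    return sum(freq[(a, v)] / n for a, v in candidate.items())

candidates = [
    {"school": "brand_name_u", "gender": "M"},   # resembles past hires
    {"school": "state_u",      "gender": "F"},   # equally qualified on paper
]
for c in candidates:
    print(c, "->", round(score(c), 2))
# {'school': 'brand_name_u', 'gender': 'M'} -> 1.75
# {'school': 'state_u', 'gender': 'F'}      -> 0.25
# The second candidate is screened out before anyone reads her resume.
```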
Human resources departments regularly use AI in employee performance reviews. In my field, academia, women are passed over for promotion time after time. People claim these decisions are unbiased because they are made by impartial AI technology. But if a woman has never been department chair before, the software will not recognize a woman as a potentially successful candidate, and the departmental culture that prevented a woman from ever holding that role only compounds the bias.
I see people placing blind faith in AI technology, but it is critical that the people who use this software understand its limitations. If no one can explain how an AI makes its decisions, and the user doesn’t understand the context in which its data were collected and used, AI shouldn’t be used to make decisions that affect people’s health or livelihoods. AI should do no harm.
I learned at a young age to take software’s recommendations with a grain of salt. My best friend and I took a high school career assessment program together. Since we both enjoyed math and science, we expected similar career suggestions. The software told him he could be a professor or an engineer; it told me to be a chef or a cosmetics saleswoman. When I asked my guidance counsellor about the results, I was told, “Computers don’t make mistakes.”
The programmer who coded that career software had clearly embedded certain careers for women and others for men: an example of software with built-in bias. Today’s AI technology is more complicated, but it is still built and trained by people. If AI is trained with biased data, it delivers biased decisions.
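A hypothetical reconstruction of that kind of rule (the original program isn’t available, so this is purely illustrative) shows how little code it takes to hard-wire a stereotype.

```python
# Hypothetical reconstruction of the rule described above: here the bias
# isn't learned from data, it's written directly into the program.
def recommend_career(likes_math: bool, likes_science: bool, gender: str) -> list[str]:
    if likes_math and likes_science:
        if gender == "M":
            return ["professor", "engineer"]
        # Same aptitudes, different answer.
        return ["chef", "cosmetics salesperson"]
    return ["undetermined"]

print(recommend_career(True, True, "M"))  # ['professor', 'engineer']
print(recommend_career(True, True, "F"))  # ['chef', 'cosmetics salesperson']
```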
AI acquires its bias from us, through the data and judgments society produces. I work to change the bias people encounter in AI software, but it has become my lifelong mission to work even harder to transform the bias in the people around me.
I can’t count the times I’ve been told, “But you don’t look like an engineer.” To that, I say, “What does an engineer look like?” I may not look like it, but I was the first woman to work my way up from visiting to tenured professor in the Electrical and Computer Engineering Department at Tufts University.
To change bias in AI software, we need more diverse data in the databases used to train AI, along with standards akin to an “FDA approval” for AI. Increasing the number of women and people of colour in leadership positions in science, technology, engineering and mathematics, and holding current leaders responsible for the implications of their technology, are two steps in the right direction. Technology such as AI is changing our world, but we must ensure we’re changing it for the better.
Dr. Karen Panetta is a Fellow of the National Academy of Inventors, Dean of Graduate Education for the School of Engineering at Tufts University, and Nerd Girls founder.
The opinions expressed by our columnists and contributors are theirs alone and do not inherently or expressly reflect the views of our publication.