By Damien Philippon, Founder, Zelros

Getting a job is already hard enough. And for the 1 in 4 Americans who live with a disability, securing one can be even harder. It shouldn’t be this way, though, because of the Americans with Disabilities Act (ADA), which became law in 1990. This civil rights law prohibits discrimination against people with disabilities in all areas of public life, including jobs, schools, transportation, and all places open to the general public.

So if this is the case, why did the U.S. Justice Department and the Equal Employment Opportunity Commission jointly issue guidance urging employers to exercise due diligence before using popular artificial intelligence (AI) tools to vet job applicants? Is biased AI actually the reason why disabled Americans can’t seem to land a job? The possibility that biased AI can unfairly discriminate against people with disabilities should be a warning to employers that blind reliance on these tools could violate civil rights laws.

This isn’t the first time that we’ve seen reports of biased AI, and the problem extends to vulnerable groups well beyond individuals with disabilities. Biased AI algorithms can discriminate against a whole host of people, including people of color, women, different age groups and more. Thankfully, there are solutions, and more companies are focusing on the ethical and responsible use of AI.

What is contributing to bias in AI?

Bias can occur as a result of incomplete data or misrepresentation in AI design and development, which can lead to unethical AI. If a team lacks diversity of viewpoints, thought processes and life experiences, inaccurate representation can go undetected, leading to unequal recommendations and other outputs.

When we think of bias in AI, what often comes to mind is preferences or exclusions in the training data. However, bias can also be introduced by how data is obtained, how algorithms are designed, and how AI outputs are interpreted.

In a world with more people working remotely than ever, if you collect survey data on company culture only from employees who physically work in the office, excluding those who work from home and other locations, the results will be skewed toward the in-office perspective.
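As a minimal sketch of that sampling problem (the scores and numbers below are invented purely for illustration), compare the average a survey reports when only in-office responses are collected with the average across the whole workforce:

```python
# Illustrative sketch only: hypothetical survey responses showing how sampling
# only in-office employees skews a "company culture" score.
from statistics import mean

# Hypothetical 1-5 satisfaction scores; all values are invented.
responses = [
    {"location": "office", "score": 4.2},
    {"location": "office", "score": 4.5},
    {"location": "office", "score": 4.0},
    {"location": "remote", "score": 2.8},
    {"location": "remote", "score": 3.1},
    {"location": "remote", "score": 2.5},
]

# Biased sample: only people physically in the office were surveyed.
office_only = [r["score"] for r in responses if r["location"] == "office"]

# Representative sample: everyone, regardless of where they work.
everyone = [r["score"] for r in responses]

print(f"Office-only average:  {mean(office_only):.2f}")  # looks rosy
print(f"Company-wide average: {mean(everyone):.2f}")     # tells a different story
```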

When designing a system, who is in the room matters. Different thought processes and life experiences combine to catch discrepancies that would go undetected if the engineers and programmers involved all shared the same background and experiences. This ties directly into how the solutions are interpreted: without differing views and expertise in place, corrupted data can and will go undetected.

Looking at an example from the workplace, corrupted data that makes it through to a deployed model could cause on-the-job accommodations, such as a quiet workstation for someone with post-traumatic stress disorder or more frequent breaks for a pregnant woman, to be interpreted as undesirable traits on an application. Even when two applicants have the same education and experience, the bias will lean toward the applicant without the disability. These differences shouldn’t be deal breakers when the AI reaches its conclusion; they are accommodations protected under law that enable employees to modify their working conditions and perform their jobs successfully.
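To make the mechanism concrete, here is a hypothetical sketch, not any vendor's actual model, of a résumé-screening score in which biased historical data has taught the system a negative weight for an accommodation request. The weights and applicants are invented; the point is that two identically qualified candidates end up ranked differently solely because of that flag:

```python
# Hypothetical screening model: weights and applicants are invented for illustration.
# A negative weight learned from biased historical data penalizes the mere
# presence of an accommodation request, which the ADA protects.
WEIGHTS = {"education": 1.0, "years_experience": 0.5, "requests_accommodation": -2.0}

def score(applicant: dict) -> float:
    """Linear score; higher is 'better' according to the (biased) model."""
    return sum(WEIGHTS[k] * applicant[k] for k in WEIGHTS)

applicant_a = {"education": 4, "years_experience": 6, "requests_accommodation": 0}
applicant_b = {"education": 4, "years_experience": 6, "requests_accommodation": 1}

print(score(applicant_a))  # 7.0
print(score(applicant_b))  # 5.0 -- identical qualifications, lower score
```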

What is a possible solution?

One action that can be taken is a bias bounty. Bias bounties reward users for identifying bias and bad data in AI systems, catching problems before they skew the analytics further or become civil rights violations.

Bring humans back into the equation. Yes, having Siri on our phones and Alexa in our homes is nice for productivity. But relying solely on AI and machine learning (ML) to decide who is the best applicant for a job, or what coverage is best for an individual, is a recipe for disaster if representation was lacking at the design stage. Furthermore, a human who can intervene in the system to catch red flags before they become outputs keeps employers out of controversial headlines, and gives those who need to support their families the ability to do so without fear that their disability will hold them back.
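One way to put a person back in the loop, sketched below with assumed field names and thresholds rather than any specific product's design, is to route decisions that touch a protected accommodation, or that the model is unsure about, to a human reviewer instead of acting on them automatically:

```python
# Hypothetical human-in-the-loop gate: thresholds and field names are assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    model_score: float          # higher = stronger recommendation to advance
    confidence: float           # model's own confidence, in [0, 1]
    requests_accommodation: bool

REVIEW_QUEUE: list[Decision] = []

def route(decision: Decision, confidence_floor: float = 0.85) -> str:
    """Send sensitive or low-confidence decisions to a human instead of auto-acting."""
    if decision.requests_accommodation or decision.confidence < confidence_floor:
        REVIEW_QUEUE.append(decision)
        return "human_review"
    return "auto_advance" if decision.model_score >= 0.5 else "auto_reject"

print(route(Decision("A-102", 0.4, 0.92, requests_accommodation=True)))  # human_review
```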

What’s another way to ensure your AI is ethical and responsible?

As technology continues to advance, especially in artificial intelligence and machine learning, there is a need for wider representation among the people building it. The more diverse the team of programmers, the less bias there will be in the system they develop. Eliminating bias in AI is a very hot topic right now, and the best way to address it is to have diverse backgrounds, different cultures and more women involved from the beginning.

Diversify, diversify, diversify; it cannot be stressed enough. Yes, having any human presence in the development of AI and ML solutions is better than none, but the best approach is a diverse and capable team. A study published by the World Economic Forum found that only 22% of AI professionals worldwide are female, compared to the 78% who are male.

When looking at race, Black women make up only 1.7% of the overall tech workforce, according to a 2021 report from AnitaB.org, and Black professionals as a whole account for only 7.4%. If companies want to remain relevant in their industries, they need to be not only embracing but celebrating diversity in their organizations, whether of sex, race, age or religion.

Creating technology that everyone will interact with, directly and indirectly, should not be left up to a select few. These systems are making recommendations and decisions on our behalf, and we, as business and technology leaders, have a responsibility to society to make it a better place, not to amplify the cultural and societal issues we already have.

At Zelros, we provide AI-driven product recommendations for insurance, and we ensure that a responsible AI component is built into our platform. Zelros Responsible AI helps monitor, detect, alert on and correct unintentional biases that occur in the algorithms. We encourage all organizations to build and sustain a Responsible AI governance program to track and report on how they use AI and ML, before there is a bigger unintended negative impact on society.
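As a rough illustration of what such monitoring can look like (this is a generic sketch of a selection-rate check, not Zelros' actual implementation, and the group labels and numbers are invented), a governance program might periodically compare selection rates across groups and raise an alert when one group's rate falls below 80% of the highest group's, the "four-fifths rule" the EEOC uses as a screening heuristic for disparate impact:

```python
# Generic bias-monitoring sketch; group labels and data are invented for illustration.
# Flags possible disparate impact when a group's selection rate is below 80% of the
# highest group's rate (the "four-fifths rule" as a rough screening heuristic).

def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def disparate_impact_alerts(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8):
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return [group for group, rate in rates.items() if rate / best < threshold]

# Hypothetical monthly numbers from an automated screening pipeline.
monthly = {
    "disclosed_disability": (12, 100),    # 12% selected
    "no_disclosed_disability": (30, 100), # 30% selected
}

alerts = disparate_impact_alerts(monthly)
if alerts:
    print(f"ALERT: possible disparate impact against: {alerts}")  # triggers here
```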

Together, we can make a world that is not only full of technological wonders, but one whose wonders are accessible and inclusive of everyone.

About the Author

Damien Philippon is a founder of Zelros. He has over 20 years of experience in IT and digital software across several countries around the globe. He spent 10 years building a strong technical IT background at a leading systems integrator, leading complex IT programs such as CRM, ERP and outsourcing initiatives. He then co-founded a management consulting company, where he spent six years learning to be an entrepreneur and learning, above all, that tech is 90 percent about people. He co-founded Zelros six years ago because he believes that artificial intelligence will help turn insurance into a more customer-centric industry. Thinking about the challenges our planet faces, including climate change and pandemics, he believes insurance is a key industry for absorbing these shocks and protecting our lives. He is based in Montreal, Quebec.

Damien Philippon can be reached online at https://www.zelros.com/contact-2/ and at the company website https://www.zelros.com/

Source: www.cyberdefensemagazine.com