Mapping a Route to Equitable AI: Why Anthropic Design Matters

Image by Gerd Altmann from Pixabay

Whether you work in the tech industry, manufacturing, retail or farming, artificial intelligence is on hand to make a difference to your working life. From speeding up quality checks on the factory floor to automating time-consuming business processes, AI offers solutions to a wide range of commercial challenges.

Alan Turing, widely considered the father of artificial intelligence, once said, "Those ideas that are most worthwhile and challenging are often those that at first make us uncomfortable" - a reminder to ask difficult questions and confront hard truths about AI.

With Turing's words in mind, we must ask whether this technology has limitations and what we, as business owners, need to take into consideration. Currently, one of the largest question marks over AI concerns its approach to equality, accountability and inclusivity.

Of course, gender inequality is not just a social issue but also a business imperative, particularly as we rely more and more on artificial intelligence. Here are some of the reasons why it is essential to pay close attention to how AI systems are integrated into our processes, and why anthropic design matters.

Building trustworthy and fair AI

Artificial intelligence systems are only as equitable and unbiased as the data used to train them. As AI becomes increasingly ubiquitous, 'anthropic design' - creating AI with equity, accountability, fairness and inclusiveness in mind - is therefore essential.

If AI systems reflect and amplify the prejudices of their developers (whether inadvertently or deliberately), they will alienate huge segments of the population, reduce the accuracy and efficacy of tools, and invite legal and public scrutiny. Mark Suzman, CEO and board member of the Bill & Melinda Gates Foundation, says of AI, "Potential gains will only be realized if the technology is implemented with the beneficiaries participating in its development".

Companies that fail to take an anthropic approach to adopting AI risk perpetuating discrimination, and they limit their growth potential if customers and potential employees do not see them as forward-thinking.

This is why business leaders must make developing equitable, trustworthy AI a strategic priority. Putting anthropic design into practice mitigates bias, avoids unfair outcomes, and builds AI that serves the whole of humanity.

Addressing the roots of bias in AI

AI systems are trained on data collected and labeled by humans. If the teams building AI do not reflect the diversity of the populations they serve, the AI cannot account for the range of human experiences and will inevitably reflect and amplify prejudice.

Furthermore, Justin Aldridge, Technical Director at Artemis Marketing, a leading digital marketing and SEO agency, notes that "AI chatbots suffer from hallucinations, which Wikipedia describes as 'a confident response by an AI that does not seem to be justified by its training data'."

He adds, "Basically, it makes things up! We've seen this quite often during testing. The AI will create fake quotes, fake businesses and link to resources that don't exist." With this penchant for going off script, an AI could further amplify any inbuilt preconceptions.

Business leaders must recognize that AI systems will reflect the prejudices and blind spots of their development process. Achieving equitable AI will require including and amplifying marginalized voices that are often left out of the design process; as of 2022, for example, women made up just 22% of the tech workforce.

Promoting an inclusive future for AI

While AI's initial behavior is defined by humans, its patterns can be retrained, which means it can be changed and improved. However, mitigating bias is once again limited by the diversity of practitioners: if an AI is retrained by someone from the same demographic as its original creator, any biases unintentionally built in may once more be overlooked.

Promoting diversity and inclusion in tech education and recruitment is key to developing AI that is equitable. However, there are substantial challenges to overcome, including unfair hiring practices, limited access to education, and discouraging workplaces. Strategies like targeted recruiting, skills training, and mentorship are needed to open up opportunities in tech and support people from underrepresented groups to thrive.

Diverse teams are better equipped to consider how tools and systems might affect the experiences of marginalized groups, and more capable of designing AI that does not discriminate unfairly.

In some cases, AI can also be used to help identify and mitigate its own biases, with automated techniques that detect unfairness in a model's predictions. But human judgment is still required: AI users are already learning to take a more scientific approach to the technology, working out how to prevent technical advances from stifling humans' emotional and personal input, and people must still determine appropriate corrections and evaluate their effectiveness.
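
For instance, a simple automated check might compare the rate of positive predictions across demographic groups and flag large gaps for human review. The short Python sketch below illustrates the idea; the function name, data and 0.1 tolerance are illustrative assumptions rather than a standard API.

    # Minimal sketch of an automated fairness check: compare
    # positive-prediction rates across demographic groups.
    def demographic_parity_gap(predictions, groups):
        """Largest difference in positive-prediction rates between groups."""
        counts = {}  # group -> (positives, total)
        for pred, group in zip(predictions, groups):
            pos, total = counts.get(group, (0, 0))
            counts[group] = (pos + (1 if pred == 1 else 0), total + 1)
        rates = [pos / total for pos, total in counts.values()]
        return max(rates) - min(rates)

    # Illustrative data; the 0.1 tolerance is an assumption that a real
    # team would agree with its reviewers and legal advisers.
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    if demographic_parity_gap(preds, groups) > 0.1:
        print("Fairness gap detected - escalate for human review.")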

Equitable AI for hiring

Using AI for hiring provides a salient example of how the technology can perpetuate discrimination if it is not designed with an anthropic approach. A well-known case was Amazon's experimental AI recruiting tool, which showed a bias against women, was eventually scrapped, and is still cited as a cautionary tale. AI hiring tools trained on biased data or built without considering fairness will replicate the unequal hiring patterns and prejudices of the past. Companies developing AI for hiring must prioritize inclusive design and consider how to overcome historical barriers and biases.

For example, if an AI hiring system is trained primarily on data from male candidates and employees, it may learn a preference for supposedly masculine traits and unfairly downgrade female applicants. The system might also amplify prejudices from unfair historical practices by learning that certain groups received more job offers in the past. Without adjustments to address this, the new system may replicate old patterns of discrimination.
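
One widely used screen for exactly this problem is the "four-fifths rule" from US employment guidelines: the selection rate for any group should be at least 80% of the highest group's rate. Below is a minimal Python sketch of such a check; the data and group labels are invented for illustration.

    # Minimal sketch of a four-fifths (80%) rule check on hiring outcomes.
    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, hired) pairs -> rate per group."""
        counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
        for group, hired in decisions:
            counts[group][0] += int(hired)
            counts[group][1] += 1
        return {g: hired / total for g, (hired, total) in counts.items()}

    # Invented example data.
    decisions = [("men", True), ("men", True), ("men", False),
                 ("women", True), ("women", False), ("women", False)]
    rates = selection_rates(decisions)
    benchmark = max(rates.values())
    for group, rate in rates.items():
        if rate < 0.8 * benchmark:
            print(f"{group}: selection rate {rate:.0%} fails the four-fifths rule")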

However, with inclusive design, AI can help improve hiring fairness and accessibility. It can be used to scale inclusive practices, expand outreach to underrepresented groups, and help evaluate candidates on skills alone by stripping away personal details.
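
As a sketch of what "stripping away personal details" could look like in practice, the snippet below drops identifying fields from a candidate record before scoring. The field list is an assumption, and real de-identification needs far more care - names and ages can hide in free-text fields, for example.

    # Minimal sketch of blind screening: remove identifying fields
    # so the scoring step sees skills and experience only.
    PERSONAL_FIELDS = {"name", "gender", "age", "photo", "address"}

    def anonymize(candidate):
        return {k: v for k, v in candidate.items() if k not in PERSONAL_FIELDS}

    candidate = {"name": "A. Example", "gender": "F", "age": 34,
                 "skills": ["python", "sql"], "years_experience": 8}
    print(anonymize(candidate))  # {'skills': ['python', 'sql'], 'years_experience': 8}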

One of the biggest challenges with AI's design lies in ensuring its accountability and inclusivity. Continuous monitoring through randomized audits of an AI hiring system's decisions will help identify new sources of unfairness as they emerge.
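
A randomized audit can be as simple as routing a fixed fraction of the system's decisions to independent human reviewers. The Python sketch below illustrates the idea; the 5% sampling rate is an assumption to be set by policy, not a recommendation.

    # Minimal sketch of a randomized audit sampler.
    import random

    def sample_for_audit(decision_ids, fraction=0.05, seed=None):
        """Pick a random subset of decisions for independent human review."""
        rng = random.Random(seed)
        k = max(1, int(len(decision_ids) * fraction))
        return rng.sample(decision_ids, k)

    audit_batch = sample_for_audit(list(range(1000)), fraction=0.05, seed=42)
    print(f"Escalating {len(audit_batch)} decisions to human reviewers")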

AI can sometimes generate information or make associations that are not grounded in its training data, so misleading or incorrect outputs are always possible, and systems left unchecked will reflect and amplify bias, potentially alienating huge segments of the population. That is why companies developing these technologies must make addressing unfairness and building trustworthy AI an utmost priority.
