AI ethics

An ethical artificial intelligence system is one that is explainable, inclusive, responsible, transparent, and secure.

What is ethical AI?

The landscape of artificial intelligence systems and tools is growing by the day, and the potential for AI to augment human intelligence is incredible.

“The real problem is not whether machines think but whether men do.”

     – B.F. Skinner

It’s critical for companies that use artificial intelligence and machine learning to develop ethical standards for their use across the business, user journey, and content marketing lifecycle.

The ethical issues that come with the use of AI can be addressed with a combination of AI regulations, ethical AI principles, and responsible AI philosophies.

An ethical AI system is explainable, inclusive, responsible, transparent, and secure.

Explainable AI (XAI)

With only 9% of Americans thinking that computers with artificial intelligence would do more good than harm to society, it seems critical to prioritize human understanding of the impact, accuracy, outcomes, and biases of AI models.

Explainable AI is an approach to building AI systems that helps users understand, and have confidence in, the results of machine learning algorithms. Beyond building trust among those using AI models, explainable AI can also help companies and teams craft a responsible approach to developing and integrating AI across their organizations.

Being able to thoroughly explain how a tool works and how it achieves specific results is best practice for any technology and is especially critical with cutting-edge AI systems. Building a level of explainability into the deployment of the technology can also ensure that the operation of the technology is in line with company policy, external regulation, and brand values.
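
To make that explainability concrete, one widely used post-hoc technique is permutation feature importance: shuffle each input feature in turn and measure how much the model's accuracy drops. The sketch below is a minimal Python example using scikit-learn on synthetic data; the setup is illustrative, not a prescribed toolchain.

```python
# Minimal sketch of permutation feature importance, one common
# post-hoc explainability technique. The data here is synthetic.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and record the drop in test accuracy; larger
# drops mean the model leans more heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

Surfacing which inputs actually drive a model's decisions gives stakeholders a plain-language starting point for questions about impact, accuracy, and bias.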

Inclusive and bias free

Inclusivity in AI systems means considering all humans equally; taking this approach from the beginning can help prevent the unintentional exclusion of certain groups.

“Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters — from who designs it to who sits on the company boards and which ethical perspectives are included.”

     – Kate Crawford

Avoiding bias in AI systems is a critical pursuit, but it is not as easy as it may first appear. In 1996, Batya Friedman and Helen Nissenbaum identified three categories of bias in computer systems, and though these categories were introduced nearly 30 years ago, they remain as relevant as ever.

  1. Pre-existing bias, which has its roots in the practices and attitudes of society and individuals within that society. Pre-existing biases can, of course, be introduced deliberately but are often included unconsciously.
     
     This same concept is sometimes called ‘data bias’ when it comes to AI systems; AI systems are powered by data, and biases in those data sets will be reflected in the operation of the system itself. These biases have most often surfaced around race and gender.
     
     Voice recognition tools from multiple companies, including Apple and IBM, have been shown to have higher error rates when processing Black voices, and in a textbook example of sample selection bias, Amazon famously scrapped its AI hiring tool because the algorithm favored men (a simple version of this kind of error-rate check is sketched after this list).
     
     In both situations, the issue lies with inherent bias in the historical data used. Amazon trained its AI recruiting tool on 10 years of internal data, and because most of the successful candidates in that data set were men, the algorithm learned to penalize resumes from women.

  2. Technical bias is often a result of the software and hardware used to design the algorithm (such as a search engine pushing lower-ranked results off-screen because there is simply no room for them). It can also arise when designers attempt to quantify concepts that are deeply qualitative to most humans; an algorithm meant to score ‘attractiveness’ is one example, because placing a specific value on such a subjective quality is bound to create problems.

  3. Emergent bias, which develops through the interaction between a technology and its users. The most famous example in recent years is Microsoft’s chatbot Tay, which was meant to learn by interacting with other users of a specific platform. Unfortunately, the platform selected was Twitter (now known as X), and users began feeding the bot inflammatory content, exploiting a technical weakness in its design. Within a day, the chatbot was sending out offensive messages.
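
A simple version of the error-rate check mentioned above can make pre-existing bias visible before a system ships. The sketch below is a minimal, hypothetical Python example; the predictions, labels, and group assignments are invented for illustration.

```python
# Minimal sketch: compare a model's error rate across demographic
# groups. All data below is invented for illustration.
from collections import defaultdict

predictions = ["yes", "no", "no", "yes", "no", "yes"]   # model outputs
labels      = ["yes", "yes", "no", "yes", "yes", "no"]  # ground truth
groups      = ["a", "b", "a", "a", "b", "b"]            # demographic group

errors, totals = defaultdict(int), defaultdict(int)
for pred, label, group in zip(predictions, labels, groups):
    totals[group] += 1
    if pred != label:
        errors[group] += 1

# A large gap between per-group error rates is a signal of data or
# sample-selection bias that warrants human review.
for group in sorted(totals):
    print(f"group {group}: error rate {errors[group] / totals[group]:.2f}")
```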

With these categories of bias clearly documented, it is the responsibility of every brand that uses artificial intelligence to, at minimum, account for them, and to eliminate opportunities for bias through diverse human oversight.

Responsible use of AI

AI is a tool like any other, and as such it requires safeguards and checkpoints to make certain it is being used legally and correctly. In addition to the bias discussed above, AI has been used to spread misinformation and create deepfakes, and some models have been trained on copyrighted imagery and text.

Class action lawsuits have been filed against OpenAI alleging that the technology ‘relied on harvesting mass quantities’ of words that are under copyright; Stability AI, the company behind the AI art generator Stable Diffusion, is being sued by Getty Images for copyright infringement; and other generative AI companies, including Midjourney and DeviantArt, face similar challenges.

Internal accountability should be a part of AI ethics. Asking questions about how the AI systems being used are trained and where the data comes from can help companies ensure that the use of AI is responsible and in line with brand values.
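
One lightweight way to operationalize those questions is to keep an internal record, in the spirit of a model card, for every AI system in use. The fields in the Python sketch below are hypothetical examples, not a formal standard.

```python
# Minimal, illustrative "model card" record for internal accountability.
# Field names and values are hypothetical, not a formal standard.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    vendor: str
    training_data_source: str              # where the data comes from
    known_biases: list[str] = field(default_factory=list)
    human_reviewer: str = ""               # who signs off on outputs

card = ModelCard(
    name="support-chatbot-v2",
    vendor="example-vendor",
    training_data_source="public web text through 2023 (per vendor docs)",
    known_biases=["underrepresents non-English queries"],
    human_reviewer="content ethics committee",
)
print(card)
```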

It’s also critical to have diverse viewpoints as part of this internal process; the more diverse the range of viewpoints, the more likely a team is to identify bias and underlying safety issues, and to spot vulnerabilities and incorrect information in what AI tools provide.

"The key to artificial intelligence has always been the representation.”

     - Jeff Hawkins

In many ways, forewarned is forearmed. Keeping abreast of the latest developments in AI technology and exploring new tools, whether in genetic research, climate change, or other scientific fields, can go a long way toward ensuring that AI use is responsible, so long as the brand also acknowledges the issues that come with a developing technology and is comfortable taking accountability for AI results used across its content lifecycle and technological ecosystem.

Transparency with customers

Being transparent about where and how AI is used can go a long way toward addressing ethical dilemmas and building trust with customers, visitors, and employees. Being honest with people about where content comes from, and how AI is used in creating it, is an important piece of any ethical AI framework.

Noting which articles, social media posts, and blogs were written with the help of ChatGPT or similar AI applications, acknowledging when images are generated using AI technologies, and being clear about the threshold at which a human enters the chat and takes over from a chatbot can make the boundaries between AI and humans clearer and help create trustworthy AI frameworks.
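
Standardizing those notices can be as simple as a shared helper that appends an agreed disclosure to published content. The role labels and wording in the Python sketch below are hypothetical examples, not an established convention.

```python
# Minimal sketch of a standardized AI-use disclosure. The role labels
# and notice wording are hypothetical examples.
DISCLOSURES = {
    "drafted": "This piece was drafted with the help of an AI tool and edited by our staff.",
    "images": "Images in this piece were generated with AI.",
    "chatbot": "You started with our AI assistant; a human takes over on request.",
}

def disclose(content: str, ai_role: str) -> str:
    """Append the standard disclosure for how AI was used, if any."""
    notice = DISCLOSURES.get(ai_role, "")
    return f"{content}\n\n{notice}".rstrip()

print(disclose("Our spring product roundup...", "drafted"))
```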

“Public trust is a vital condition for artificial intelligence to be used productively.”

     – Mark Walport

This is another place where a committee to oversee AI use within a brand can be useful; establishing ethical AI standards for stakeholders to use in decision-making across departments, and standardizing how AI use is disclosed, creates well-defined and attainable expectations and can help avoid misunderstandings and criticism down the road.

Secure the privacy of user data

As with any system that uses big data, the privacy of the data holders and the security of the data itself should be paramount in any use of AI technology. This is especially critical as AI expands into sensitive fields like finance and healthcare.

Preventing unauthorized access to databases and complying with laws like the European Union’s General Data Protection Regulation (GDPR) are essential best practices that extend to and encompass the use of AI systems. The future of AI is intertwined with the ethical challenges of data privacy and data protection, and brand policymakers who address these concerns and create initiatives that support data privacy are likely to gain a competitive advantage.
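
As one concrete data-protection measure, direct identifiers can be pseudonymized before user data ever reaches an AI pipeline, an approach the GDPR explicitly encourages. The Python sketch below is minimal and illustrative; in production the secret key would live in a key-management system, not in source code.

```python
# Minimal sketch: replace a direct identifier with a keyed,
# non-reversible token before data enters an AI pipeline.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical placeholder

def pseudonymize(user_id: str) -> str:
    """Return a keyed hash that cannot be reversed without the secret."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

record = {"user": pseudonymize("jane.doe@example.com"), "query": "loan rates"}
print(record)
```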

Ethical AI adoption is up to us

The ethics of AI is a multifaceted discipline, one that brings together human rights considerations with the societal impact of robotics, computer science, and information technology.

While there will always be ethical questions around AI, brands that build an ethical framework into their AI development, ethical guidelines into their use of automation, and ethical principles into their adoption of AI from the beginning can incorporate these new technologies in a trustworthy way that addresses ethical concerns.
