Responsible AI showdown: Earning trust in the GenAI revolution
Uncover valuable insights from Sitecore’s AI panel discussion with Microsoft.
4 minute read
Round one of the quiz focused on the ‘people’ pillar of AI. The audience was presented with a series of images and asked to vote via a QR code on whether they thought the images were real or AI-generated.
A discussion unfolded on how to ensure ethical usage by creators while balancing innovation and creativity. Sitecore’s Zach Escabedo made the case for an AI council or legal oversight body that sets standards throughout an organization. Microsoft’s Noel Pennington discussed Microsoft’s framework, which is based on fairness, reliability, safety, privacy, security, inclusiveness, transparency, and accountability. Referencing an example of AI bias encountered by one of his colleagues, Pennington spoke about the need for organizations to train people before technology. Sitecore’s Adeline Ashley echoed this sentiment, saying that an ethical mindset is essential for the people who train AI models.
Round two explored the ‘process’ pillar of AI. To introduce the topic, the audience was asked to vote on the following question: “In a world where diversity and inclusion are key, do you think AI can distinguish between male and female?”
Addressing how to maintain ethical accuracy as AI content evolves over time and across different contexts, Escabedo pointed to brands that use a Large Language Model (LLM) system and the need to contextualize their business within that dataset. He said solutions such as Sitecore Stream make it possible to add different brand kits, enabling companies and organizations to ensure accuracy over time. As a use case, he cited brands that acquire new companies or products and the imperative to feed the model specific data relating to the acquisition. The panel also discussed the role of temporal references in ensuring accuracy, and how generative AI requires a constant feed of new data to evolve over time.
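To make this idea concrete, here is a minimal sketch of how a brand might ground an LLM prompt in its own context, including acquisition-specific facts and a temporal reference. The brand-kit structure, field names, and facts below are hypothetical illustrations, not Sitecore Stream’s actual API.

```python
# A minimal sketch of grounding an LLM prompt with brand-specific context.
# The brand-kit structure, field names, and facts are hypothetical examples.
from datetime import date

brand_kit = {
    "voice": "friendly, concise, jargon-free",
    "facts": [
        "Acme Corp acquired WidgetWorks in 2024.",  # acquisition-specific data
        "WidgetWorks products are now sold under the Acme brand.",
    ],
}

def build_grounded_prompt(user_question: str) -> str:
    """Assemble a prompt that injects brand facts and a temporal reference,
    so generated copy stays accurate as the business context evolves."""
    facts = "\n".join(f"- {fact}" for fact in brand_kit["facts"])
    return (
        f"Today's date: {date.today().isoformat()}\n"
        f"Brand voice: {brand_kit['voice']}\n"
        f"Known facts (treat as authoritative):\n{facts}\n\n"
        f"Question: {user_question}\n"
        "Answer using only the facts above; say so if they are insufficient."
    )

print(build_grounded_prompt("Who makes WidgetWorks products today?"))
```

When the business changes, say, through a new acquisition, only the brand-kit facts need updating; the prompt assembly stays the same, which is one way to keep generated content accurate over time.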
Round three delved into the ‘governance’ pillar of AI and began with the audience voting on whether we should have fully autonomous AI.
The panel then discussed whether AI should be allowed to operate autonomously once it is built. Escabedo argued that in certain contexts AI should, and can, be autonomous, saying that a decision model or AI model that can decide on your behalf is extremely important. He gave the example of an interactive chatbot experience in which an LLM converses with a user. While this personalized experience is a ‘closed’ scenario that uses guardrails to ensure everything operates as it should, he highlighted the possible pitfalls of employing generative AI without human oversight.
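As an illustration of the kind of ‘closed’ scenario Escabedo described, here is a minimal sketch of a guardrail check applied to generated text before it reaches a user. The policy list and fallback behavior are hypothetical, not any specific vendor’s implementation.

```python
# A minimal sketch of a guardrail applied to a chatbot's generated reply
# before it is shown to a user. The policy terms and the fallback message
# are hypothetical illustrations.

BLOCKED_TERMS = {"guaranteed returns", "medical diagnosis"}  # example policy list

def apply_guardrails(generated_reply: str) -> str:
    """Return the reply only if it passes policy checks; otherwise escalate."""
    lowered = generated_reply.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        # Fail closed: route to a human instead of shipping risky output.
        return "Let me connect you with a human agent for that question."
    return generated_reply

print(apply_guardrails("Our plan offers guaranteed returns of 20%."))
print(apply_guardrails("Our support hours are 9am to 5pm on weekdays."))
```

The design choice worth noting is failing closed: when the check trips, the system hands off to a human rather than letting unreviewed generative output through.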
Round four explored the ‘transparency’ pillar of AI. As an introduction to the topic, the audience was asked to vote on whether they thought product recommendation systems are generative AI or traditional AI.
Referencing solutions used by ecommerce retailers, the panel defined product recommendation systems as traditional AI, explaining that they learn over time by leveraging the user’s browsing behavior and purchasing history.
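To make the distinction concrete, here is a minimal sketch of a traditional-AI recommender that learns from purchase history via item co-occurrence rather than generating content. The data and scoring below are toy examples.

```python
# A minimal sketch of a traditional-AI product recommender: it learns from
# purchase history via item co-occurrence rather than generating content.
from collections import Counter
from itertools import combinations

purchase_histories = [
    ["laptop", "mouse", "keyboard"],
    ["laptop", "mouse"],
    ["keyboard", "monitor"],
]

# Count how often each pair of products appears in the same basket.
co_occurrence: Counter = Counter()
for basket in purchase_histories:
    for a, b in combinations(sorted(set(basket)), 2):
        co_occurrence[(a, b)] += 1

def recommend(product: str, top_n: int = 2) -> list[str]:
    """Rank other products by how often they were bought with `product`."""
    scores: Counter = Counter()
    for (a, b), count in co_occurrence.items():
        if a == product:
            scores[b] += count
        elif b == product:
            scores[a] += count
    return [item for item, _ in scores.most_common(top_n)]

print(recommend("laptop"))  # ['mouse', 'keyboard']
```

As the panel noted, this kind of system improves as more behavioral data accumulates; it ranks existing products rather than producing anything new, which is what places it in the traditional-AI category.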
In the second challenge of this round, the audience was asked whether product design ideation is generative AI or traditional AI. Escabedo and Pennington both weighed in, categorizing the process as a form of generative AI rather than traditional AI.
The session concluded with the panel discussing the difference between generative AI and traditional AI, and the advantages and disadvantages of each approach. Pennington drew on the example of a supermarket using traditional AI to target customers with relevant offers based on their buying behavior, and the potential privacy and transparency issues this approach raises. Meanwhile, on the topic of generative AI and transparency, Escabedo acknowledged the benefits for marketers, saying the ability to identify where data comes from, and the transparency of the process, gives control back to the marketer.
As established by the panel during this insightful session, the path to responsible AI requires developing and deploying AI systems that are ethical, transparent, and fair. To earn trust, brands and organizations must ensure data privacy, minimize biases, and maintain accountability throughout their processes. By prioritizing these key principles, brands and organizations can ensure their AI efforts deliver a positive impact, while fostering confidence and trust among users.
To learn more about responsible AI, read Microsoft’s Responsible AI Transparency Report.