Aug 28, 2024 | AI
Responsible AI implementation starts with human-in-the-loop oversight
A guest post from Carter Cousineau, vice president of Responsible AI and Data, Thomson Reuters
As vice president of Responsible AI and Data at Thomson Reuters, I’ve had a front-row seat to the rapid evolution of artificial intelligence (AI), particularly in the realm of generative AI (GenAI). The past year and a half has seen unprecedented advancements in this field, with the allure and potential capabilities of large language models (LLMs) taking centre stage for business leaders and society at large.
GenAI technology has proven its readiness, but pressing questions now revolve around ethics and governance. As companies rush to implement AI solutions, it’s vital that we consider not just whether we implement AI, but how.
Keeping humans in the loop is a key aspect of the strategy we employ in our responsible AI practice at Thomson Reuters. Human-in-the-loop oversight is critical at every stage: design, development, and deployment. For instance, we require developers and product owners to include the details of their human oversight process in the model documentation template. Post-deployment, dedicated teams handle human-in-the-loop training and monitoring.
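To make this concrete, here is a minimal sketch of what a human-oversight entry in a model documentation record might capture. The ModelDoc and HumanOversightPlan structures and their field names are illustrative assumptions, not Thomson Reuters’ actual template.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class HumanOversightPlan:
    """Illustrative record of human-in-the-loop checkpoints for one model.

    Field names are hypothetical; an organization's documentation template
    would define its own required fields.
    """
    reviewers: List[str]        # roles responsible for review (e.g., SMEs, product owners)
    design_review: str          # how humans validated requirements and intended use
    development_review: str     # how outputs were checked before release
    deployment_monitoring: str  # ongoing human monitoring after launch
    escalation_path: str        # who is alerted when outputs fall outside expectations

@dataclass
class ModelDoc:
    model_name: str
    owner: str
    oversight: HumanOversightPlan

# Hypothetical example entry for a single model.
doc = ModelDoc(
    model_name="contract-summarizer-v2",
    owner="product-team-a",
    oversight=HumanOversightPlan(
        reviewers=["legal SME", "product owner"],
        design_review="SMEs approved intended use and documented known limitations",
        development_review="Sampled outputs graded against a rubric before release",
        deployment_monitoring="Weekly SME review of sampled production outputs",
        escalation_path="Flagged outputs routed to the responsible AI team",
    ),
)
```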
Human involvement is also crucial when evaluating an AI model’s performance, including tracking model drift and overall performance metrics. We have hundreds of advanced subject matter experts review the outputs of our AI systems. User feedback, gathered through human reviewers, helps ensure that our models continue to perform as intended.
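As a rough illustration of how drift monitoring can feed human review, the sketch below compares a recent window of model scores against a baseline distribution and flags the model for human reviewers when a simple drift statistic crosses a threshold. The PSI-style metric, the 0.2 threshold, and the flag_for_human_review hook are illustrative assumptions, not a description of Thomson Reuters’ internal tooling.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10, eps=1e-6):
    """Simple drift statistic: compares the distribution of recent model
    scores ('actual') against a reference window ('expected')."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected) + eps
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual) + eps
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

def flag_for_human_review(model_name, psi, threshold=0.2):
    """Hypothetical hook: in practice this might open a ticket or notify
    the subject matter experts responsible for the model."""
    if psi > threshold:
        print(f"[{model_name}] PSI={psi:.3f} exceeds {threshold}; route to human reviewers")
    else:
        print(f"[{model_name}] PSI={psi:.3f} within tolerance")

# Synthetic scores stand in for model confidence values in this example.
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5_000)  # scores at deployment time
recent_scores = rng.beta(3, 4, size=5_000)    # scores from the latest window
flag_for_human_review("contract-summarizer-v2",
                      population_stability_index(baseline_scores, recent_scores))
```

In practice, an automated signal like this only decides when to escalate; the judgment about whether the model is still fit for purpose remains with the human reviewers.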
Beyond improving the technical aspects of AI systems, keeping humans in the loop serves another vital purpose: it reassures our workforce that they remain critical to the company’s success. As we navigate the AI future, it’s important to remember that every technological advancement throughout history has required us to strike a balance between human skills and new tools. AI is no different. By keeping humans in the loop, we can leverage the strengths of both human intelligence and AI, optimizing our processes while maintaining the critical elements of human judgment, creativity, and ethical oversight.
It’s not just about making our AI systems more accurate or reliable; it’s about creating a future where technology enhances human capabilities rather than replacing them. That’s the kind of responsible AI implementation we should all strive for.
For more on the importance of responsible AI and human-in-the-loop review, check out my interviews in Datanami and Accounting Today (subscription required).