It's clear that many organizations can use generative artificial intelligence to help them with their ESG goals, but how can they best establish trust in this new technology?
The data challenges around environmental, social & governance (ESG) issues have been widely discussed, both in terms of the volume of data and the importance of data governance to mitigate legal risk.
These challenges are compounded by the fact that many different corporate reporting disclosure frameworks exist and that Excel is still the number one tool used in ESG data analysis, according to ESG strategy expert John Friedman. Now, artificial intelligence (AI) adds another layer of complexity, bringing questions of trust and accuracy into the mix as many companies continue to struggle with data quality on specific ESG issues.
Importance of trust
Trust is a fundamental element of AI. Humans must be able to trust the technology safely, and organizations should put the right guidelines and actions in place to earn that trust, even when all the answers are not yet known. Laying the groundwork for regulation is a key part of building that trust, according to Kriti Sharma, Chief Product Officer of Legal Tech at Thomson Reuters.
In fact, Sharma advises that companies address the concerns of transparency, bias, and accuracy inherent in AI usage through a few critical steps, including:
- Communicating clearly when generative AI is being used, such as when a customer is not interacting with a human.
- Ensuring AI is trained using the best data to tackle issues related to accuracy. For example, Jonathan Ha, CEO of Seneca ESG, described how his company is employing generative AI solutions that harness the guidance of expert practitioners with hands-on experience in corporate ESG reporting to ensure the output is as accurate as possible.
- Establishing rigorous and constant testing throughout the design, development, and deployment of the technology to remove bias (see the sketch after this list).
- Maintaining a human-centered approach to better understand the social implications of using AI.
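One way to make that kind of constant testing concrete is to track a model's error rate across data subgroups and flag any run where the gap between groups grows too wide. The Python sketch below is a minimal illustration of that idea; the sample records, subgroup names, and the 10% disparity threshold are assumptions made for the example, not values drawn from Sharma's guidance.

```python
from collections import defaultdict

# Hypothetical labeled examples: each record carries the model's prediction,
# the ground-truth answer, and a subgroup attribute (e.g., region or sector).
records = [
    {"group": "region_a", "prediction": "compliant", "actual": "compliant"},
    {"group": "region_a", "prediction": "compliant", "actual": "non_compliant"},
    {"group": "region_b", "prediction": "non_compliant", "actual": "non_compliant"},
    {"group": "region_b", "prediction": "compliant", "actual": "compliant"},
]

DISPARITY_THRESHOLD = 0.10  # assumed tolerance for the gap in error rates

def error_rates_by_group(records):
    """Compute the share of incorrect predictions for each subgroup."""
    totals, errors = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["prediction"] != r["actual"]:
            errors[r["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}

def flag_disparity(rates, threshold=DISPARITY_THRESHOLD):
    """Flag the run if the gap between best and worst subgroup exceeds the threshold."""
    gap = max(rates.values()) - min(rates.values())
    return {"rates": rates, "gap": gap, "flagged": gap > threshold}

if __name__ == "__main__":
    print(flag_disparity(error_rates_by_group(records)))
```

A check like this would run at each stage of design, development, and deployment, so that a widening gap is caught before the tool reaches users.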
In addition, learning in a safe environment is key. “Until the technology is used, no one can be evaluating what is legal, ethical, or appropriate use of that technology,” Friedman said, in part because it is all still theoretical.
Sharma agreed, stressing that collaboration among businesses, innovators, and regulators is essential for AI regulation, particularly in controlled environments in which innovators and businesses can test and develop new technologies to better learn about their different impacts.
How to evaluate AI-assisted technology solutions
While the use of AI to confront ESG problems brings meaningful benefits, it also raises new questions around the ethics of its use and underscores the need for companies to evaluate AI-assisted solutions for ESG data management. In the short term, the extent of the need for ethics will vary and remain subjective because no standard yet exists.
For example, Friedman described how someone asked his view of the ethics of using generative AI to write a company’s sustainability report. In response, he asked how using ChatGPT is any different ethically from hiring a professional outside writer to produce the report. Emphasizing the importance of involving multiple people with ESG expertise across functions, Friedman reiterated that an organization still needs to review what was created to make sure it is accurate. Indeed, public-facing generative AI tools like ChatGPT have demonstrated a propensity for hallucinations, or facts that are presented as accurate but have no basis in reality. In addition, individuals from an organization’s communications and legal teams still need to review the content and ensure that organizational controls for approving the document are being met.
For companies, a good starting point is to create their own ethical rules for AI usage and investment, consistently improve the quality of their data on every ESG issue, and, when evaluating a technology solution, ask themselves: What happens to the data? And what is the potential for misuse?
More specifically, Friedman recommends that companies document how ESG solutions with AI will be evaluated, making that a critical part of the governance framework. This process would include:
- ensuring the technology is fit for purpose. Some solutions may allow for bringing in the materiality assessment and other dimensions, which is useful but can, at some point, cloud the issue. Focus narrowly on the functionality you need the tool to provide right now.
- understanding how the system is built. This means confirming that the system is built robustly and uses the latest guidance for reporting, and analyzing the solution provider’s track record of building solutions that addressed previous regulatory requirements. Essentially, it is imperative for potential solution providers to demonstrate that they understand data collection and data aggregation. This involves outlining who is going to see the data, what the data is going to be used for, and how it is going to be used. If the system is going to process proprietary data, such as data from suppliers, it is critical to know how the system safeguards that data (see the checklist sketch after this list).
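To make that documentation concrete, the sketch below records those criteria as a simple checklist that can sit alongside the governance framework. It is only an illustrative Python sketch; the class, field names, and `ExampleVendor` are hypothetical and not part of any reporting standard or of Friedman's own framework.

```python
from dataclasses import dataclass, field

# Hypothetical evaluation record for an AI-assisted ESG solution; the fields
# mirror the criteria above and are illustrative, not a standard schema.
@dataclass
class EsgAiSolutionEvaluation:
    vendor: str
    fit_for_purpose: bool                          # covers the functionality needed right now?
    reporting_frameworks: list[str] = field(default_factory=list)  # frameworks the tool supports
    prior_regulatory_track_record: bool = False    # addressed previous regulatory requirements?
    data_access_documented: bool = False           # who is going to see the data?
    data_use_documented: bool = False              # what the data is used for, and how
    supplier_data_safeguards: bool = False         # protections for proprietary third-party data

    def open_questions(self) -> list[str]:
        """Return the governance questions that remain unanswered for this vendor."""
        checks = {
            "Fit for purpose": self.fit_for_purpose,
            "Prior regulatory track record": self.prior_regulatory_track_record,
            "Data access documented": self.data_access_documented,
            "Data use documented": self.data_use_documented,
            "Supplier data safeguards": self.supplier_data_safeguards,
        }
        return [name for name, ok in checks.items() if not ok]

if __name__ == "__main__":
    candidate = EsgAiSolutionEvaluation(vendor="ExampleVendor", fit_for_purpose=True)
    print(candidate.open_questions())
```

Keeping the evaluation in a structured form like this makes it easy to show, as part of the governance framework, which questions a given solution has answered and which remain open.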
Generative AI offers many opportunities to bring efficiencies to ESG data management workflows, but it also raises new questions. To answer them, solution providers need to prioritize the concerns around accuracy, bias, and trust by proactively revealing what they are doing to address them.
At the same time, innovators, corporate data disclosers, and regulators need to collaborate to test the outcomes of potential new regulation, while companies need to create their own AI ethics and document robust evaluation methodologies for their ESG solutions as part of their overall governance structure.