LONDON - Insig AI plc (AIM:INSG), a data science and machine learning company, in collaboration with Falcon Windsor, has released a research paper addressing the responsible use of artificial intelligence (AI) in corporate reporting. The study, titled 'Your Precocious Intern', examines the implications of AI's increasing presence in the financial reporting landscape and offers a model for its ethical application.
The research, which involved 40 FTSE companies and analyzed reports from the FTSE 350 spanning 2020 to 2024, found that the use of generative AI is on the rise among UK companies. However, this use often takes place without the necessary training, policy, or oversight. Investors are increasingly concerned about the potential for AI to affect the truthfulness and authorship of corporate disclosures.
Diana Rose, Head of ESG Solutions at Insig AI, emphasized that responsible AI use aligns with the company's mission. She hopes the report will spark discussion and guide those involved in corporate reporting on how to leverage AI without compromising the trustworthiness of the information reported.
Richard Bernstein, CEO of Insig AI, noted the transformative potential of generative AI in business, akin to the internet revolution, urging companies to understand and adopt AI responsibly. Claire Bodanis, Founder and Director of Falcon Windsor, echoed the sentiment, advocating for careful AI use to avoid jeopardizing reporting accuracy and truthfulness.
The report highlights that while AI mentions in FTSE 350 annual reports have more than doubled from 2021 to 2024, none yet refer to AI in relation to the reporting process. This suggests an opportunity to develop a practical model for AI use. Investors are open to AI for handling large volumes of data but insist that the human voice remains central to reporting. They call for companies to clearly state AI’s role in their reporting processes.
The study emphasizes the urgency of addressing AI in reporting, as general usage is still low, providing a narrow window to implement AI thoughtfully with appropriate checks and balances. The authors do not seek increased regulation but suggest that guidance from bodies like the Financial Reporting Council or the FCA would be beneficial.
The report recommends treating generative AI like a 'precocious intern': bright and capable, yet inexperienced and prone to overconfidence. It should not be left unsupervised, and its work should be checked. Companies are advised to introduce formal AI training, establish clear governance guidelines, and disclose AI usage in reporting to maintain integrity and trust.
The research, developed over 15 months with input from Imperial College London and the UK’s Chartered Governance Institute, involved a quantitative review of over 21,000 corporate documents and qualitative feedback from focus groups with institutional investors and company representatives.
This article is based on a press release statement from Insig AI and Falcon Windsor.