
Our views | 18 September 2025

How can we drive corporate accountability on ethical AI?


Earlier this week, I had the privilege of joining fellow investors and experts at Sarasin & Partners’ Shaping Corporate Accountability on Ethical AI seminar.

I joined an investor panel to discuss one of the most pressing topics in responsible investment today: how can we, as investors, drive corporate accountability on ethical artificial intelligence (AI)? The session was dynamic, practical, and deeply insightful; here are my reflections and some of the key takeaways.

Opening reflections: The real risks of AI

As the panel kicked off, we were invited to reflect on the most significant investment risks in AI, following an insightful fireside chat with two leading academics, Vasilios Mavroudis and Markus Anderljung. From a cybersecurity standpoint, it was reassuring to hear that many of the large companies we invest in are well positioned to manage the rise in cyber attacks using their existing practices and technologies.

However, when it comes to broader AI risks and governance, I shared that one of the most persistent and profound challenges is the ‘black box’ nature of AI systems. There is a common assumption among both investors and the public that those who build these models fully understand how they work. In reality, AI is not designed or taught in a way that guarantees complete transparency, even to its developers. For investors, this means we are navigating risks that are not yet fully understood. We are, in many ways, at a stage where we do not know what we do not know.

This lack of transparency is not just a technical concern. It has real human consequences. The tragic loss of Adam Raine has brought renewed attention to the risks posed by AI chatbots, particularly for young and vulnerable users. It also raises urgent questions about the responsibilities of technology companies to prevent harm. This was a key point in our discussion and a stark reminder that the impacts of AI are not abstract. The opaque nature of these systems underscores the need for greater transparency, accountability, and support for those affected by these technologies.

Engaging across the value chain

The investor panel discussion focused on where and how investors should direct their engagement efforts. I shared insights from Royal London Asset Management’s sustainable and ethical AI engagement programme, which highlights the importance of engaging both developers and deployers across the AI value chain. Developers set the technical standards, but deployers, those integrating AI into real-world applications, often have a less sophisticated understanding of AI risks. Yet, they can be among the most influential voices in holding developers accountable, especially when they demand transparency and ethical practices.

At Royal London Asset Management, we believe that prioritising engagement with deployers and supporting them to become informed advocates for responsible AI is essential. Where direct engagement with companies is limited, policy advocacy and collaboration with NGOs and academia become powerful tools to drive industry-wide improvements. These efforts help protect stakeholders and advance ethical standards, even in areas where investor influence is constrained.

Risk management frameworks: ISO vs NIST

There are now several frameworks shaping how companies manage and disclose AI risks. The AI Risk Management Framework from the National Institute of Standards and Technology (NIST) is widely used, especially in North America and by global tech firms. Its flexibility allows companies to tailor risk management to their operations. But because it is voluntary and not certifiable, it can be applied superficially and makes comparisons across companies difficult. ISO 42001, the first certifiable international standard for AI management systems, is more prescriptive, with defined controls and alignment to the EU AI Act. While it signals maturity, it is resource-intensive, and certification alone does not ensure ethical or safe AI.

In our engagement programme at Royal London Asset Management, most companies use NIST as a baseline, with ISO adoption growing, particularly among those in high-risk sectors or those captured by the EU AI Act. These frameworks help investors benchmark practices and ask better questions, but they are not a substitute for deeper engagement. Disclosures can be shallow, and frameworks may lag behind emerging risks. That is why we use them as a starting point, always looking beyond the label to understand how AI is governed in practice.
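
To make that starting point concrete, here is a minimal, purely illustrative sketch of how a disclosure review might be benchmarked against the four core functions of the NIST AI Risk Management Framework (Govern, Map, Measure, Manage). The function names come from the framework itself; everything else, including the scoring approach and the example disclosure, is a hypothetical assumption and not Royal London Asset Management's actual engagement methodology.

```python
# Illustrative only: a toy checklist for gauging how much of the NIST AI Risk
# Management Framework a company's AI risk disclosure covers. The four core
# function names are from the framework; the scoring and example are hypothetical.

NIST_CORE_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

def disclosure_coverage(disclosure: dict) -> float:
    """Return the share of NIST core functions a disclosure addresses."""
    addressed = sum(1 for fn in NIST_CORE_FUNCTIONS if disclosure.get(fn))
    return addressed / len(NIST_CORE_FUNCTIONS)

# Hypothetical deployer: discloses board-level governance and a mapping of
# AI use cases, but nothing on measurement or ongoing risk management.
example_disclosure = {"Govern": True, "Map": True, "Measure": False, "Manage": False}

print(f"Coverage of NIST core functions: {disclosure_coverage(example_disclosure):.0%}")
# -> Coverage of NIST core functions: 50%
```

A checklist like this can flag gaps worth raising in engagement, but, as noted above, a high score says nothing by itself about how AI is governed in practice.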

Looking ahead

As AI continues to evolve, investor engagement will be critical in shaping responsible corporate behaviour and safeguarding societal interests. I left the session feeling energised and optimistic about the role we as investors can play in advancing ethical AI. By engaging thoughtfully across the value chain, advocating for robust risk management, and pushing for transparency, we can help shape a future where AI serves society responsibly.


Our voting, engagement, research, and advocacy activities are designed to be pragmatic, informed by evolving market insights and local best practices, and always aligned with the long-term interests of our clients. These activities aim to enhance the value and integrity of our investment decisions.

Please note that voting and engagement practices may not apply uniformly across all Royal London Asset Management funds or strategies, as each has distinct investment objectives. For details specific to your investment, please refer to the relevant fund prospectus.


For professional investors only. This material is not suitable for a retail audience. Capital at risk. This is a financial promotion and is not investment advice. Past performance is not a guide to future performance. The value of investments and any income from them may go down as well as up and is not guaranteed. Investors may not get back the amount invested. The views expressed are those of Royal London Asset Management at the date of publication unless otherwise indicated; they are subject to change and do not constitute investment advice.