- 11 November 2024
Following the launch of the Balderton Founder’s Guide to AI Policy and Regulation, our Head of Impact and ESG, Elodie Broad, shares her insights on the importance of building responsibly from the get-go.
The idea for Balderton’s Founder’s Guide to AI Policy and Regulation, which launched this month, stemmed from a desire to help founders navigate the fast-evolving, highly fragmented, and intrinsically ambiguous AI regulatory landscape.
While navigating this landscape might feel daunting and complex, there is one vital shared goal: ensuring that AI systems are deployed safely and securely, and in a way that respects human rights and values.
Fairness, ethics, transparency and safety have always been considerations for regulators when deploying science and technology to improve lives and societies.
Sarah Gates, Director of Public Policy, Wayve
Technology moves fast. It is developed quickly and, if successful, spreads rapidly. The ubiquity of Generative AI is a testament to that. Regulation, with all the complexity and bureaucracy it entails, will therefore almost always be playing catch-up. In our conversations with founders and experts while building the guide, one key piece of advice for founders was clear: build and use AI products responsibly from day zero, and every day thereafter. Whether you are building an AI business or leveraging AI as part of your operations, setting (and observing) high standards of ethics and governance from the very beginning should keep your business on the right side of regulatory compliance.
Ensure your products are responsible from the get-go. It is much easier to begin with a product that is compliant by design rather than have to reverse-engineer it down the line. Think about how you demonstrate the trustworthiness of the AI you’re building; regulations are likely to tighten; expectations are likely to get more specific.
Ben Lyons, Head of Policy & Public Affairs, Darktrace
But what can founders turn to in order to ensure they are building their AI products responsibly from the get-go? There are already a fair number of responsible AI frameworks and guidance documents out there, and just as many under development. This is a positive sign of the ecosystem’s engagement with this critical issue. Those frameworks, however, offer high-level, abstract guidance rather than specific, prescriptive advice. The first wave of AI policy and regulation now kicking in, on the other hand, gives founders specific boundaries to build within.
That being said, regulation is likely to evolve in ways that the underlying values and principles of responsible AI won’t. Ultimately, responsible AI is first and foremost a mindset and a strategic choice. In the face of upcoming regulation, it becomes a source of strategic advantage. This can be achieved from the get-go through a combination of intent, humility, transparency and strong governance.
Intent.
The responsible development and deployment of AI technology starts with the clear, unwavering intention to do so. Founders who articulate a commitment to leveraging the superpowers of Generative AI while mitigating risks and adverse outcomes set a strong “tone from the top” in which to anchor organisational culture. Identify whether you are prioritising any particular responsible AI principles (e.g. privacy, security, bias and discrimination, environmental impacts). A clear north star will make your tech and data teams more likely to be proactively thoughtful about how to uphold those principles as they build at pace.
Humility.
Always come from a place of humility, namely that “you don’t know what you don’t know”, especially amid evolving norms and technologies. This is why we recommend founding and tech teams carve out some time to think through the potential impacts on users and the wider stakeholder groups affected by the technology, in both its intended and unintended use contexts.
Transparency.
Transparency is essential to building trust with users and to giving others confidence that you are using AI safely and responsibly. Disclose when you use AI, what role it plays, and when content is AI-generated. Tackle the issue of risk and safety head-on by disclosing information such as the safety evaluations you have conducted, the limitations of your AI or model use, and the model’s effects on societal risks. Creating an internal culture of transparency is equally important, working hand in hand with humility to pre-empt and mitigate risks as and when they arise.
Governance.
Developing organisational processes and forums for holding regular discussions, assessing risks, and making and documenting decisions is instrumental to delivering on intent, observing humility, and fostering transparency. These structures and processes should be built up over time, systematising what is, at its core, good risk management.
As Bill Gates said, ‘the Age of AI is filled with opportunities and responsibilities.’ It is our collective responsibility to ensure that the new superpowers AI confers on us propel us towards a better future. And while policy and regulation will create some safeguards, the responsibility falls upon us, investors and founders, to consider impacts and ethics, and to mitigate potential harms, as we design the technologies of tomorrow.