CISOs must secure AI with Asimov's "Three Laws"

If AI creators use a ‘three laws’ approach, building in security by design, following secure AI guidelines and frameworks, AI may live up to some of the hype...

If AI regulation had an advertisement, it would say, ‘coming to a market near you - very soon!’ While the UK government is looking at regulating AI with a light touch, the EU has published its draft legislation focusing on stringent governance of high risk AIs, writes Heather Hinton, CISO, PagerDuty.

Organisations doing business across multiple jurisdictions will have to juggle their AI security practices and workflows to satisfy best practices, cost efficiencies and regulatory compliance in every market they serve. Above all, how secure the model, its training data and its outputs are will colour how AI can best be used. This seems to put CISOs at the centre of an unclear, evolving and potentially risky situation.

The tech industry already has 50 years’ worth of evidence demonstrating that if information security is not built in from day one (security by design), it becomes exponentially more difficult, error-prone and expensive to bolt on later. Much of the web was built in more carefree times. That lack of security led to today’s insecure world, now remediated by dozens of security processes and devices and an ‘internet aftermarket’ of highly profitable cybersecurity businesses keeping us all safe - and adding to everyone’s costs.

Just look at the experience of preparing for GDPR: the size and scope of retro-fitting privacy by design during readiness efforts was a major headache for every affected organisation. Ensuring that data handling meets the required standards for the security of personally identifiable information is no joke; the effort spans business, IT, and legal or compliance-led oversight practices.

In a world where AI capabilities and adoption are rapidly evolving and growing, there is no reason to believe that an after-the-fact ‘safe-AI by design’ approach would be any different from our collective GDPR experience. CISOs would be well served to get ahead of these challenges and start getting their AI security practices into good shape ahead of expected regulatory - and prudent business - demands. This will ease the burden on the business and help beat the market in the race to offer ‘oven-ready’, trustworthy AI services.

CISOs can start with Asimov’s ‘three laws’

Designing safe AI can start with the first of the famous literary Three Laws of Robotics, adapted for AI: ‘An AI may not injure, or cause a human being to be injured, or, through inaction, allow a human being to come to harm’. With gratitude to Isaac Asimov for the original, this starting point lets us apply our design principles, which can then be supplemented with principles unique to safe AI by design, including:

● Confidentiality, integrity and availability of processed data

● Ethical training including bias awareness and removal from the training model

● Confidentiality and privacy protections for the requestor

● Integrity of the response including unbiased and ethical responses

● Traceability of responses back to a specific AI, dataset and request/requestor

● Compliance with regulations such as the EU’s draft legislation for higher-risk use cases
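In practice, principles like these become concrete gates in the request/response pipeline. The sketch below is purely illustrative - the keyword list, the PII pattern and the `vet_response` helper are hypothetical stand-ins for the far more sophisticated classifiers a real safety system would use - but it shows the shape of a ‘harm and confidentiality’ check applied before a response leaves the system:

```python
import re

# Hypothetical examples for illustration only; a production system would
# rely on trained classifiers, not keyword lists.
HARM_TERMS = {"detonate", "self-harm"}

# Toy confidentiality check: redact anything shaped like a US SSN.
PII_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def vet_response(response: str) -> dict:
    """Apply simple 'safe AI by design' gates to a model response.

    Returns a verdict, a list of issues found, and the (possibly
    redacted) text that is safe to return to the requestor.
    """
    issues = []

    # First-law style check: block output that could cause harm.
    lowered = response.lower()
    if any(term in lowered for term in HARM_TERMS):
        issues.append("potential-harm")

    # Confidentiality/privacy check: redact personal identifiers.
    redacted, hits = PII_PATTERN.subn("[REDACTED]", response)
    if hits:
        issues.append("pii-redacted")

    return {
        "allowed": "potential-harm" not in issues,
        "issues": issues,
        "text": redacted,
    }
```

A redacted response can still be served (`allowed` stays true), while a potentially harmful one is blocked outright - mirroring the distinction between protecting the requestor and preventing harm.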

Adhering to such principles allows companies, users and citizens alike to create, use and enjoy the incredible and powerful AI technologies that so many find exciting and that are forecast to supercharge all aspects of life. The aim must be to deliver the best long-term outcomes for business profitability, societal impact, corporate and individual reputation, and psychological and physical safety in the user experience - and we get there by designing in security from the outset.

The regulation debate

As with any new technology, some ask whether regulation is the right way to make sure that AI works to our collective advantage. While every action, regulatory or otherwise, has a reaction, it is not clear, to me at least, that leaving AI unregulated in the name of not slowing down business is the right way to go. There was a backlash when vehicle seat belts were proposed! Every regulation tends to be greeted by a chorus of doom-saying that it will harm the industry and personal freedom.

Yet we regulate every industry for quality and safety reasons. AI regulation could soon be just as important to our lives as vehicle or food safety. AI will continue to be developed, trained, tested, explored and used, regardless of what regulations are in place. Individuals and corporations will publish their own standards and guidelines, such as PagerDuty’s Guidelines and Google’s Secure AI Framework. The opportunity for regulations developed in concert with practitioners is too good to pass up, especially when the notion that regulation will have a chilling effect is simply not borne out by the evidence of past regulation of advancing technologies.

Regulations stipulating the principles to be followed allow companies, users and citizens to determine the best way to manage this incredible and powerful technology, and harmonise the patchwork of individual standards, guidelines and ad hoc approaches that litter the landscape. This more or less succeeded with GDPR, which was directly responsible for innovation in data protection and in the technical and organisational measures needed to comply.

The greatest threat, opportunity, or waste of breath of a generation?

With the EU’s draft legislation now open for review, it seems less likely that the recent doom-laden predictions from leaders at OpenAI, DeepMind, Microsoft and Tesla, and from Google’s former CEO, will come to pass. That is, if we assume compliance!

If AI creators use a ‘three laws’ approach, building in security by design and following secure AI guidelines and frameworks, AI may live up to some of the hype, giving us a Star Trek-style future with a post-scarcity economy on solid technological foundations. But rest assured, building AIs that really support human potential requires the security-by-design approach. In fact, in addition to security, developers should ideally also be baking in privacy, ethics, explainability and correctability - in short, the ability to understand, correct and therefore trust the models, and the reasonableness and fairness of AI outcomes.

Where AI used in business can be risk-quantified and trusted, it will deliver higher return on investment and higher quality of service, feeding a virtuous cycle of use built on overall suitability and reliability.

As a CISO I am excited. This is a real moment of change. This is the time to use our most foundational, smartest practices to build in security for what could be the most revolutionary technology invented in a century. If the AI community and the security community can work together to ensure that security is a key part of AI, our industry may be doing a service as important as ensuring vehicle or vaccine safety. This is the moment to put security in the service of humanity. The benefits may be exponentially positive.
