OPINION

The AI opportunity: Professionals as toolmakers for compliance

AI-powered coding tools have democratized software development, enabling professionals with limited coding skills to automate tedious tasks. These applications can then be rapidly iterated and improved by engineering counterparts.


Contributors:

Vincent Nunan

AIGP, CIPP/E

CEO and co-founder

Waivern

Editor's note

The IAPP is policy neutral. We publish contributed opinion pieces to enable our members to hear a broad spectrum of views in our domains. 

We can all see that artificial intelligence has radically changed client expectations of work velocity. So, what would it take for the broader professional community to use the code outputs of large language model tools like Claude Code and ChatGPT to create applications, accelerating our work while maintaining quality?

Not everything needs a polished user interface. LLMs can be used to code highly targeted applications that make crucial compliance tasks more efficient, and those applications can be shared and reused across the community using open-source software practices. Some difficulties remain, especially around secure coding: wherever sensitive data is accessed, cybersecurity still demands expertise and time from engineering. We need to be aware of regulatory challenges in our own apps, too. The real challenge, however, is creating the right mindset, one open to a future where compliance tools can be defined by us, for ourselves.

The core argument

Millions of software engineers use AI-powered coding tools like Claude Code or Cursor on a daily basis. Indeed, in my experience running a startup specializing in governance, risk and compliance automation and compliance advisory, these tools have democratized coding. They make it possible for people with limited coding skills to write Python applications that automate tedious tasks. These applications can then be rapidly iterated on and improved by engineering counterparts.

Why did I break out of my product manager and compliance expert mentality to do this? It's because there's a change happening in our professions, driven by this new technology. Boundaries are breaking down and client expectations of work velocity have been irrevocably altered by AI. We need to respond, as professionals.

We need more and better tools to accelerate our work — and this technology now allows us to build them ourselves.

Many narrowly defined tasks within privacy, AI and cybersecurity governance processes are currently handled either by expensive enterprise GRC software or by simple, but inefficient, tools like spreadsheets. This work is crying out for automation.

Need to identify, in real time, any new cookies appearing in front of the consent wall across your portfolio of clients' websites? Code it up! Want an automated check for changes in vendors or processing locations among the processors and sub-processors of a client's application stack? Hack it together yourself! You have what you need in your own hands.
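To make the first task concrete, here is a minimal sketch in plain Python. It assumes you already have a scan step (for example, a headless browser visiting each site without clicking the banner) that yields the cookie names observed before consent; the site and cookie names below are invented for illustration.

```python
# Sketch: flag cookies observed before consent that are not in an
# approved per-site baseline. The scan producing `observed` is assumed
# to exist already; names here are illustrative only.

def find_unapproved_cookies(
    baseline: dict[str, set[str]],
    observed: dict[str, set[str]],
) -> dict[str, set[str]]:
    """Per site, return cookies seen pre-consent but not approved."""
    report = {}
    for site, cookies in observed.items():
        extras = cookies - baseline.get(site, set())
        if extras:
            report[site] = extras
    return report

baseline = {"client-a.example": {"session_id"}}
observed = {
    "client-a.example": {"session_id", "_ga"},
    "client-b.example": {"csrf_token"},
}
print(find_unapproved_cookies(baseline, observed))
# → {'client-a.example': {'_ga'}, 'client-b.example': {'csrf_token'}}
```

The logic is deliberately tiny — a set difference per site — which is exactly the kind of narrow, auditable core an engineering counterpart can later harden and extend.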

The users for your app may be small in number, possibly only one. That's OK; it just needs to get the job done.

More importantly, as a community of peers within the IAPP, we share similar challenges, even if every client situation has unique elements. There are robust legal mechanisms, like the Apache 2.0 open-source software license and an array of other model contracts, for limiting legal risks. There are free OSS tools like VS Code, Pytest or Docker. For the more adventurous, process automation tools like n8n are cutting edge. Collectively, these shared goods reduce the burden of recurring and time-consuming development tasks.

There are significant remaining challenges. A vital skillset, and one with which many people in the GRC community are less familiar, is secure coding. Even so, there are clear mitigations. You could limit use cases to those requiring only information available on the target organization's website or in the public domain. This includes information about cookies, tracking pixels and consent strings from consent management platforms. Alternatively, you could build automated searches for information within documents that are already public facing, such as privacy and cookie policies or, in many cases, data processing agreements.
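For the public-document route, one lightweight approach — sketched here with Python's standard library only, under my own function names rather than any established API — is to fingerprint each policy's text so that cosmetic reformatting is ignored but wording changes trigger a human review.

```python
import hashlib

def policy_fingerprint(text: str) -> str:
    """Hash a policy document after collapsing whitespace and case,
    so cosmetic reformatting does not register as a change."""
    normalized = " ".join(text.split()).lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def has_changed(stored_fp: str, current_text: str) -> bool:
    """Compare a stored fingerprint against a freshly fetched policy."""
    return policy_fingerprint(current_text) != stored_fp

fp = policy_fingerprint("We share data with Vendor X.")
print(has_changed(fp, "We  share data\nwith Vendor X."))  # → False (cosmetic)
print(has_changed(fp, "We share data with Vendor Y."))    # → True (real change)
```

Because it only touches documents the organization has already published, this kind of check sidesteps most of the secure-coding concerns above while still catching substantive changes worth reviewing.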

For tasks that look into the code of client applications or into sensitive data or documentation, you can nurture a collaboration with the engineering community in your organization. These configuration tasks may be unfamiliar to many of us, but they are core competencies for application engineers. They can easily limit access to read-only, following the principle of least privilege. No routes should be left open within the application for unauthorized use. And, of course, the application must be scanned for malware before deployment. Implementing these measures, and others, requires nontrivial engineering time.

For motivated engineers, time spent refining your application is time not spent manually filling in questionnaires that go out of date immediately. One piece of advice, however: avoid situations where engineers are "voluntold" by management to participate. That breeds resentment.

Once the collaborative relationship exists, you can transparently see and change every line of code in your own app. It may not look pretty, but it will deliver the outcomes you want. There is also a maintenance burden over time, but if your applications stay small in scale and narrowly targeted, it is usually manageable.

There's still a role for GRC vendors. Your organization may not be able, for many legitimate reasons, to use its own people to build and maintain compliance automations. Some stakeholder teams may want an application with a graphical user interface, not a Linux command line. You may need guaranteed support hours and maintenance. Once you scale beyond a small number of users, these are real issues organizations should care about.

There's also a small but growing number of open-source vendors in the GRC space, vendors that, in effect, give you the ability to use these coding tools to add the bespoke features and analytics your organization specifically needs, without waiting in a queue for the vendor or paying for bespoke development consultancy. Examples include OpenGRC, OpenVAS and GLPI.

What's stopping people? I can't speak for everyone, but I've asked this question of peers in the IAPP community many times. Some are eager to give it a try and are doing so with gusto. Others are wary of "straying out of my lane" and being seen as stepping on the toes of other stakeholders.

Here's the thing: AI is inevitably going to break down the barriers to accessing the skills and knowledge that formed the basis of the legacy division of labor in GRC teams. And it's happening already. Privacy UX designers are doing more product management work, and vice versa. Cybersecurity engineers are doing more product design and management work, and vice versa. Compliance analysts are doing more work to collate and assess risks using legal frameworks.

All of them are trying to solve problems for their organization, often for public benefit, too, more quickly. They're doing it with confidence in their professional training and in the knowledge that, as the accountable human, they stand over the quality.

This also applies to legal counsel; they, too, should do more prototyping and lightweight application building, hopefully in collaboration with the stakeholders around them. The lane boundaries are breaking down, and tool innovation is breaking free in the GRC space.

Conclusion

AI coding, open-source GRC tools and new mindsets are coming that will change how working tools in our space are built, maintained and extended. Application building is no longer limited to vendors or internal engineering teams. It is also in our hands, as a community. The enablers are readily available. Where will we take this new possibility, as the IAPP community of practitioners? I can't wait to see. 


This content is eligible for Continuing Professional Education credits. Please self-submit according to CPE policy guidelines.


