Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

"It's just a prototype" often becomes "We are rolling it out next week." Sound familiar?

If you're in the governance, risk, or legal field, you've probably heard this before: a demo that's suddenly ready to launch, with no time for review, documentation or risk assessment. This is not just a governance challenge anymore; it's becoming a feature of how modern software gets built.

Enter vibe coding: a developer and a coding-focused AI working together like two programmers, talking back and forth to build software.

Many of the people vibe coding are not professional coders. The method enables anyone to create projects and web apps without manually writing a single line of code. This is especially helpful for those who need to run a proof of concept or build a prototype to demonstrate a solution. A website or app can be fully working within minutes, and non-coders can create applications tailored to their needs.

I have tested the method by using a large language model to create a few prototypes and found the process fun and useful. The experience taught me many things — from searching for a particular application programming interface and generating Stripe API keys to coming up with a formula to simplify the process. I also worked on implementing controls and identifying possible flaws through testing.

It's easy if you just want a prototype, but it's not easy if you expect to roll the results out.

First, there are security vulnerabilities. AI-generated code can introduce inherent biases, bugs or even subtle backdoors if not properly reviewed. Research has shown significant security risks in such code, whether because it is outdated or because it carries existing vulnerabilities. Code and data can be pulled from different sources or libraries without context or proper vetting, which also poses a supply chain risk. Unlike a traditional development cycle built around security by design, AI-generated code often lacks those safeguards: the AI system operates by recognizing and completing patterns, not by reasoning about threats.

For example, after removing the payment feature from my experimental application, the system would only remove the front-end feature, not the back-end information like my Stripe API keys. I had to ask the AI system specifically to remove the keys; I then reviewed the code myself to ensure the information was not exposed. The same applies in other situations where personal information could be exposed directly by the developer, by AI-generated code drawn from outside sources, or even by patterns in the AI's training data. An AI system might unintentionally include common flaws like SQL injection, cross-site scripting or weak authentication.
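
To make the risk concrete, here is a minimal, hypothetical Python sketch. The commented-out "before" pattern, a hardcoded key and a query built by string concatenation, is exactly the kind of flaw quickly generated code can introduce; the "after" pattern reads the key from the environment and uses a parameterized query. The variable names and the STRIPE_SECRET_KEY environment variable are illustrative, not taken from my experimental application.

```python
import os
import sqlite3

# Risky pattern often seen in hastily generated code (shown only as comments):
#   STRIPE_KEY = "sk_live_abc123..."  # secret hardcoded in source
#   query = "SELECT * FROM users WHERE email = '" + email + "'"  # SQL injection risk

# Safer pattern: read secrets from the environment, never from source code.
# Returns None if unset; real code should fail closed before using the key.
STRIPE_KEY = os.environ.get("STRIPE_SECRET_KEY")

def find_user(conn: sqlite3.Connection, email: str):
    """Look up a user with a parameterized query instead of string concatenation."""
    cursor = conn.execute("SELECT id, email FROM users WHERE email = ?", (email,))
    return cursor.fetchone()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
    conn.execute("INSERT INTO users (email) VALUES (?)", ("demo@example.com",))
    print(find_user(conn, "demo@example.com"))
```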

Recent regulatory pressure has significantly altered the landscape. Export control risks for AI-generated software are rapidly evolving, becoming more complex with vibe coding than with traditional AI system development. Vibe coding blurs the traditional boundaries between who wrote the code, what data it was trained on, and where that code can legally go. The generated code itself becomes the item under scrutiny; its source, derivation and potential end-use are critical. In response, governments are expanding export regulations to cover not just the models themselves, but also the downstream software they help produce.

This black box problem inherent to AI — where it takes in data and delivers results, but exactly how it arrives at those results is largely hidden — is particularly amplified in vibe coding. When AI generates code from a natural language prompt, the exact origin of every line, and the training data from which it was derived, is often opaque. It's like a potluck dish where you don't know who cooked it or what the ingredients are. It might look appealing, but you're trusting that every contributor followed safe practices and didn't include prohibited elements.

If AI-contributed snippets are from export-controlled algorithms or proprietary code, a compliance risk emerges. Furthermore, a seemingly harmless e-commerce website, if AI-generated, could contain highly sophisticated algorithms — such as for data analysis, optimization or encryption. These algorithms could be reengineered for prohibited dual-use purposes, a concept known as "stolen heart."

Although regulators rescinded the AI Diffusion Rule — a proposed U.S. regulation aimed at controlling the international spread of powerful AI technologies — in May 2025 before it took effect, they have not backed away from the issue. Instead, regulators are recalibrating enforcement toward more practical, risk-based controls. The current focus has shifted from broad-based restrictions to targeted oversight of specific use cases, deployment contexts, and cross-border infrastructure risks — particularly around compute access, chip diversion and advanced model deployment.

Finally, while AI can rapidly generate functional code, its "vibe" nature means it's optimized for speed and polished user interfaces, not necessarily for long-term sustainability, scalability or alignment with architectural standards.

This leads to what's known as technical debt. Much like credit card debt, it's satisfying at first: a functional website or application can be up and running incredibly fast. That immediate gratification, like a credit card's instant purchasing power, lets speed and features be "bought" right away without doing the foundational work upfront.

However, this debt accrues interest, which shows up as slower future development: every new feature or bug fix takes longer because of the need to navigate complex, inconsistent or poorly structured AI-generated code, and changes can introduce new errors because of those inconsistencies. Just like with credit card debt, the initial rush feels great, but eventually the accumulated debt must be paid.

So, what is the best way to govern AI while mitigating risks and maintaining the good vibe?

Mandatory human-in-the-loop review. This is a key principle in AI governance and a primary detective control in the AI development lifecycle. For example, the system might hallucinate a reference, fact or link that does not exist, or confuse details such as the height of Mount Everest in feet versus meters or a date on the Chinese calendar versus the Gregorian calendar. These details can only be caught by humans who understand the context. AI-generated code must be treated as a first draft, not a finished product; it must be reviewed to make sure it meets all pertinent requirements and standards.

Scanning is crucial. Some software composition analysis tools on the market are excellent at identifying open-source components and their associated licenses. However, look for advanced SCA tools that can potentially detect specific algorithms or code patterns that might trigger export control flags. Audit logs will also be helpful.
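
As an illustration of the kind of rule an SCA tool applies automatically, here is a minimal, hypothetical sketch in Python: it compares each dependency's declared license against an approved list and flags anything else for human review. Real tools resolve licenses and vulnerabilities from curated databases; the package names and licenses below are made up for the example.

```python
# Approved licenses are an assumption for this sketch; adjust to your own policy.
APPROVED_LICENSES = {"MIT", "Apache-2.0", "BSD-3-Clause"}

# In practice an SCA tool resolves this list from lockfiles and registry metadata.
dependencies = [
    ("fast-checkout-widget", "MIT"),
    ("crypto-accel", "Proprietary"),  # could also warrant export-control review
    ("report-builder", "GPL-3.0"),    # license not on the approved list
]

def flag_for_review(deps):
    """Return dependencies whose license is not pre-approved."""
    return [(name, lic) for name, lic in deps if lic not in APPROVED_LICENSES]

if __name__ == "__main__":
    for name, lic in flag_for_review(dependencies):
        print(f"REVIEW NEEDED: {name} ({lic})")
```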

Test, test, test. Develop extensive automated test suites, whether comprehensive unit tests, integration tests or end-to-end tests. Treat AI-generated code the same as human-written code. AI can assist in generating tests, but it is important to have a human review and define the critical test cases.
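
Here is a minimal sketch of what that can look like, assuming pytest and a hypothetical AI-generated helper called apply_discount. The happy path may already pass; the edge cases, such as negative prices and discounts over 100%, are the ones a human reviewer should define explicitly.

```python
import pytest

# Hypothetical AI-generated helper under test.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if price < 0:
        raise ValueError("price cannot be negative")
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_happy_path():
    assert apply_discount(100.0, 20) == 80.0

def test_zero_discount():
    assert apply_discount(50.0, 0) == 50.0

# Edge cases a human reviewer should insist on, not leave to the AI.
def test_negative_price_rejected():
    with pytest.raises(ValueError):
        apply_discount(-1.0, 10)

def test_discount_over_100_rejected():
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```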

Contextual feedback loops are helpful. Explore ways to provide feedback to the AI on rejected suggestions or preferred coding patterns to improve future outputs. Don't just reject and move on; explain why. The "why" helps the AI system learn and adapt across ongoing interactions.

This granular level of feedback enables the AI to move beyond mere pattern recognition to a deeper understanding of intent, constraints and best practices within the organization's unique operational context.

Use AI as a learning tool, asking it to explain generated code, alternative approaches or the reasoning behind its suggestions. At the very least, label the AI features as "experimental" so they are only available internally for testing purposes, preventing unapproved AI features from accidentally going live. Ultimately, it comes back to the intent of the model: what is it built for? Who is accountable?
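
One lightweight way to enforce that "experimental" label is a runtime feature flag, as in this minimal, hypothetical Python sketch. The flag name and the summarize function are illustrative only; many teams would use their existing feature-flag service instead of an environment variable.

```python
import os

# Hypothetical flag: the AI-assisted path only runs when explicitly enabled,
# for example in an internal test environment, never by default in production.
def ai_summary_enabled() -> bool:
    return os.environ.get("ENABLE_EXPERIMENTAL_AI_SUMMARY", "false").lower() == "true"

def summarize(document: str) -> str:
    if not ai_summary_enabled():
        # Fall back to the approved, non-AI behavior.
        return document[:200]
    # Placeholder for the experimental AI-generated summarization path.
    return "[experimental AI summary] " + document[:200]

if __name__ == "__main__":
    print(summarize("Quarterly report: revenue grew while support tickets fell."))
```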

Vibe coding is here to stay, and it demonstrates how innovation happens. Governance isn't about eliminating risk; it's about making risk visible and manageable.

ShanShan Pa, CIPP/E, CIPP/US, CIPM, CIPT, FIP, is head of AI and data governance at GlobalLogic.