With the introduction of the EU Artificial Intelligence Act, there is now a legal framework for AI, though resolution is still needed on issues like exceptions for police investigations and national security.

Some investors are hedging their bets on the next wave of AI, recognizing that "current-generation AI is mostly crap." Meanwhile, the market must still determine the actual utility of the AI deluge, sold as game-changing, disruptive and nothing less than an existential risk to the human species.

Distinguishing valuable technology from the hype is traditionally difficult, especially with digital technology. Metaverse, anyone? NFTs?

So, a limited experiment might be the best approach for companies planning to adopt AI. This should consider the real-world constraints of the company's operations, as creativity often flourishes within limitations.

A few ground rules can help ensure AI endeavors are beneficial both technically and in practice.

Developers are not users

Jakob Nielsen famously said, "Your best guess is not good enough." The poet John Keats wrote that "what the imagination seizes as beauty must be truth," but I submit that what the developer perceives as a user problem most often is not.

Developers understand technology. Users understand the problems they want solved, but they often focus on immediate obstacles rather than deeper needs and are rarely able to articulate what they want in terms a developer can act on.

Collaboration between users and developers from the start is crucial to avoid unnecessary tech solutions for unchanged processes. Structured meetings and shared understanding documents are essential to ensure alignment. Without these, there is risk of engaging in fruitless activity and wasting time and resources.

Generative AI content is not production-ready by design

Generative AI is inherently nondeterministic, meaning responses to the same prompt will vary, and "hallucinations" ― made-up, incorrect but plausible outputs ― are common.
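
To make that nondeterminism concrete, here is a minimal, self-contained sketch; it is not any vendor's API, and the token scores are invented for illustration. Generative models pick each next token by sampling from a probability distribution, so any "temperature" above zero makes repeated runs of the same prompt diverge.

    import random

    # Hypothetical next-token scores for one fixed prompt (illustrative only).
    candidates = {"Paris": 0.6, "London": 0.25, "Berlin": 0.15}

    def sample_next_token(scores, temperature):
        if temperature == 0:
            # Greedy decoding: always the highest-scoring token, fully repeatable.
            return max(scores, key=scores.get)
        # Temperature > 0: reweight the scores and draw at random,
        # so identical prompts can yield different outputs.
        weights = [p ** (1 / temperature) for p in scores.values()]
        return random.choices(list(scores), weights=weights, k=1)[0]

    print([sample_next_token(candidates, temperature=1.0) for _ in range(5)])  # varies
    print([sample_next_token(candidates, temperature=0) for _ in range(5)])    # stable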

Calling for prompt engineering merely gives the problem a name. Unlike a human's answers, an AI's responses cannot be trusted or even interpreted without knowing the prompt that produced them.

Human oversight is necessary to vet AI outputs, as accountability lies with humans and not machines.

Numbers are easy, significance is hard

Generating statistics and inferences from data is straightforward, but ensuring their significance and accuracy is challenging.

Claims like "99% accuracy" can be misleading without rigorous evidence to back them up, and a 1% error rate is huge when you work with millions of data points. Be prepared to explain and justify your AI's performance, particularly regarding false positives and overall reliability.
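
To see why, run the arithmetic. The back-of-the-envelope Python sketch below uses figures chosen only to illustrate the scale; they are assumptions, not benchmarks of any real system.

    records = 10_000_000            # rows scored by a hypothetical classifier
    accuracy = 0.99                 # the headline "99% accuracy" claim
    errors = records * (1 - accuracy)
    print(f"{errors:,.0f} wrong decisions")   # 100,000 wrong decisions

    # With a rare target (1% prevalence) and a 1% false-positive rate,
    # false alarms equal the true hits: half of everything flagged is wrong.
    prevalence, false_positive_rate, recall = 0.01, 0.01, 0.99
    true_hits = records * prevalence * recall                        # 99,000
    false_alarms = records * (1 - prevalence) * false_positive_rate  # 99,000
    print(f"{false_alarms / (true_hits + false_alarms):.0%} of flags are false alarms")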

Authorities or the public may scrutinize your AI's dependability, so transparent and factual explanations are crucial. Remember the fallout from the Cambridge Analytica scandal as a cautionary example.

Personal data is people

Protecting personal data is not just about privacy, but about shielding people from harm. Data seems insignificant only because we do not foresee how it can be weaponized against us.

Illustrating this danger is Cardinal Richelieu's quote, "If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him."

Also, the data industry assumes personal data need not be fully truthful or accurate to be valuable. This mindset will lead to significant ethical and legal issues.

Everyone handling data shares responsibility for its potential misuse, and AI technology must be made to prioritize citizens' welfare and ethical considerations.

Data safety

Most data you will handle is personal data, which means compliance with the EU General Data Protection Regulation and other legislation.

You may still think data is the new oil. Think again. It's the new asbestos.

You will deal with personal data either as a controller, deciding the purposes and means of processing, or as a processor, acting on a controller's instructions. Personal data always belongs to the individual; controllers and processors are merely custodians.

Each has a duty of care to ensure data accuracy, integrity and confidentiality. This means complying with various regulations, including the EU GDPR, the Data Act and AI Act, and ensuring access to data is strictly on a need-to-know basis.

When introducing AI, prioritize sandboxing, compartmentalization and fail-safe processing: essential precautions even before an AI project becomes a business venture. To try AI and live to tell about it, the question is not whether you are being paranoid but whether you are paranoid enough. Even Big Tech is getting less and less leniency.
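
As one hedged illustration of what "fail-safe" and need-to-know can mean in code, here is a deny-by-default gate in front of an experimental model; the field names and the allowlist are invented for the example, not a prescribed design.

    # Compartmentalization: only fields explicitly cleared for the sandbox ever leave.
    ALLOWED_FIELDS = {"order_total", "product_category", "country"}

    def to_sandbox(record: dict) -> dict:
        # Names, emails and free text are dropped by default, not by exception.
        return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

    def process(record: dict):
        try:
            payload = to_sandbox(record)
            # ... hand `payload` to the experimental model here ...
            return payload
        except Exception:
            # Fail safe: on any error, do nothing rather than risk leaking data.
            return None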

To experiment with AI and remain compliant, caution is necessary to navigate the complex regulatory landscape and protect personal data effectively.

Lawful reprocessing of data

Data typically results from contracts with employees, customers, clients or partners. Before using data for AI, ensure you have the legal right to do so. Do not assume permissions, and do not just scrape data off the internet without legal backing. Many companies have faced severe penalties for such practices.

Contracts not designed with AI reuse in mind may not provide appropriate lawful bases for experimentation. For instance, Meta's misuse of "legitimate interest" and "contractual necessity" resulted in substantial fines.

Ensure your contracts allow for AI experimentation, and be prepared to guarantee data subject rights, including access, rectification and deletion.

Lawful use of your own data

Your company's own data falls into two categories: employee data and business data.

Employee data, unless anonymized and aggregated, remains the property of employees. As an employer, you must keep this data secure, accurate and safe from unauthorized access under the GDPR. Your only available legal bases for processing are contract and legal obligation, and AI experiments fall under neither. You will need to collect consent, as you cannot process employee data without a legal basis.

For nonpersonal business data, such as production or sales data, the GDPR does not apply.

However, data derived from human activity must be carefully anonymized to avoid being classified as personal data.
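
A minimal sketch of what "carefully anonymized" can look like in practice: aggregate, drop identifiers and suppress small groups. The column names, the threshold and the pandas-based approach are assumptions for illustration, not legal advice.

    import pandas as pd

    K = 10  # suppress any group so small that individuals could be singled out

    def aggregate_sales(df: pd.DataFrame) -> pd.DataFrame:
        grouped = df.groupby(["region", "month"]).agg(
            customers=("customer_id", "nunique"),
            revenue=("amount", "sum"),
        ).reset_index()
        # Cells built from fewer than K individuals are easy to re-identify: drop them.
        safe = grouped[grouped["customers"] >= K]
        # Publish only the aggregate figures, never the identifying columns.
        return safe[["region", "month", "revenue"]]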

Chatbot applications

The issue with dialog-based natural language interfaces, known as the ELIZA effect and present since the inception of chatbots, is the human tendency to overestimate a bot's intelligence, the truthfulness of its responses and the applicability of its answers.

Human empathy toward chatbots, combined with practices like "nudging" and "neuromarketing," can lead to manipulation engines.

Attempts to frame generative AI as a general-purpose answering engine have failed miserably. It can be useful in context-constrained applications, but its economic viability remains uncertain.

Intelligent assistant applications

"Smart" calendars have been in development since the age of the Blackberry. Systems designed to augment human intellect were first postulated in 1945, led to the invention of hypertext and the World Wide Web, and have subsequently been the failed promise of the Apple Newton, the Palm Pilot and all generations of mobile applications since ― and of all research aimed at making the web suck less. Semantic Web, anyone?

There is also a tension between what is desirable and what people are comfortable sharing with selected partners, otherwise known as whoever puts money on the table. Security concerns persist, especially with Internet of Things applications, which range from insecure low-end appliances to ethically questionable high-end products like Amazon Ring and Alexa.

There may still be potential for private, on-device assistants. Then again, there may not.

Legal applications

Tech attempts to "solve" law have been ongoing for decades. Many logic-challenged tech enthusiasts have taken Lawrence Lessig's famous "code is law" statement as literal truth, and disaster has ensued.

Automated contract drafting and blockchain-based "smart" contracts fail miserably due to the inherently language-based, human nature of the law. The current support for redlining and versioning makes one nostalgic for paper and would probably not come out on top economically, if anybody dared challenge the "thou shall tech" mantra and ran the numbers.

AI's role in a company is not to automate everything, but to find automations that improve the bottom line. Tech cannot replace the need for human judgment in nonclerical legal processes, but collaboration between lawyers and techies has improved precedent search and case building.

Accountability

AI cannot be made accountable, and Big Tech's beloved accountability-free algorithmic regulation is a flawed concept. The EU AI Act sets clear accountability requirements and the law will hold companies accountable for software-related harms.

Companies must prepare for accountability when deploying AI. Legal consequences for AI-related violations extend to managers and CEOs. Developers should code with the expectation that regulatory authorities will scrutinize their work.

Conclusion

Implementing AI involves navigating complex legal and ethical landscapes.

Ensuring lawful data use, understanding the limitations of AI applications and maintaining accountability are crucial for successful AI integration.

Companies must address these challenges to avoid legal repercussions and maintain public trust. Be prepared.

Walter Vannini, CIPP/E, is the global data protection officer at PPRO. Opinions are his own.