As businesses look to adopt various artificial intelligence models for different functions across their enterprise, controls for governing those models and the data that supports them are rapidly becoming a top business priority.

Securiti is the latest enterprise data management vendor to enter the AI governance space with the recent launch of its AI Security and Governance solution. The tool offers customers a combination of AI model discovery, AI risk ratings, organizational data and AI tool mapping, and AI security and privacy controls.

Securiti Vice President of Marketing Eric Andrews said the new solution is an extension of its Data Command Center and seeks to apply the same data intelligence and automated privacy workflows to organizational governance of AI systems. 

Data privacy and security professionals are being moved to the front lines of initiatives to integrate AI models into their companies' technology stacks. Andrews said the first issue those professionals face is understanding the full scope of the growing AI landscape.

"Everybody is now bubbling around how to handle AI, what to do with AI; there's clearly a lot of business advantages," Andrews said. For those responsible with data governance, the first thing they often grapple with is "just getting their arms around all the different AI models that exist" within the enterprise "and trying to get inventory of that," he said. 

Securiti's new solution works in a five-step process. First, it catalogs each AI model interacting with a given enterprise's data inventory. The next step evaluates the risk level of those inventoried AI models to protect against so-called "shadow AI," the use of unvetted AI systems outside the purview of an organization's governance program, which can create compliance risks.
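Conceptually, those first two steps amount to building an inventory of models and assigning each a risk score. The sketch below is a minimal, hypothetical illustration of that idea in Python; the record fields and scoring criteria are assumptions for clarity and do not reflect Securiti's actual implementation or schema.

```python
from dataclasses import dataclass, field

# Hypothetical record for a discovered AI model; all fields are illustrative only.
@dataclass
class AIModelRecord:
    name: str
    owner: str                         # team or developer who deployed it
    vetted: bool                       # approved by the governance program?
    data_sources: list = field(default_factory=list)
    handles_sensitive_data: bool = False

def risk_rating(model: AIModelRecord) -> str:
    """Assign a coarse risk level; the criteria here are assumed for illustration."""
    if not model.vetted:
        return "high"                  # unvetted "shadow AI" is flagged immediately
    if model.handles_sensitive_data:
        return "medium"
    return "low"

# Step one: build the inventory. Step two: surface the risk level of each entry.
inventory = [
    AIModelRecord("support-chatbot", "cx-team", vetted=True,
                  data_sources=["tickets_db"]),
    AIModelRecord("dev-experiment-llm", "eng-sandbox", vetted=False,
                  data_sources=["customer_db"]),
]

for model in inventory:
    print(model.name, "->", risk_rating(model))
```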

"This is different than the traditional idea of just discovering data, because it's not discovering data anymore, you're actually discovering the models themselves, and this is the new animal that needs to be managed," Andrews said. "You might have developers, for example, that are spinning up new models as they're playing around with different things. As an organization, you may have no idea what they're dabbling in, and which models they're bringing into the environment and what data they're putting in those models."

The third and fourth steps enable customers to map their company's data flows through the AI models employed and implement whatever controls are necessary to ensure data security and compliance with global data protection laws. Andrews indicated "many types of controls" can be applied in these steps, including access controls or capabilities to help examine the data being fed to chatbots.

"For example, this database has a lot of sensitive financial information about the company," Andrews said. "So, I want to be very careful to make sure that there's no AI models tapping into that database, because that might inadvertently feed sensitive information into them."

The AI Security and Governance solution is programmed to adhere to the latest regulatory requirements, including provisions in the proposed EU AI Act and the U.S. executive order on AI. Guidance such as the U.S. National Institute of Standards and Technology's AI Risk Management Framework is also addressed by the solution.

"Our solution is based on a framework we call data and AI contextual intelligence, and our observation is that you really need to understand all of the context around your company’s data,” Andrews said. "We're automatically figuring out all (data's) context and all the regulatory laws are built into our system. All of the new AI frameworks are built into our system, so we can immediately bubble up any risks the solution identifies."