Artificial intelligence has undoubtedly jolted the market and pushed companies to respond and adapt. In my conversations with customers, partners, policymakers and industry peers about this remarkable moment, there is clear recognition of the need for AI policies and governance, but most organizations are still working to put them in place.
This sentiment is validated and quantified by the new Cisco AI Readiness Index. In a survey of more than 8,000 private-sector business and IT leaders across 30 countries, 95% of respondents said their organizations have AI strategies in place or under development. Yet 67% said their organizations lack comprehensive AI policies.
The urgency to develop, deploy and use AI-powered systems cannot come at the expense of effective governance. The risks of AI are real, but they are manageable when thoughtful governance practices serve as enablers of, not obstacles to, responsible innovation.
At Cisco, we looked to our privacy program as the foundation for AI governance.
In 2015, we created a dedicated privacy team to embed privacy by design as a core component of our development methodologies. This team is responsible for conducting privacy impact assessments (PIAs) as part of the Cisco Secure Development Lifecycle. As the use of AI became more pervasive and its implications more novel, it became clear that we needed to build on this foundation of privacy to develop a program matched to the specific risks and opportunities of this new technology.
In 2018, we published our commitment to proactively respect human rights in the design, development and use of AI. Given the pace at which AI was developing and the many unknown impacts — both positive and negative — on individuals and communities around the world, it was important to outline our approach to issues of safety, trustworthiness, transparency, fairness, ethics and equity.
We formalized this commitment in 2022 with our Responsible AI (RAI) Principles, documenting our position on AI in more detail. We also published a Responsible AI Framework to operationalize our approach. The framework aligns with the National Institute of Standards and Technology (NIST) AI Risk Management Framework and sets the foundation for our AI impact assessment process.
We use the assessment in two instances: when our engineering teams are developing a product or feature powered by AI and when Cisco engages a third-party vendor to provide AI tools or services for our own internal operations.
Through the AI impact assessment process, which is modeled on Cisco's PIA program and was developed by a cross-functional team of Cisco subject matter experts, our trained assessors gather information to surface and mitigate risks associated with the intended and, importantly, the unintended use cases for each submission. These assessments examine many aspects of AI and product development, including the model, training data, fine-tuning, prompts, privacy practices and testing methodologies. The ultimate goal is to identify, understand and mitigate any issues related to Cisco's RAI Principles: transparency, fairness, accountability, reliability, security and privacy.
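To make the shape of such an assessment concrete, here is a minimal sketch in Python. It is not Cisco's actual tooling, and every name in it is hypothetical; it simply records findings along the dimensions mentioned above, maps each finding to an RAI principle, and gates sign-off on documented mitigations.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical sketch of an AI impact assessment record; all names are
# illustrative, not Cisco's actual process or schema.

class Risk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

# Dimensions the assessment reviews, per the process described above.
DIMENSIONS = ["model", "training_data", "fine_tuning", "prompts",
              "privacy_practices", "testing_methodology"]

# RAI principles each finding is mapped to.
PRINCIPLES = ["transparency", "fairness", "accountability",
              "reliability", "security", "privacy"]

@dataclass
class Finding:
    dimension: str        # e.g. "training_data"
    principle: str        # e.g. "fairness"
    description: str      # what the assessor observed
    risk: Risk
    mitigation: str = ""  # required before sign-off unless risk is LOW

@dataclass
class AIImpactAssessment:
    submission: str  # a product feature or a third-party vendor engagement
    trigger: str     # "internal_development" or "third_party_vendor"
    intended_use_cases: list[str] = field(default_factory=list)
    unintended_use_cases: list[str] = field(default_factory=list)
    findings: list[Finding] = field(default_factory=list)

    def ready_for_signoff(self) -> bool:
        """Every non-low-risk finding must carry a documented mitigation."""
        return all(f.risk is Risk.LOW or bool(f.mitigation)
                   for f in self.findings)

# Example: one fairness finding on training data blocks sign-off
# until a mitigation is recorded.
assessment = AIImpactAssessment(
    submission="example AI-powered feature",
    trigger="internal_development",
    intended_use_cases=["summarize support tickets"],
    unintended_use_cases=["profiling individual customers"],
)
assessment.findings.append(Finding(
    dimension="training_data",
    principle="fairness",
    description="training set underrepresents some customer segments",
    risk=Risk.MEDIUM,
))
print(assessment.ready_for_signoff())  # False until mitigation is documented
```

In a real program, a record like this would feed review workflows and documentation rather than a simple boolean gate, but the structure mirrors the dimensions and principles described above.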
Just as we’ve adapted and evolved our approach to privacy over the years in step with the changing technology landscape, we know we will need to do the same for AI. Novel AI use cases and capabilities are raising new considerations almost daily. Indeed, we have already updated our AI impact assessments to reflect emerging standards, regulations and innovations. In many ways, we recognize this is just the beginning. That requires a certain humility and a readiness to adapt as we continue to learn, but we are steadfast in keeping privacy at the core of our approach.