Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

While providing cybersecurity and data protection services to the aged care industry over the past five years, I have seen the number of individuals requiring aged care increase significantly.

The World Health Organization predicts the number of individuals aged 60 and above will reach 1.4 billion by 2030 and 2.1 billion by 2050, while the number of people aged 80 and older is expected to triple between 2020 and 2050.

Medical advances in health care and the pharmaceutical industry mean people are living longer than before. Colleagues at aged care facilities say the increase will lead to significant demand for caregivers, medical diagnostic services, and nursing and aged care facilities. The gap between supply and demand in aged care is expected to widen over the next five to 10 years, which is why organizations are exploring ways to fill it.

The chief information officer of one aged care facility procured an artificial intelligence management system that would provide AI agents trained by staff to act as smart caregivers. They would advise the elderly, record medicine taken under staff supervision, and act as social companions. The AI agents would also help with automatic scheduling of staff.

The CIO engaged our organization for advice on the safe and responsible use of AI, keen to ensure no negative impacts from the implementation. Unlike the EU, where the AI Act regulates organizations implementing AI management systems, Australia does not currently have a mandated law on the use of AI. So we recommended conducting an assessment based on the ISO/IEC 42001 controls that are applicable to the AI management system of the aged care facility.

Selecting an AI governance framework

Aged care providers in Australia generally fall into the small and medium-sized business category, usually serving 1,000 to 2,000 residents with a staff of 200 to 800. Budgets are usually limited, which is why it is important to choose the most appropriate AI governance framework, one that comprehensively enhances the security posture of the AI management system.

I started by exploring local acts and frameworks in Australia. The Australian government has not passed any AI-related framework or act. The closest set of principles is the AI Ethics Principles — eight principles designed to ensure AI is safe, secure and reliable — but they are not detailed enough to support an assessment.

The U.S. National Institute of Standards and Technology's AI Risk Management Framework is specifically focused on this area and emphasizes the identification, assessment and mitigation of AI-specific risks throughout the AI life cycle. However, I was searching for an AI governance framework that would go beyond risk management.

As an IAPP Certified AI Governance Professional, I procured and studied the ISO 42001 framework, which I found to be a comprehensive standard to establish, implement, maintain and continually improve an AI management system. After consulting with other members of the team, we provided an overview of our approach to the CIO.

The approach and challenges

ISO 42001 consists of 10 clauses, of which clauses 4–10 and the associated requirements are relevant for an assessment. We conducted the assessment using four different phases — define, implement, maintain and improve — based on how the clauses and requirements were structured. The normative annexes for ISO 42001 that outline the control objectives necessary for AI governance and risk management were also taken into account during the assessment.

The initial challenge was time management of the assessment. Given such an assessment had not been conducted before and the facility's limited funding, we were expected to do more for less and did not have the luxury of including a discovery component within the project.

We planned for the project to occur over one month, beginning by assessing the requirements of Clause 4 — Context of the organization, Clause 5 — Leadership, and Clause 6 — Planning. We were pleased to discover that the context of the organization was clearly defined. For example, the scope was well set, and the requirements of providing smart caregiver AI agents and an automated scheduling system for human caregivers were adequately established.

The organization's leadership was also committed to integrating the AI management system into business processes. The biggest challenge, however, was the lack of a clear policy on how the system would be used and how the organization would ensure fairness, freedom from bias, and human oversight and supervision.

The policy also lacked guidelines regarding AI impact assessments, AI risk assessments, data governance and continuous improvement plans for the AI management system. The roles and responsibilities were also not clearly defined and there was no AI ethics committee or awareness plan for the developers and caregiving staff who would be using the AI management system.

Since the management system was already in place, we noted these gaps and recommended a proper AI policy be drafted per the control objectives in Annexes A and B, specifically the controls around policy development and internal organizational roles and responsibilities.

We also determined the lack of a proper AI policy could have significant adverse impacts on staff and caregivers' morale.

We proposed crafting an AI strategy for the organization to clearly identify the existing risks of the AI management system and how they would be addressed. The strategy would include plans to handle the risks and ensure measurable AI objectives are communicated within the organization, as well as proper management plans to keep the system up to date and aligned with the AI policy and strategy.

We then went on to assess the ISO 42001 clauses related to operations or implementation: Clause 7 — Support and Clause 8 — Operation. The aged care organization had competent staff and resources to support the AI management system but was not clearly documenting the system architecture or the technical procedures being implemented.

The organization also lacked a clear communication, triage and awareness plan due to the absence of an AI policy, and did not conduct AI risk assessments or have a robust AI life cycle plan. We noted these gaps as well and proposed a mechanism for documenting artifacts and processes related to the AI management system; the AI life cycle plan would be designed and built via the AI strategy, in line with the control objectives in Annexes A and B.

After this, we moved on to the monitoring requirements in ISO 42001's Clause 9 — Performance evaluation. There was an absence of monitoring measures, internal audit schedules and management reviews, all of which we proposed to draft for the facility in a performance evaluation plan.

The final assessment was conducted on Clause 10 — Improvement, where we proposed a continuous improvement plan for the AI management system that would include handling any nonconformities.

A report presenting the gaps and recommendations was well-received by the organization's senior stakeholders, who set aside funding to implement our suggested roadmap.

Conclusion

Many organizations are rushing to implement AI management systems and integrate them into business processes without adequate AI governance in place.

In Australia, especially, there are currently no laws mandating the safe and responsible use of AI. This is changing, with proposals under consideration to introduce such laws, but until then it is imperative that organizations looking to implement AI management systems do their due diligence and have proper AI governance safeguards in place.

Our AI governance assessment clearly identified missing safeguards. Using AI systems without human supervision, especially in the aged care industry, can have detrimental consequences, but these can be avoided.

Organizations should embrace new technologies with cautious optimism and be diligent enough to ensure appropriate safeguards and governance are in place to prevent disastrous consequences.

Asadullah Rathore, AIGP, CIPP/E, CIPM, FIP, is a head of professional services at Excite Cyber.