Editor's note: The IAPP is policy neutral. We publish contributed opinion and analysis pieces to enable our members to hear a broad spectrum of views in our domains.

Australia has continued to confirm its position on artificial intelligence in recent weeks, with Privacy Commissioner Carly Kind signing a joint statement on building trustworthy data governance frameworks to encourage the development of innovative and privacy-protective AI. Information Commissioner Elizabeth Tydd gave a speech at the Artificial Intelligence, Law and Society conference in February, advocating for three key principles to be applied when using AI: transparency, regulatory cohesion and regulatory effectiveness.

However, Australia does not currently have specific laws that govern AI, unlike jurisdictions such as the EU with its AI Act. Instead, federal government departments have confined AI-specific policy to a number of guidance notes and standards, including the Office of the Australian Information Commissioner's guidance on privacy and generative AI models and the Department of Industry, Science and Resources' Voluntary AI Safety Standard and proposed mandatory guardrails for AI in high-risk settings.

OAIC guidance on privacy and generative AI models

In its guidance, the OAIC summarized five key principles for AI developers that are subject to the Privacy Act. Privacy Act obligations will most likely apply to organizations with an annual turnover of AUD 3 million or more. The OAIC's AI principles consider how Australian Privacy Principles 1, 3, 5, 6 and 10 apply in the context of generative AI.

Key Principle 1: Accuracy when training AI models

Developers must take reasonable steps to ensure personal information that is collected, used or disclosed is accurate, up-to-date and complete and, where personal information is used or disclosed, relevant to the purpose of the use or disclosure. This principle closely mirrors APP 10 and may pose significant challenges for developers in establishing the accuracy of personal information that may, inadvertently or not, be included in the large datasets required to train generative AI models.

Key Principle 2: Privacy laws still apply to publicly available personal information used to train AI models

A developer must ensure that the collection of even publicly available personal information to train an AI model complies with privacy laws, including the requirement that the information be obtained fairly. The OAIC considers the collection of personal information unfair if the individual in question is not aware of it, as can be the case with data scraping.

Developers should first determine the current, not future, purpose of the AI model to establish whether it can be trained without using personal information, or at least with a smaller amount, or fewer categories, of it. Developers should also refer to APP 3, and the notification requirements in APP 5, regarding the notice and transparency obligations owed by an organization to individuals when collecting their personal information.

Key Principle 3: Use of sensitive information when training AI models

Developers will generally require consent from individuals to use sensitive information when training AI models, and such consent can be difficult to demonstrate or obtain if the information has been scraped from another source. Developers should confirm whether sensitive information has been included in the training dataset and delete it if they cannot show either valid consent for its collection or an applicable exception to the consent requirement. Again, APP 3 should be reviewed at this point.
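To make this concrete, below is a minimal sketch of what a pre-training screening step might look like in practice. The record structure, the SENSITIVE_KEYWORDS list and the consent_registry lookup are all hypothetical illustrations, not anything prescribed by the OAIC; a real pipeline would need far more robust detection of sensitive information than keyword matching.

```python
# Hypothetical pre-training screen: drop records that appear to contain
# sensitive information unless valid consent is recorded for that individual.
# All names and structures here are illustrative assumptions, not OAIC
# requirements.

# Crude illustrative markers of sensitive categories (health, beliefs, etc.).
SENSITIVE_KEYWORDS = {"diagnosis", "religion", "ethnicity", "sexual orientation"}

# Hypothetical consent registry: individual ID -> consent on record.
consent_registry = {"user-042": True}

def contains_sensitive_info(text: str) -> bool:
    """Flag records that appear to contain sensitive information."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in SENSITIVE_KEYWORDS)

def screen_training_records(records: list[dict]) -> list[dict]:
    """Keep a record only if it is non-sensitive or consent is on file."""
    kept = []
    for record in records:
        if not contains_sensitive_info(record["text"]):
            kept.append(record)
        elif consent_registry.get(record["individual_id"], False):
            kept.append(record)
        # Otherwise the record is excluded (deleted) from the training set.
    return kept

if __name__ == "__main__":
    sample = [
        {"individual_id": "user-041", "text": "Enjoys hiking on weekends."},
        {"individual_id": "user-042", "text": "Shared a medical diagnosis."},
        {"individual_id": "user-043", "text": "Posted about their religion."},
    ]
    print(screen_training_records(sample))  # user-043's record is dropped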

Key Principles 4 and 5: Use and disclosure obligations for AI models

Under APP 6.1, organizations, including developers, can only use personal information for the primary purpose for which it was collected, subject to some exceptions. One exception is where the individual has consented to the use of their personal information for the secondary purpose of training an AI model, in which case the developer should ensure information about the AI model is as accessible as possible to satisfy the "adequately informed" criterion, among the other requirements for valid consent set out in the guidance note.

Another exception to the prohibition on using data for a secondary purpose is the "reasonable expectations" test, under which the use of personal information is still valid if the individual would reasonably expect it to be used for the secondary purpose. Here, the OAIC notes that when training AI models, "updating a privacy policy or providing notice by themselves will generally not be sufficient to change reasonable expectations regarding the use of personal information that was previously collected for a different purpose."

Additionally, the secondary purpose needs to be related (or, in the case of sensitive information, directly related) to the primary purpose of collection. The OAIC points out that a direct link for sensitive data "will be difficult to establish where information collected to provide a service to individuals will be used to train a generative AI model that is being commercialised outside of the service (rather than to enhance the service provided)." If developers cannot establish a valid secondary purpose via individual consent or reasonable expectations, the OAIC recommends providing an opt-out mechanism with sufficient information about the intended secondary use.
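As a rough illustration of how such an opt-out might be honored in a training pipeline, the sketch below filters out data belonging to individuals who have opted out before the secondary use occurs. The opt_outs store and the record fields are hypothetical; the OAIC guidance does not prescribe any particular implementation.

```python
# Hypothetical opt-out filter applied before personal information is reused
# for the secondary purpose of training an AI model. Field names and the
# opt-out store are illustrative assumptions only.

# Individuals who have exercised the opt-out for AI training.
opt_outs = {"user-007", "user-019"}

def exclude_opted_out(records: list[dict]) -> list[dict]:
    """Remove records for individuals who opted out of the secondary use."""
    return [r for r in records if r["individual_id"] not in opt_outs]

training_set = exclude_opted_out([
    {"individual_id": "user-007", "text": "Customer support transcript."},
    {"individual_id": "user-123", "text": "Product review."},
])
print(training_set)  # only user-123's record remains
```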

Department of Industry, Science and Resources' Voluntary AI Safety Standard

While the OAIC guidance provides a number of practical tips and examples for developers to meet their privacy obligations when training and implementing generative AI models, the Department of Industry, Science and Resources' Voluntary AI Safety Standard sets out 10 guardrails to guide organizations more generally on the use of AI.

  1. Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance.
  2. Establish and implement a risk management process to identify and mitigate risks.
  3. Protect AI systems and implement data governance measures to manage data quality and provenance.
  4. Test AI models and systems to evaluate model performance and monitor the system once deployed.
  5. Enable human control or intervention in an AI system to achieve meaningful human oversight.
  6. Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content.
  7. Establish processes for people impacted by AI systems to challenge use or outcomes.
  8. Be transparent with other organizations across the AI supply chain about data, models and systems to help them effectively address risks.
  9. Keep and maintain records to allow third parties to assess compliance with guardrails.
  10. Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness.

Many of the recommendations in both the Voluntary AI Safety Standard and the OAIC's guidance note are similar to the obligations for providers of AI systems enshrined in the EU AI Act. However, the Australian government has not yet made AI-specific obligations legally binding on deployers of AI systems. As mentioned, developers are still legally required to comply with the Australian Privacy Act if the AI system involves personal information. The closest analogue to binding AI obligations in a general context is the proposed mandatory guardrails for AI in high-risk settings, but those have not progressed beyond a September 2024 public consultation. Without an enforceable regime specifically for AI, Australia may struggle to achieve the regulatory cohesion and effectiveness the government currently aspires to.

Rosie Evans, CIPP/E, is a senior investigator with the Australian Competition and Consumer Commission.