Actuarial AI Risk Manager (Responsible AI), GSK, London, UK

This role will assess AI models to ensure they deliver fair, unbiased outcomes, respect privacy, and align with legal and ethical frameworks.

Detailed Position Description:

At GSK, we are at the forefront of integrating AI and machine learning into our Human Resources processes and tools. The infusion of AI technology promises to revolutionise HR practices by enhancing decision-making, streamlining operations, and personalising employee experiences. However, it’s crucial that these innovations are implemented responsibly, ensuring fairness, transparency, and compliance with ethical standards. To navigate this landscape, we are seeking an Actuarial AI Risk Manager to join our HR analytics team. This role will be pivotal in assessing AI models to ensure they deliver fair, unbiased outcomes, respect privacy, and align with legal and ethical frameworks.

Responsibilities:

In this role, you will:

  • Conduct comprehensive bias testing and technical reviews of AI models to assess their fairness, accuracy, transparency and explainability, and develop frameworks for impact assessments and auditing.
  • Evaluate the inherent risks associated with AI-driven HR processes, implement controls, and report on residual risks to manage them effectively.
  • Collaborate on HR data science projects, contributing to research and development within the AI ethics domain.
  • Design, monitor and explain results from AI experiments impacting HR processes, ensuring they align with company goals and ethical standards.
  • Work closely with process owners to interpret AI experiment outcomes and recommend actionable next steps.
  • Foster an organisational culture that recognises the imperativeness of fairness and ethical considerations in AI applications.
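As a purely illustrative example of the kind of bias testing described above, the sketch below computes the demographic parity difference — the gap in positive-outcome rates between groups — for a hypothetical screening model. The data, group labels, and metric choice are assumptions for illustration, not GSK's actual methodology.

```python
# Illustrative sketch only: a minimal demographic parity check of the kind
# used in bias testing of AI-driven HR decisions. All data and labels here
# are hypothetical.

def demographic_parity_difference(outcomes, groups):
    """Return the max gap in positive-outcome rates across groups.

    outcomes: iterable of 0/1 model decisions (e.g. 1 = shortlisted)
    groups:   iterable of group labels, aligned with outcomes
    """
    counts = {}  # group -> (total, positives)
    for outcome, group in zip(outcomes, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + outcome)
    rates = {g: p / t for g, (t, p) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical screening decisions for two candidate groups:
# group A is shortlisted at 3/4 = 0.75, group B at 1/4 = 0.25.
outcomes = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"Demographic parity difference: "
      f"{demographic_parity_difference(outcomes, groups):.2f}")  # 0.50
```

In practice, a fairness review would combine several such metrics (e.g. equalised odds, calibration) with qualitative assessment, since no single statistic captures fairness on its own.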

Qualifications and skills:

Why you?

  • A minimum of a master’s degree in computer science, maths, physics, engineering, economics, public policy, or a similar field, or equivalent research qualification and experience.
  • 3+ years’ experience conducting bias testing, fairness assessment, and ethical evaluation of AI systems.
  • Extensive experience in Python programming and knowledge of machine learning and statistics.
  • Familiarity with legal and ethical frameworks and standards for AI, such as the NIST AI Risk Management Framework, the proposed European Union AI Act, and the Singapore Model AI Governance Framework, as well as an understanding of data privacy laws and regulations.
  • Experience in designing and implementing AI experiments, with a track record of innovative problem-solving.
  • Proven ability to stay abreast of developments in the field of AI ethics, with a commitment to continuous learning and improvement.
  • Excellent collaboration and communication skills, with the ability to engage both technical and non-technical stakeholders.
  • Ability to influence decision-making and promote the significance of ethical AI use within the organisation.

Preferred qualifications and skills:

If you have the following characteristics, it would be a plus:

  • A PhD in computer science, applied math, statistics, physics, or related field.
  • Prior experience working in HR.
  • Ability to digest, synthesize, and implement innovative methods from scientific literature.

Application Submission Information:

Please apply via https://jobs.gsk.com/en-gb/jobs/390764?lang=en-us&previousLocale=en-GB