An article in the Harvard Business Review suggests that one way to ensure algorithms deliver unbiased decisions is to engage an auditing system that verifies specific considerations, such as transparency, fairness and competency. As algorithmic decision-making and artificial intelligence take greater hold of decision-making environments, the authors write, their impact becomes societal. They note that, much like companies use auditing to shed light on internal operations, a similar process could be used to break down "black box" algorithms. They add, "adopting systems of governance and auditing helped ensure that businesses broadly reflected societal values." Editor's Note: Lokke Moerel's recent Privacy Perspectives post on machine learning looks at how to eliminate discrimination in algorithms.