Self-learning algorithms are deployed in countless products and services throughout the digital ecosystem, but there have been many instances in which those algorithms exacerbate discrimination. However, Lokke Moerel, Morrison & Foerster senior of counsel and Tilburg University professor, notes, "If self-learning algorithms discriminate, it is not because there is an error in the algorithm, but because the data used to train the algorithm are 'biased.' ... It is only when you know which data subjects belong to vulnerable groups that bias in the data can be made transparent and algorithms trained properly." In this post for Privacy Perspectives, Moerel argues that the "taboo" against collecting sensitive data such as ethnicity or gender "should, therefore, be broken" in order to eliminate "future discrimination." Editor's Note: Moerel will speak about algorithms later this month in Brussels at the IAPP Europe Data Protection Congress.