In an in-depth article for The New York Times Magazine, Cliff Kuang explores the limited transparency surrounding decisions made by artificial intelligence systems, and how this lack of accountability is a growing concern in AI research. “It’s a more profound version of what’s often called the 'black box' problem — the inability to discern exactly what machines are doing when they’re teaching themselves novel skills,” Kuang writes. With the EU General Data Protection Regulation set to take effect in 2018, requiring machine-based decisions to be explainable, the pressure to develop transparent approaches to machine learning is mounting. (Registration may be required to access this story.)
Full Story