
Deep learning is powering some amazing new capabilities, but we find it hard to scrutinize the workings of these algorithms. Lack of interpretability in AI is a common concern, and many are trying to fix it, but is it really always necessary to know what's going on inside these "black boxes"? In a recent perspective piece for Science, Elizabeth Holm, a professor of materials science and engineering at Carnegie Mellon University, argued in defense of the black box algorithm. I caught up with her last week to find out more.

Edd Gent: What's your experience with black box algorithms?

Elizabeth Holm: I got a dual PhD in materials science and engineering and scientific computing. I came to ac...

17 April