WELCOME TO CENTRE FOR SOCIAL IMPACT TECHNOLOGY
GET TO KNOW US
A city-wide knowledge hub for nurturing dialogue, learning, and action on the convergence of social innovation and digital technology innovation. The vision of the Centre is to catalyze a trust-based, relationship-rich, diverse innovation ecosystem in Calgary around technology that is not only socially beneficial but socially transformative (responsible, open, inclusive, shared, and regenerative).
That's the black box problem in AI: you can't articulate, you can't explain, how the inputs are transformed into outputs. You might not care a lot if we're just talking about pictures of your dog, but it matters if you're making a judgment like: this person is worthy of an interview for a job, this person is unworthy to be admitted to this college, this person deserves a mortgage, this person should have a credit limit of X, this person is likely to commit a crime within the next two years and so should be denied bail...
When you are using AI in high-stakes situations, it's a problem, at least on the face of it, if you can't explain why the AI is making the decisions that it is. You probably want to be able to explain why this person was given, say, a high risk rating and so was denied bail, or why they were given a high risk rating for developing diabetes and so given this course of treatment, or why they were denied a mortgage or credit, etc. That's the black box problem. That's the nature of the beast: machine learning recognizes patterns so complex, in fact, that they defy human understanding.
Reid Blackman, Author of Ethical Machines
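The contrast Blackman draws can be made concrete with a small sketch. The functions, data, weights, and thresholds below are entirely hypothetical, invented for illustration: one credit-decision rule whose factors are visible and explainable, and one stand-in for a trained model whose output emerges from many interacting nonlinear terms, leaving no single human-readable reason.

```python
import math
import random

def transparent_decision(income, debt, late_payments):
    """A hypothetical rule where every factor and threshold is visible,
    so a denial can be explained in terms a person can contest."""
    score = 0.5 * (income / 1000) - 2.0 * debt / income - 3 * late_payments
    reason = (f"score={score:.2f} from income={income}, "
              f"debt={debt}, late_payments={late_payments}")
    return score > 10, reason

def black_box_decision(income, debt, late_payments, seed=7):
    """A stand-in for a trained model: the same inputs pass through many
    arbitrary nonlinear interactions, so no single factor explains the
    output -- the 'black box' Blackman describes."""
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1) for _ in range(30)]
    features = [income / 1000, debt / 1000, late_payments]
    score = sum(w * math.tanh(features[i % 3] * (i + 1))
                for i, w in enumerate(weights))
    return score > 0, "no human-readable reason available"
```

Both functions map the same applicant data to a yes/no decision; only the first can pair that decision with a reason an applicant could understand or dispute.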