Over the past decade, computer vision and machine learning have enjoyed growing popularity across multiple research domains, producing state-of-the-art results in applications that were long assumed to be difficult for computers to solve. Tasks such as image classification, scene understanding, and image captioning have been addressed with deep neural networks to exceptional results. Deep learning is an area of machine learning that uses neural networks to recognize patterns in high-dimensional data such as images.
Despite this strong performance, it is often hard to understand the reasoning behind the decisions a deep learning model makes when evaluating an image. Because these decisions are so difficult to make sense of, some researchers regard deep learning in its current state more as alchemy than as real science.
Recent work has shown that for systems in which AI decisions are critical, it is imperative that the model's decision process be understood by its human operators. Much effort has gone into explaining model decisions, but these explanations often require technical AI knowledge, which makes them incomprehensible to operators without an AI background.
In this project, we will study the possibilities of converting the output of classical explainable AI (XAI) techniques into easy-to-understand text.
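As a purely illustrative sketch of what such a conversion could look like, the snippet below turns a saliency-style attribution map (for example, a Grad-CAM heatmap) into a one-sentence explanation. The region grid, the wording template, and the synthetic saliency map are hypothetical placeholders, not the project's actual method.

```python
# Minimal sketch: summarising a saliency-style XAI output as plain text.
# Everything here (region names, template, synthetic data) is illustrative only.
import numpy as np


def explain_saliency(saliency: np.ndarray, predicted_label: str) -> str:
    """Describe which image region contributed most to the prediction."""
    h, w = saliency.shape
    rows = ["top", "middle", "bottom"]
    cols = ["left", "centre", "right"]

    # Average the attribution inside each cell of a 3x3 grid of named regions.
    region_scores = {}
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            block = saliency[i * h // 3:(i + 1) * h // 3,
                             j * w // 3:(j + 1) * w // 3]
            region_scores[f"{r} {c}"] = float(block.mean())

    best_region = max(region_scores, key=region_scores.get)
    share = region_scores[best_region] / (sum(region_scores.values()) + 1e-9)

    # Fill a simple sentence template with the most influential region.
    return (f"The model predicted '{predicted_label}' mainly because of the "
            f"{best_region} of the image ({share:.0%} of the region-level attribution).")


if __name__ == "__main__":
    # Synthetic saliency map standing in for a real XAI heatmap.
    rng = np.random.default_rng(0)
    saliency = rng.random((224, 224))
    saliency[:75, 150:] += 2.0  # pretend the top-right region dominates
    print(explain_saliency(saliency, "golden retriever"))
```

A template-based summary like this is only a starting point; the project will investigate how far such textual explanations can go while staying faithful to the underlying XAI technique.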
Questions?
Have a question about it? You can send me an email.