Learning Symbols for Trustworthy AI
Dr. Mayur Naik
Recent advances in deep learning have led to novel AI-based solutions to challenging computational problems. Yet state-of-the-art models do not provide reliable explanations of how they make decisions and can make occasional mistakes even on simple problems. The resulting lack of assurance and trust is an obstacle to their adoption in safety-critical applications. Neurosymbolic architectures aim to address this challenge by bridging the complementary worlds of deep learning and logical reasoning via explicit symbolic representations. In this talk, I will describe representative neurosymbolic systems and how they enable more accurate, interpretable, and domain-aware solutions to problems in medicine and robotics.
