SimpleMind AI

Embedding deep neural networks within Cognitive AI for machine reasoning and automatic parameter tuning


Didactic Session (08:00-10:10 GMT+8)

Welcome & Intro | Brittleness of NNs & Explainable AI | Medical AI Meets Reality | Keynote: A New Cognitive AI Paradigm | Machine Reasoning: A Practical Implementation | A Virtual Data Scientist: Automatic Parameter Tuning

[08:00] Welcome & Introduction (10 min)

Matthew Brown, PhD

Matthew Brown, director of UCLA's Center for Computer Vision and Imaging Biomarkers, will introduce this year's SimpleMind tutorial at MICCAI 2022.


[08:10] Brittleness of Neural Networks & Explainable AI (20 min)

William Hsu, PhD

The rapid advancement of artificial intelligence (AI) and machine learning (ML) techniques has yielded models whose sensitivity and specificity rival those of trained human experts. However, adversarial and real-world examples have demonstrated that such models can fail in unexpected ways and in situations that are obvious to a human observer. This has prompted concerns over their reliability and trustworthiness in real-world clinical practice and has hampered clinical application. Explainable AI and Cognitive AI are emerging fields that aim to address these challenges.

In this talk, I will review the limitations of current deep neural network models and their “brittleness” in the face of new and real-world data. We will also explore the key role of Explainable AI techniques in creating more transparent systems that enable a better understanding of modes of failure and promote trust in AI outputs. I will present a taxonomy of different types of explanations and highlight techniques that interrogate the model based on internal structure or the model's response to perturbations in the input. I will discuss experiences in applying and interpreting the results of these techniques drawn from my work in lung and breast cancer screening and other published research.
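As a concrete illustration of the perturbation-based family of techniques mentioned above, the sketch below computes a simple occlusion-sensitivity map. It is a minimal example under stated assumptions: the `model` callable, patch size, and baseline value are illustrative stand-ins, not taken from the talk.

```python
import numpy as np

def occlusion_sensitivity(model, image, patch=16, baseline=0.0):
    """Perturbation-based explanation: slide an occluding patch across the
    image and record how much the model's score drops at each location.
    `model` is any callable mapping a 2-D image to a scalar score
    (a hypothetical stand-in for a trained classifier)."""
    h, w = image.shape
    reference = model(image)                       # score on the unmodified image
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            perturbed = image.copy()
            perturbed[i:i + patch, j:j + patch] = baseline   # occlude one patch
            heatmap[i // patch, j // patch] = reference - model(perturbed)
    return heatmap   # large values mark regions the prediction depends on
```

Regions where occlusion causes the largest score drop are the ones the model relies on most; when those regions fall outside clinically plausible anatomy, such a map surfaces a potential failure mode to the reader.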

Finally, I will assess the limitations and opportunities of current work in interpretable AI/ML, emphasizing the need for visualizations that aid clinical end-users in understanding model outputs and for tools that ensure the validity of model predictions over time. Future users of these models should not treat them as a “black box” but should demand greater transparency from model developers in conveying the rationale behind the chosen representation, how the model was trained, and the explanation associated with a model's prediction.

Learning Objectives
  1. Understand issues contributing to AI models' lack of generalizability (“brittleness”) to new and real-world data.
  2. Learn about Explainable AI techniques that can contribute to the trustworthiness of AI models.
  3. Appreciate how the need for reliability in clinical practice, coupled with insights from Explainable AI, can drive the development of a Cognitive AI layer that guides deep neural networks.

[08:30] Medical AI Meets Reality (20 min)

Jonathan Goldin, MD PhD

The strength of AI is that it can recognize and quantify complex patterns in images in a reproducible way, including image features not detectable by the eye. AI can also enable the aggregation of multiple data sources, including images, genomics, and clinical data, into powerful integrated diagnostic systems. AI-based monitoring can capture a large number of complex discriminative features across images over time that go beyond those measured by human readers. However, the detection and measurement of radiomic features, and the ML models built on them, are subject to measurement variation due to differing image acquisition parameters and physiologic conditions. Additionally, despite the numerous publications on their use, these algorithms need more extensive clinical validation before they can be routinely used in treatment assessment.

A major barrier to the development of clinically useful algorithms is the lack of sufficient good-quality data to develop and clinically validate candidate algorithms. There are tremendous challenges with respect to data collection, curation, bias, and reliability. Although private and public data sets are becoming more available, these data are very heterogeneous in terms of acquisition, image quality, and curation, and at the same time lack the size and variability needed for reliable real-world application. In addition, algorithm output performance is currently prioritized over analysis of the nature and clinical impact of errors. It has also been observed that some errors are obvious to a human observer yet difficult to explain. Such concerns have given rise to a new field of active research to improve the interpretability of AI. They also motivate the need for additional knowledge layers that avoid mistakes and provide transparency and trustworthiness to decision making. This will likely be important for increasing the acceptance of AI assessments by both clinicians and regulators, enabling AI to transition into routine clinical practice.

These challenges likely explain, at least in part, why, despite the thousands of published studies of algorithms in medical imaging, very few prospective trials demonstrating their clinical validation have been published. Demonstrating clinical utility goes beyond performance validation to the testing of clinically meaningful endpoints. High performance on commonly used analytic validation measures, including area under the receiver operating characteristic curve, sensitivity, and specificity, does not replace the need to also demonstrate clinical validity, which requires evaluating the AI algorithm's output with respect to patient outcomes. AI is expected to significantly influence the practice of radiology, but crucial issues of reliability, trustworthiness, and validation must still be resolved to enable translation to clinical practice.

Learning Objectives
  1. Understand the current state of AI use in routine radiology clinical practice and the barriers to more widespread adoption.
  2. Learn about the need for increased AI reliability, generalizability, and trustworthiness.
  3. Gain insights into the need for, and potential of, machine reasoning to make AI decision making more reliable and transparent.

[08:50] Keynote: A New Cognitive AI Paradigm - From Neural Networks to SimpleMind (30 min)

Matthew Brown, PhD

Currently, the work of a data scientist training deep neural networks involves hand tuning of deep neural network (DNN) architectures, learning hyperparameters, and algorithms for pre-/post-processing of DNN inputs/outputs. Their knowledge is coded ad hoc in scripts, with limited application of common-sense reasoning and only limited hand tuning of parameters (both in learning and in pre-/post-processing). The result is application-specific Narrow AI that is typically suboptimal in terms of parameter search and limited in the level of knowledge and reasoning applied. These shortfalls impact the performance of AI systems and leave them vulnerable to errors that are obvious to a human, thereby limiting real-world clinical application and adoption.

Cognitive AI is broadly defined as enabling human-level reasoning and intelligence, but specific implementations have been lacking. In this talk, we will introduce a new Cognitive AI paradigm that tackles the limitations of current Narrow AI. Inspired by insights from Cognitive Science on how human perception applies knowledge and reasoning to reliably interpret images in an unpredictable real world, we will explore how deep neural networks can be embedded within a Cognitive AI framework for medical image segmentation and interpretation. We will discover how Cognitive AI can provide a layer of machine reasoning atop DNNs, applying knowledge where representative training data may be limited and using reasoning to avoid common-sense mistakes.

We will introduce a practical implementation in which a Cognitive AI software framework supports development tasks that currently require human intelligence in the form of a data scientist, accomplishing them more efficiently and optimally through machine reasoning and automatic parameter tuning, as described in the following two talks.

Learning Objectives
  1. Understand the limitations of Narrow AI in medical imaging and the potential of Cognitive AI to address them.
  2. Learn about a Cognitive AI framework to support and improve deep neural networks.
  3. Gain insights from case studies on how a Cognitive AI approach can be applied in medical image segmentation to improve the accuracy and reliability of DNNs.

[09:30] Machine Reasoning: A Practical Implementation (20 min)

Youngwon Choi, PhD

Similar to human intelligence, artificial intelligence (AI) can be built upon learning and reasoning. Machine learning approximates functions from data, while machine reasoning answers new questions from previously acquired knowledge. Machine learning has come to dominate AI, as earlier reasoning approaches built on rules from human experts had limited performance due to the weak feature extractors available at the time and the poor scalability of their knowledge bases. Interestingly, machine reasoning is now reviving as a complement to neural networks: a knowledge base can be dovetailed with strong neural network feature extractors and an improved understanding of graph-based and causal inference.

We will demonstrate an implementation of machine reasoning with concrete examples to show how machine learning and machine reasoning can complement each other, much as they do in human intelligence. The implementation is based on a semantic network knowledge representation and software agents that use the knowledge base to guide neural networks. We will explore the benefits of each component of the implementation through practical examples. The semantic network makes it possible to chain together learnings from previous solutions, rules from domain expertise, neural networks, and other machine learning methods drawn from the knowledge base. We will demonstrate examples in which the semantic network improves neural network performance and solves decision-making problems based on relational concepts, a common challenge in practical AI. The agents evaluate candidates for each node in the semantic network and match them to find the best option, resolving conflicting recommendations from multiple machine learning methods against rules from domain experts and common sense. The practical examples will show that reasoning can reduce the need for human overreading by preventing the AI from making obvious mistakes. Separating nodes, links, and agents in the implementation brings further benefits: a user-friendly interface for adding new rules and designing the semantic network, and a knowledge base that scales with powerful state-of-the-art machine learning methods. A minimal sketch of the node/agent idea is given below.
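To make the node/agent split concrete, here is a minimal Python sketch of the idea under stated assumptions: the class layout, the `confidence` field, and the one-pass resolution are illustrative simplifications, not the SimpleMind implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One concept in the semantic network (e.g. an anatomical structure).
    Candidates are typically CNN-proposed segmentations with a confidence;
    relations hold constraint functions against other named nodes."""
    name: str
    candidates: list = field(default_factory=list)
    relations: dict = field(default_factory=dict)

def agent_score(node, candidate, solution):
    """Reasoning agent: score a candidate by its own confidence plus how well
    it satisfies the node's relational constraints against resolved neighbours."""
    score = candidate["confidence"]
    for other_name, constraint in node.relations.items():
        if other_name in solution:                       # neighbour already resolved
            score += constraint(candidate, solution[other_name])
    return score

def resolve(network):
    """One-pass matching: for every node, keep the candidate that best fits
    the knowledge base, rejecting common-sense violations a CNN alone might make."""
    solution = {}
    for node in network.values():
        best = max(node.candidates,
                   key=lambda c: agent_score(node, c, solution),
                   default=None)
        if best is not None:
            solution[node.name] = best
    return solution
```

In this toy setup, a relational constraint could, for example, penalize a "trachea" candidate that does not sit above an already-resolved "carina" candidate, which is exactly the kind of common-sense check a CNN alone does not enforce.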

Learning Objectives
  1. Understand machine reasoning and how machine learning and machine reasoning can complement each other.
  2. Gain an introduction to a practical implementation of knowledge- and graph-based reasoning well suited to chaining multiple neural networks and resolving conflicts.
  3. Gain insights from case studies on how machine reasoning can boost performance beyond neural networks alone.

[09:50] A Virtual Data Scientist: Automatic Parameter Tuning (20 min)

M. Wasil Wahi-Anwar, MS

In machine learning, each step of preparing, training, and executing the model requires selection and tuning of a set of parameters. Whether it's preprocessing the data, tuning the model's hyperparameters (e.g., a CNN's learning rate, batch size, input size), or refining the output through post-processing, each tunable step can be pivotal to the final performance of the model. This is magnified with machine reasoning, which can involve a multi-node semantic network of chained neural networks, processing steps, semantic knowledge nodes, and logic-based reasoning.

Classically, each node's hyperparameters are chosen by the data scientist, through intuition or a naive, limited grid-based search. We introduce a concept referred to as “automatic parameter tuning” (APT), which provides a systematic way of testing sets of hyperparameters across the entire semantic network and holistically optimizing the collection of nodes in concert with one another.

We present the genetic algorithm as a practical implementation of an intuitive, explainable optimization tool for automatic parameter tuning. Encoding each set of parameters as a binary “chromosome”, the genetic algorithm mirrors nature by assessing the performance of each parameter set and propagating high-performing parameters forward. “Evolution” occurs through random “mutations” of and “crossovers” between parameter-set chromosomes; together with selective chromosome propagation, these operations act as an intelligent search toward an optimal set of parameters.
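As a rough illustration of this search, the self-contained sketch below runs a tiny genetic algorithm over a hypothetical parameter space; the bit layout, parameter ranges, and placeholder fitness function are assumptions for illustration only and are not SimpleMind's actual APT interface.

```python
import random

random.seed(0)

def decode(bits):
    """Map a 12-bit chromosome onto a hypothetical parameter set
    (bit layout and ranges are illustrative only)."""
    lr_idx    = int("".join(map(str, bits[0:4])), 2)           # 0..15
    batch_idx = int("".join(map(str, bits[4:8])), 2)
    thr_idx   = int("".join(map(str, bits[8:12])), 2)
    return {"learning_rate": 10 ** (-4 + 2 * lr_idx / 15),     # 1e-4 .. 1e-2
            "batch_size": 2 ** (2 + batch_idx % 5),            # 4 .. 64
            "threshold": thr_idx / 15}                         # 0 .. 1

def fitness(params):
    """Placeholder objective: in practice this would train/run the whole
    semantic-network pipeline with `params` and return a validation metric."""
    return -abs(params["threshold"] - 0.5)

def evolve(pop_size=20, n_bits=12, generations=30, p_mut=0.05):
    """Genetic search: selection of high scorers, single-point crossover,
    and random bit-flip mutation, repeated over several generations."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda c: fitness(decode(c)), reverse=True)
        parents = scored[: pop_size // 2]                      # selective propagation
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)                  # single-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]  # mutation
            children.append(child)
        pop = parents + children
    return decode(max(pop, key=lambda c: fitness(decode(c))))

print(evolve())   # best parameter set found by the search
```

In a real pipeline, the fitness function would train or execute the full semantic network with the decoded parameters and return a validation metric such as Dice, which is what makes the search holistic across all nodes rather than per-node.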

Overall, APT allows a system to try many different combinations of hyperparameters for each step of an intelligent, machine-reasoning pipeline, finding the combination most conducive to the task at hand — in essence, functioning as a “virtual data scientist.”

Learning Objectives
  1. Understand the need for, and utility of, automatic parameter tuning (APT) in achieving optimal network performance.
  2. Introduce the practical implementation of the genetic algorithm as an APT agent.
  3. Demonstrate practical benefits and improvements in real-world medical applications.

SimpleMind (SM) Framework: Hands-On Workshop (10:10-11:30 GMT+8)

Youngwon Choi, PhD; M. Wasil Wahi-Anwar, MS; John Hoffman, PhD

Attendees will be walked through a hands-on tutorial using SimpleMind (SM), which incorporates the pivotal components of machine reasoning (MR) and automatic parameter tuning (APT) inside a developer-friendly tool for building and fine-tuning semantic networks. The tutorial will go step by step through the building, tuning, and execution of a sample SM model applied to a specific medical task using publicly available medical data. Please bring a laptop to participate!