First Hessian conference on AI-ready health care

23rd of November 2020

hessian.AI health.Care was the inaugural Hessian virtual conference on AI for health care. It took place on 23 November 2020 from 08:45 to 16:30 CET. To keep all participants safe during the COVID-19 pandemic, the conference was held as an online meeting via Zoom. hessian.AI health.Care was organized by members of the Medical Computing (MEC) group at GRIS, TU Darmstadt.


General Chair: Anirban Mukhopadhyay
Organization Team: Camila Gonzalez, Moritz Fuchs, Henry Krumb

Videos & Presentations

We want to sincerely thank everyone who made this event possible. In case you missed the conference or want to enjoy the talks once again, you can find the recordings on YouTube:

to the Videos

Yuri Tolkach's slides can be downloaded here [19.4 MB].



08:45 - 09:00 CET

1. Theory

09:00 - 11:15 CET

Chair: Arjan Kuijper, TU Darmstadt
  • 09:00: Bram van Ginneken:
    AI for medical image analysis: keeping healthcare affordable
  • 09:45: Kristian Kersting:
    Making deep neural networks right for the right scientific reasons
  • 10:30: Sotirios Tsaftaris:
    Doing more with less by better data representations

2. Radiology

11:30 - 13:45 CET

Chair: Dieter W. Fellner, TU Darmstadt
  • 11:30: Daniel Pinto dos Santos:
    A thousand words describe a picture – radiology reporting and AI
  • 12:15: Andreas Bucher:
    What to expect from AI in radiology?
  • 13:00: Jayashree Kalpathy-Cramer:
    Bias, brittleness and generalizability issues of deep learning in medical imaging

3. Emerging Applications

14:00 - 16:15 CET

Chair: Visvanathan Ramesh, Goethe Universität Frankfurt
  • 14:00: Dan Stoyanov:
    Towards Understanding Surgical Scenes Using Computer Vision
  • 14:45: Sriraam Natarajan:
    Human-allied AI for health care
  • 15:30: Yuri Tolkach:
    Application of deep learning in diagnostic pathology: own experience and perspectives


16:15 - 16:30 CET


Bram van Ginneken

Radboud University Medical Center

AI for medical image analysis: keeping healthcare affordable

The stimulating research vision of the Hessian Center for Artificial Intelligence describes the first, second, and third wave of AI. In this talk, I will focus on where we stand now in applying the systems that can be built in the second wave of AI to medical image analysis in radiology, ophthalmology, and pathology. While the research vision states that third-wave AI will be more than 'just tools' and will function as 'colleagues' to humans, I will argue that the tools of the second wave are already more than helpers to humans. Second-wave AI systems powered by deep learning will function autonomously or perform tasks that humans will never be able to do, and, as such, will keep high-quality healthcare affordable.

Kristian Kersting

Technische Universität Darmstadt

Making deep neural networks right for the right scientific reasons

Deep neural networks have shown excellent performance in many real-world applications such as plant phenotyping and medical imaging. Unfortunately, they may show "Clever Hans"-like behaviour, exploiting confounding factors within datasets to achieve high prediction rates. Rather than discarding the trained models or the dataset, we show that interactions between the learning system and the human user can correct the model. Specifically, we revise the model's decision process by adding annotated masks during the learning loop and penalizing decisions made for the wrong reasons. In this way, the decision strategies of the machine can be improved to focus on relevant features, without considerably dropping predictive performance. This is based on joint work with Patrick Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Franziska Herbert, Xiaoting Shao, Hans-Georg Luigs, and Anne-Katrin Mahlein.
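As a rough, hedged illustration of the idea of penalizing decisions made for the wrong reasons, the sketch below trains a toy logistic model on synthetic data containing a confounding feature, and adds a penalty on the model's reliance on features an annotator has masked as irrelevant. All data, parameter choices, and the linear model are invented for illustration; they are not the method presented in the talk.

```python
import numpy as np

# Toy "Clever Hans" setup: feature 0 is a confounder that tracks the
# label almost perfectly in training; feature 1 is the true, noisy signal.
rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n).astype(float)
X = np.column_stack([
    y + 0.01 * rng.normal(size=n),  # confounder
    y + 0.5 * rng.normal(size=n),   # real but noisy signal
])
mask = np.array([1.0, 0.0])  # annotation: feature 0 is a "wrong reason"

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lam, steps=2000, lr=0.1):
    """Logistic regression; `lam` weights the explanation penalty."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        grad_w = X.T @ (p - y) / len(y)
        # For a linear logit the input gradient is proportional to w, so
        # penalising the masked weights implements the explanation penalty.
        grad_w += 2 * lam * mask * w
        w -= lr * grad_w
        b -= lr * np.mean(p - y)
    return w

w_plain = train(X, y, lam=0.0)  # leans heavily on the confounder
w_rrr = train(X, y, lam=1.0)    # shifts weight to the true signal
```

With the penalty active, the weight on the annotated confounder shrinks towards zero while the weight on the genuine signal takes over, without retraining on new data.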
Kristian Kersting is a Full Professor in the Computer Science Department of TU Darmstadt, Germany. He is the head of the Artificial Intelligence and Machine Learning (AIML) lab, a member of the Centre for Cognitive Science, a member of the ELLIS Unit Darmstadt, and the founding co-director of the Hessian Center for Artificial Intelligence (hessian.AI). After receiving his Ph.D. from the University of Freiburg in 2006, he was with MIT, Fraunhofer IAIS, the University of Bonn, and TU Dortmund University. His main research interests are statistical relational artificial intelligence (AI), deep (probabilistic) programming, and deep probabilistic learning. Kristian has published over 180 peer-reviewed technical papers and co-authored a book on statistical relational AI. He is a Fellow of the European Association for Artificial Intelligence (EurAI), a Fellow and Faculty member of the European Laboratory for Learning and Intelligent Systems (ELLIS), and a key supporter of the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE). He received the inaugural German AI Award (Deutscher KI-Preis) 2019 as well as several best paper awards, a Fraunhofer Attract research grant, and the EurAI (formerly ECCAI) AI Dissertation Award 2006 for the best Ph.D. thesis in the field of Artificial Intelligence in Europe. He is (past) co-chair of the scientific program committees of UAI 2017, ECML PKDD 2013 and ECML PKDD 2020. He is the founding Editor-in-Chief of Frontiers in Machine Learning and AI and is (past) action editor of IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), the Journal of Artificial Intelligence Research (JAIR), the Artificial Intelligence Journal (AIJ), Data Mining and Knowledge Discovery (DAMI), and the Machine Learning Journal (MLJ).

Sotirios Tsaftaris

University of Edinburgh

Doing more with less by better data representations

Healthcare faces a perfect storm, with AI/ML offered as a solution to relieve at least one bottleneck: data analysis. Indeed, the detection of disease, the segmentation of anatomy, and other classical image analysis tasks have seen incredible improvements due to deep learning. Yet these advances need lots of data: for every new task, new imaging scan, or new hospital, more training data are needed. From the broad body of our work, this talk will focus on learning better representations and how they can help address the particular challenges that healthcare applications introduce. I will present a framework for several analysis tasks that can do more (tasks) with less (need for annotated data). Within a multi-task learning setting, this framework benefits from trying to reconstruct the data, while also taking advantage of spatial and temporal correlations in imaging studies and of information on biomarkers in health records (stored in PACS, EHR). I will then present our solution to emulating how radiologists review images from the same patient exam: a solution that learns to compare and combine information from images without the need for registration or slice pairing. Finally, time permitting, I will discuss how we can learn to predict future health state in a proxy model of brain ageing without needing longitudinal data. Applications in cardiovascular, brain, and abdominal imaging will be shown.
Prof. Sotirios A. Tsaftaris, or Sotos (Twitter: @STsaftaris), is currently the Canon Medical/Royal Academy of Engineering Research Chair in Healthcare AI, and Chair (Full Professor) in Machine Learning and Computer Vision at the University of Edinburgh (UK). He is also a Turing Fellow with the Alan Turing Institute. Previously he held faculty positions with the IMT Institute for Advanced Studies Lucca (Italy) and Northwestern University (USA). He has published extensively, particularly in interdisciplinary fields, with more than 140 journal and conference papers. His research interests are machine learning, computer vision, image analysis and processing.

Daniel Pinto dos Santos

Uniklinik Köln

A thousand words describe a picture – radiology reporting and AI

This talk will give a short introduction to how radiologists report on radiological exams and how they communicate their findings to referring clinicians. The advantages and pitfalls of narrative reporting in the context of clinical routine and AI development will be discussed, as well as whether and how structured reporting could change them. Some examples will highlight how structured report data can be used to develop AI algorithms, and why there is a fundamental problem with "ground truth" in radiology.
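To make the contrast concrete, here is a minimal, entirely hypothetical sketch (the report schema, field names, and values are invented, not from the talk) of how structured report data map directly onto machine-readable training labels, whereas the same information in a narrative report would first have to be extracted with NLP:

```python
# Hypothetical structured report; every field name here is illustrative.
report = {
    "exam": "chest CT",
    "findings": [
        {"organ": "lung", "observation": "nodule", "size_mm": 7, "lobe": "RUL"},
        {"organ": "pleura", "observation": "effusion", "size_mm": None, "side": "left"},
    ],
    "impression": "pulmonary nodule, follow-up recommended",
}

# Structured fields can be queried directly to build labels for training.
labels = {
    "has_nodule": any(f["observation"] == "nodule" for f in report["findings"]),
    "max_nodule_mm": max(
        (f["size_mm"] for f in report["findings"]
         if f["observation"] == "nodule" and f["size_mm"] is not None),
        default=0,
    ),
}
print(labels)  # {'has_nodule': True, 'max_nodule_mm': 7}
```

The same two labels derived from a free-text impression would depend on an NLP pipeline whose errors propagate into the "ground truth" of any downstream model.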

Andreas Bucher

Universitätsklinikum Frankfurt am Main

What to expect from AI in radiology?

This presentation will focus on the potential of AI applications and their adoption in radiology. How far have we already come towards the vision of AI-enabled radiology? In this talk we will take a look at pressing issues standing in the way of widespread adoption. Some answers can be drawn from the history of the field; others present themselves in the current development of these applications.

Jayashree Kalpathy-Cramer

Harvard Medical School

Bias, brittleness and generalizability issues of deep learning in medical imaging

Deep learning has great potential in medical imaging, but concerns about bias, model brittleness and generalizability remain. We will discuss some of the opportunities and challenges in the use of these techniques in medical imaging, as well as the "final mile" challenge of getting the best algorithms into the hands of clinicians.

Dan Stoyanov

University College London

Wellcome/EPSRC Centre for Interventional & Surgical Sciences (WEISS)

Towards Understanding Surgical Scenes Using Computer Vision

Digital cameras have dramatically changed interventional and surgical procedures. Modern operating rooms utilize a range of cameras to minimize invasiveness or provide vision beyond human capabilities in magnification, spectra or sensitivity. Such surgical cameras provide the most informative and rich signal from the surgical site containing information about activity and events as well as physiology and tissue function. This talk will highlight some of the opportunities for computer vision in surgical applications and the challenges in translation to clinically usable systems.

Sriraam Natarajan

University of Texas

Human-allied AI for health care

Historically, Artificial Intelligence has taken either a symbolic route for representing and reasoning about objects at a higher level, or a statistical route for learning complex models from large data. To achieve true AI in complex domains such as healthcare, it is necessary to make these different paths meet and enable seamless human interaction. First, I will introduce methods for learning from rich, structured, complex and noisy data. One of the key attractive properties of the learned models is that they use a rich representation for modeling the domain that potentially allows for seamless human interaction. I will then present recent progress that allows for more natural human interaction, where the human input is taken as “advice” and the learning algorithm combines this advice with data. I will present these algorithms in the context of several healthcare problems -- learning from electronic health records, clinical studies, and surveys -- and demonstrate the value of involving experts during learning.
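As a rough sketch of the advice-taking idea (the data, features, and penalty form below are all invented for illustration; this is not the speaker's actual method), the toy example combines a small, misleading sample with an expert's advice that the coefficient of a hypothetical "exercise" feature should not be positive. The advice enters the loss as a one-sided penalty:

```python
import numpy as np

# Tiny, hand-made sample (hypothetical): "age" truly drives risk, but in
# these few patients "exercise" happens to correlate positively with risk.
X = np.array([
    [ 1.0,  0.5], [ 0.8,  0.7], [ 1.2,  0.2], [ 0.9,  0.6],   # y = 1
    [-1.0, -0.5], [-0.8, -0.6], [-1.1, -0.2], [-0.9, -0.4],   # y = 0
])
y = np.array([1., 1., 1., 1., 0., 0., 0., 0.])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lam, steps=3000, lr=0.1):
    """Logistic regression; `lam` weights the expert-advice penalty."""
    w, b = np.zeros(2), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)
        grad_w = X.T @ (p - y) / len(y)
        # Advice "exercise should not increase risk": one-sided quadratic
        # penalty lam * max(w_exercise, 0)**2 added to the loss.
        grad_w[1] += 2 * lam * max(w[1], 0.0)
        w -= lr * grad_w
        b -= lr * np.mean(p - y)
    return w

w_data = train(X, y, lam=0.0)    # follows the misleading correlation
w_advice = train(X, y, lam=2.0)  # advice suppresses the spurious weight
```

Because the advice is soft, it steers the learner away from the spurious correlation in the small sample without overriding the data wholesale, which is the spirit of combining expert advice with learning from data.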
Dr. Sriraam Natarajan is a Professor and the Director of the Center for ML in the Department of Computer Science at the University of Texas at Dallas, and a RBDSCAII Distinguished Faculty Fellow at IIT Madras. He was previously an Associate Professor and earlier an Assistant Professor at Indiana University and at the Wake Forest School of Medicine, a post-doctoral research associate at the University of Wisconsin-Madison, and received his PhD from Oregon State University. His research interests lie in the field of Artificial Intelligence, with an emphasis on Machine Learning, Statistical Relational Learning and AI, Reinforcement Learning, Graphical Models, and Biomedical Applications. He has received the Young Investigator award from the US Army Research Office, an Amazon Faculty Research Award, an Intel Faculty Award, a XEROX Faculty Award, a Verisk Faculty Award, and the IU Trustees' Teaching Award from Indiana University. He is the program co-chair of the SDM 2020 and ACM CoDS-COMAD 2020 conferences. He is the chief editor of the Frontiers in ML and AI journal, an associate editor of the MLJ, JAIR and DAMI journals, and the electronic publishing editor of JAIR.

Yuri Tolkach

Uniklinik Köln

Application of deep learning in diagnostic pathology: own experience and perspectives

Diagnostic pathology is currently undergoing a revolution. There is a shift towards digital pathology, whereby slides are digitized and evaluated on a computer screen rather than under a microscope. Pathology archives are becoming a source of thousands of digital images, valuable "big data" that forms the foundation of computational pathology as a discipline. Deep learning is a crucial technology, able to assist physicians in routine diagnostic tasks and to build prognostic and predictive medical tools based on the fusion of pathology data and clinical information. I will discuss our own experience in creating diagnostic tools and the short-term perspectives of using AI in pathology.