Learning from deep learning

Deep learning is a machine learning technique that allows computers to do what comes naturally to humans: learn by example. With enough data, computing power, and a well-designed experiment, these artificial intelligence (AI) networks clearly outperform competing techniques. Image analysis is one area with particularly convincing results, and convolutional neural networks (CNNs) have demonstrated high performance in medical diagnostics. The main concern, however, is that these systems appear opaque: the basis of their predictions is not traceable by humans. This “black box” nature prevents us from learning which features the systems have discovered as important.

A rapidly increasing number of publications demonstrate high performance of CNNs in medical diagnostics, yet few of these systems have reached the clinic, an important reason being their “black box” nature. We have recently developed the DoMore-v1 classifier, a deep learning system for predicting patient outcome in colorectal cancer (Skrede et al., Lancet 2020;395:350-360). When independently tested on 1122 patients, the classifier outperformed all current prognostic biomarkers. An intriguing question remains, however: how can neural networks utilize plain microscopic tissue images to predict patient outcome years later?

Recent developments have provided methods that enhance our ability to visualize the image areas of particular importance to a network's predictions. It seems unlikely, however, that a satisfactory understanding can be obtained without supplementing such information with concrete biomedical information, including biochemical measurements at the cellular level. This latter information must be provided in image form, aligned with the images showing the areas of particular importance to outcome predictions. The first work packages in the project therefore develop methods for simultaneously displaying various biochemical markers in pathological images, and further develop tools for identifying the same cells in different images. The next packages aim at adapting and testing visualization methods to make them suitable for revealing the features in pathological images that prediction networks utilize, while the last project activity is to connect important image characteristics with the biomedical markers.
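The project does not specify which visualization methods will be adapted; one widely used family is class-activation mapping (e.g. Grad-CAM), which weights a CNN's convolutional feature maps by their gradient-derived importance and sums them into a heatmap of influential image regions. The following is a minimal NumPy sketch of that combination step only, with toy feature maps and weights standing in for values a real network would produce; all names and numbers here are illustrative assumptions, not part of the DoMore-v1 system.

```python
import numpy as np

def grad_cam_heatmap(feature_maps, channel_weights, out_size):
    """Combine convolutional feature maps into a coarse importance
    heatmap, Grad-CAM style: weight each channel by its importance,
    sum over channels, keep positive evidence, and normalise to [0, 1]."""
    # Weighted sum over channels: (K, H, W) -> (H, W)
    cam = np.tensordot(channel_weights, feature_maps, axes=1)
    cam = np.maximum(cam, 0.0)  # ReLU: keep regions that support the prediction
    # Nearest-neighbour upsample to the input image resolution
    scale = out_size // cam.shape[0]
    cam = np.kron(cam, np.ones((scale, scale)))
    if cam.max() > 0:
        cam = cam / cam.max()   # normalise so the map can be overlaid on the image
    return cam

# Toy example: 4 feature maps of size 7x7, upsampled to a 28x28 heatmap
rng = np.random.default_rng(0)
maps = rng.standard_normal((4, 7, 7))
weights = np.array([0.5, 0.1, -0.3, 0.2])  # stand-in for pooled gradients
heat = grad_cam_heatmap(maps, weights, out_size=28)
print(heat.shape)  # (28, 28)
```

In practice the resulting heatmap would be overlaid on the tissue image, which is exactly the kind of output the project aims to align with cell-level biochemical marker images.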

Through this combined biological and machine learning approach, we intend to provide methods to make our own and similar networks more transparent and thus easier to use for clinicians, as well as to improve our understanding of the biological mechanisms underlying metastatic disease.

This text was last modified: 27.02.2023

Chief Editor: Tarjei S. Hveem, Interim Institute Director