Projects
2023
Teilprojekt A2
(Third Party Funds Group – Sub project)
Overall project: Quantitative diffusionsgewichtete MRT und Suszeptibilitätskartierung zur Charakterisierung der Gewebemikrostruktur
Term: 1. September 2023 - 31. August 2027
Funding source: DFG / Forschungsgruppe (FOR)
End-to-End Deep Learning Image Reconstruction and Pathology Detection
(Third Party Funds Single)
Term: 1. January 2023 - 31. December 2025
Funding source: DFG-Einzelförderung / Sachbeihilfe (EIN-SBH)
The majority of diagnostic medical imaging pipelines follow the same principles: raw measurement data is acquired by scanner hardware, processed by image reconstruction algorithms, and then evaluated for pathology by human radiology experts. Under this paradigm, every step has traditionally been optimized to generate images that are visually pleasing and easy to interpret for human experts. However, raw sensor information that could maximize patient-specific diagnostic information may get lost in this process. This problem is amplified by recent developments in machine learning for medical imaging. Machine learning has been used successfully in all steps of the diagnostic imaging pipeline: from the design of data acquisition to image reconstruction, to computer-aided diagnosis. So far, these developments have been disjointed from each other. In this project, we will fuse machine learning for image reconstruction and for image-based disease localization, thus providing an end-to-end learnable image reconstruction and joint pathology detection approach that operates directly on raw measurement data. Our hypothesis is that this combination can maximize diagnostic accuracy while providing optimal images for both human experts and diagnostic machine learning models.
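As a rough illustration of what such an end-to-end pipeline could look like, the sketch below chains an unrolled reconstruction network with a small detection head so that both are trained jointly from undersampled k-space. All module names, layer sizes, and the loss combination are illustrative assumptions, not the project's actual architecture.

```python
# Illustrative sketch (assumed architecture): joint reconstruction + detection
# trained end-to-end from undersampled k-space. Not the project's actual model.
import torch
import torch.nn as nn


def to_image(kspace_complex):
    """Inverse FFT of complex k-space (B, H, W) -> real 2-channel image (B, 2, H, W)."""
    return torch.view_as_real(torch.fft.ifft2(kspace_complex)).permute(0, 3, 1, 2)


def to_kspace(image_2ch):
    """Real 2-channel image (B, 2, H, W) -> complex k-space (B, H, W)."""
    cplx = torch.view_as_complex(image_2ch.permute(0, 2, 3, 1).contiguous())
    return torch.fft.fft2(cplx)


class ReconBlock(nn.Module):
    """One unrolled iteration: data consistency in k-space plus a learned denoiser."""

    def __init__(self):
        super().__init__()
        self.denoiser = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )

    def forward(self, image, kspace, mask):
        # Keep the acquired k-space samples, re-estimate only the missing ones.
        k_dc = torch.where(mask, kspace, to_kspace(image))
        return to_image(k_dc) - self.denoiser(image)


class EndToEndModel(nn.Module):
    """Unrolled reconstruction followed by a pathology-detection head, trained jointly."""

    def __init__(self, n_iters=4, n_classes=2):
        super().__init__()
        self.blocks = nn.ModuleList(ReconBlock() for _ in range(n_iters))
        self.detector = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, n_classes),
        )

    def forward(self, kspace, mask):
        image = to_image(kspace)            # zero-filled starting point
        for block in self.blocks:
            image = block(image, kspace, mask)
        return image, self.detector(image)  # one computation graph for both outputs

# Joint training could combine an image loss and a detection loss, e.g.
#   recon, logits = model(kspace, mask)
#   loss = l1(recon, target_image) + cross_entropy(logits, pathology_label)
```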
Quantitative diffusionsgewichtete MRT und Suszeptibilitätskartierung zur Charakterisierung der Gewebemikrostruktur
(Third Party Funds Group – Sub project)
Overall project: FOR 5534: Schnelle Kartierung von quantitativen MR bio-Signaturen bei ultra-hohen Magnetfeldstärken
Term: 1. September 2023 - 31. August 2027
Funding source: DFG / Forschungsgruppe (FOR)
This project is part of the Research Unit (FOR) "Schnelle Kartierung von quantitativen MR bio-Signaturen bei ultrahohen Magnetfeldstärken" (rapid mapping of quantitative MR bio-signatures at ultra-high magnetic field strengths). It focuses on extending, accelerating, and improving diffusion and quantitative susceptibility magnetic resonance imaging. The work program is divided into two parts. In the first part, an accelerated protocol is prepared for the clinical projects of the FOR. In the second part, further acceleration and quality improvements are to be achieved. Specifically, we will implement a locally low-rank regularized echo-planar imaging sequence for diffusion-weighted imaging. It exploits data redundancies across acquisitions with multiple diffusion encodings to effectively increase the signal-to-noise ratio and thereby accelerate the acquisition process. The sequence will support essentially arbitrary diffusion encoding schemes (e.g., b-tensor encoding). In a second step, we will develop an interleaved multi-shot version of this sequence to reduce the image distortions that are problematic in echo-planar imaging at 7 Tesla. For quantitative susceptibility mapping (QSM), we will implement a sequence with a stack-of-stars acquisition trajectory. Since the magnitude images of gradient-echo sequences acquired at different echo times exhibit data redundancies comparable to those of diffusion-encoded images, we will likewise use locally low-rank regularization in the image reconstruction. The radial trajectories of this sequence should be well suited for undersampled and thus accelerated image reconstruction. In a second step, we will extend the capabilities of our sequence with quasi-continuous echo-time sampling, in which each spoke has its own optimized echo time. This will enable improved QSM quality when fat is present in the image, as is frequently the case in muscle examinations and in breast imaging. Regarding QSM reconstruction, we will develop deep learning methods to enable high-quality reconstruction from a smaller amount of image data than conventional reconstruction approaches require. We will adapt existing neural networks from lower field strengths to 7 T and extend their capabilities so that we can also integrate respiration-dependent field maps and quasi-continuous echo times into the reconstruction. This project will receive parallel transmission (pTx) methods from the pTx project of the FOR. We will deliver the developed sequences to the clinical projects of the FOR after the first year. In addition, we will transfer essential evaluation and image reconstruction methods to the other projects of the FOR.
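A central ingredient of both work packages is locally low-rank (LLR) regularization across diffusion encodings or echo times. The sketch below is a minimal, hedged illustration of an LLR proximal step (singular-value soft-thresholding of local Casorati matrices); patch size, threshold, and the NumPy-only formulation are illustrative assumptions rather than the project's implementation.

```python
# Illustrative sketch (assumed implementation): locally low-rank proximal step
# applied jointly across diffusion encodings or echo times.
import numpy as np


def llr_prox(images, patch=8, thresh=0.05):
    """Soft-threshold the singular values of local Casorati matrices.

    images: complex array of shape (n_contrasts, H, W), e.g. one image per
            diffusion encoding or per echo time.
    """
    n, h, w = images.shape
    out = images.copy()
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            # Casorati matrix: one row per contrast, one column per patch pixel.
            block = images[:, y:y + patch, x:x + patch].reshape(n, -1)
            u, s, vh = np.linalg.svd(block, full_matrices=False)
            s = np.maximum(s - thresh * s[0], 0.0)   # relative soft threshold
            out[:, y:y + patch, x:x + patch] = ((u * s) @ vh).reshape(n, patch, patch)
    return out


# In an iterative reconstruction, this step would alternate with a data-consistency
# update that re-enforces agreement with the acquired (undersampled) k-space data.
```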
2021
A comprehensive deep learning framework for MRI reconstruction
(Third Party Funds Single)
Term: 1. April 2021 - 31. March 2025
Funding source: National Institutes of Health (NIH)
URL: https://govtribe.com/award/federal-grant-award/project-grant-r01eb029957
Learning an Optimized Variational Network for Medical Image Reconstruction
(Third Party Funds Single)
Term: since 1. June 2021
Funding source: National Institutes of Health (NIH)
URL: https://grantome.com/grant/NIH/R01-EB024532-03
We propose a novel way of reconstructing medical images, rooted in deep learning and computer vision, that models the process by which human radiologists use years of experience from reading thousands of cases to recognize anatomical structures, pathologies, and image artifacts. Our approach is based on the novel idea of a variational network, which embeds a generalized compressed sensing concept within a deep learning framework. We propose to learn a complete reconstruction procedure, including the filter kernels and penalty functions that separate true image content from artifacts (all parameters that normally have to be tuned manually), as well as the associated numerical algorithm described by this variational network. The training step is decoupled from the time-critical image reconstruction step, which can then be performed in near real time without interruption of the clinical workflow. Our preliminary patient data from accelerated magnetic resonance imaging (MRI) acquisitions suggest that our learning approach outperforms currently existing state-of-the-art image reconstruction methods and is robust with respect to the variations that arise in daily clinical imaging. In our first aim, we will test the hypothesis that learning can be performed such that it is robust against changes in data acquisition. In the second aim, we will answer the question of whether it is possible to learn a single reconstruction procedure for multiple MR imaging applications. Finally, we will perform a clinical reader study for 300 patients undergoing imaging for internal derangement of the knee. We will compare our proposed approach to a clinical standard reconstruction. Our hypothesis is that our approach will lead to the same clinical diagnosis and patient management decisions when using a 5-minute exam. The immediate benefit of the project is to bring accelerated imaging to an application with wide public-health impact, thereby improving clinical outcomes and reducing health-care costs. Additionally, the insights gained from the developments in this project will answer the currently most important open questions in the emerging field of machine learning for medical image reconstruction. Finally, given the recent increase of activity in this field, there is a significant demand for a publicly available repository of raw k-space data that can be used for training and validation. Since all data acquired in this project will be made available to the research community, this project is a first step toward meeting this demand.
Public Health Relevance
The overarching goal of the proposal is to develop a novel machine learning-based image reconstruction approach and validate it for accelerated magnetic resonance imaging (MRI). The approach is able to learn the characteristic appearance of clinical imaging datasets, as well as the suppression of artifacts that arise during data acquisition. We will test the hypothesis that learning can be performed such that it is robust against changes in data acquisition, answer the question of whether it is possible to learn a single reconstruction procedure for multiple MR imaging applications, and validate our approach in a clinical reader study for 300 patients undergoing imaging for internal derangement of the knee.
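The variational network described in this project alternates data-consistency gradient steps with learned regularization built from trainable filter kernels and penalty functions. The sketch below is a minimal, hedged illustration of one such unrolled stage; the module names, filter sizes, and the tanh stand-in for the learned penalty derivative are assumptions for illustration, not the funded project's implementation.

```python
# Illustrative sketch (assumed implementation) of a single variational-network stage:
# a data-consistency gradient step plus a learned regularizer built from trainable
# filter kernels; tanh stands in for the learned penalty derivative.
import torch
import torch.nn as nn


class VNStage(nn.Module):
    def __init__(self, n_filters=24, kernel=11):
        super().__init__()
        # Learned analysis filters and a matching synthesis (transpose) operator.
        self.analysis = nn.Conv2d(2, n_filters, kernel, padding=kernel // 2, bias=False)
        self.synthesis = nn.Conv2d(n_filters, 2, kernel, padding=kernel // 2, bias=False)
        self.alpha = nn.Parameter(torch.tensor(0.1))  # learned data-consistency weight

    def forward(self, image, kspace, mask):
        # Gradient of the data term ||mask * (F image - kspace)||^2, mapped back to image space.
        cplx = torch.view_as_complex(image.permute(0, 2, 3, 1).contiguous())
        residual = torch.where(mask, torch.fft.fft2(cplx) - kspace, torch.zeros_like(kspace))
        grad_data = torch.view_as_real(torch.fft.ifft2(residual)).permute(0, 3, 1, 2)
        # Gradient of the learned regularizer: filter, apply the penalty derivative
        # (here a plain tanh as a stand-in), filter back with the synthesis operator.
        grad_reg = self.synthesis(torch.tanh(self.analysis(image)))
        return image - self.alpha * grad_data - grad_reg


# Unrolling a fixed number of such stages, each with its own filters, penalties, and
# step size, gives the learned reconstruction procedure; training fits all of these
# parameters on pairs of undersampled and fully sampled acquisitions.
```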
TR&D 1: Reimagining the Future of Scanning: Intelligent image acquisition, reconstruction, and analysis
(Third Party Funds Single)
Term: since 1. August 2021
Funding source: National Institutes of Health (NIH)
URL: https://grantome.com/grant/NIH/P41-EB017183-07-6366
The broad mission of our Center for Advanced Imaging Innovation and Research (CAI2R) is to bring together collaborative translational research teams for the development of high-impact biomedical imaging technologies, with the ultimate goal of changing day-to-day clinical practice. Technology Research and Development (TR&D) Project 1 aims to replace traditional complex and inefficient imaging protocols with simple, comprehensive acquisitions that also yield quantitative parameters sensitive to specific disease processes. In the first funding period of this P41 Center, our project team led the way in establishing rapid, continuous, comprehensive imaging methods, which are now available on a growing number of commercial magnetic resonance imaging (MRI) scanners worldwide. This foundation will allow us, in the proposed research plan for the next period, to enrich our data streams, to advance the extraction of actionable information from those data streams, and to feed the resulting information back into the design of our acquisition software and hardware. Thanks to developments during our first funding period, we are now in a position to question long-established assumptions about scanner design, originating from the classical imaging pipeline of human radiologists interpreting multiple series of qualitative images. We will reimagine the process of MR scanning, leveraging our core expertise in pulse-sequence design, parallel imaging, compressed sensing, model-based image reconstruction, and machine learning. We will also extend our methods to complex multifaceted data streams, arising not only from MRI but also from Positron Emission Tomography (PET) and other imaging modalities, as well as from diverse arrays of complementary sensors.