Informatik/Technik
Permanent URI for this section: https://epub.uni-luebeck.de/handle/zhb_hl/4
Listing Informatik/Technik by institute/clinic: "Institut für Medizinische Informatik"
Now showing 1 - 17 of 17
- Advanced sensor fusion methods with applications to localization and navigation (2025-03-18). Fetzer, Toni.
  We use sensors to track how many steps we take during the day or how well we sleep. Sensor fusion methods are used to draw these conclusions. A particularly difficult application is indoor localization, i.e., finding a person's position within a building. This is mainly due to the many degrees of freedom of human movement and the physical properties of sensors inside buildings. Suitable approaches to sensor fusion for the purpose of self-localization using a smartphone are the subject of this thesis. To best address the complexity of this problem, a non-linear and non-Gaussian state space must be assumed. For the required position estimation, we therefore focus on the class of particle filters and build a novel generic filter framework on top of it. The special features of this framework are its modular design and its low requirements on the sensor and movement models. In this work, we investigate models for Wi-Fi and Bluetooth RSSI measurements using radio propagation models, the relatively new standard Wi-Fi FTM, which is explicitly designed for localization purposes, the barometer to determine floor changes as accurately as possible, and activity recognition to find out what the pedestrian is doing, e.g., ascending stairs. The human motion is then modeled in a movement model using IMU data. Here we propose two approaches: a regularly tessellated grid graph and an irregularly tessellated navigation mesh. From these we formulate our proposal for an indoor localization system (ILS). However, some fundamental problems of the particle filter lead to critical errors. These include a multi-modal density to be estimated, unbalanced sensor models, and so-called sample impoverishment. Compensating for these errors, or in the best case eliminating them, by advanced sensor fusion methods is the main contribution of this thesis. The most important approach in this context is our adaptation of an interacting multiple model particle filter (IMMPF) to the requirements of indoor localization. This results in a completely new approach to the formulation of an ILS. Using quality metrics, it is possible to dynamically switch between arbitrarily formulated particle filters running in parallel. Furthermore, we explicitly propose several approaches from the field of particle distribution optimization (PDO) to avoid the sample impoverishment problem. In particular, the support filter approach (SFA), which is also based on the IMMPF principle, leads to excellent position estimates even under the most difficult conditions, as extensive experiments show. (A minimal, illustrative particle-filter sketch follows after this list.)
- Architectures and optimisation for learning-based medical image registration (2024). Siebert, Hanna.
- Decision forest variants for brain lesion segmentation (2017). Maier, Oskar.
- Deskriptorlernen in Medizinischen Volumendaten (2021). Blendowski, Maximilian Constantin.
- Direct volume rendering methods for needle insertion simulation (2016). Fortmeier, Dirk.
- Fast image registration for image-guided interventions (2022). Ha, In Young.
- Generalizing deep learning methods for volumetric medical image analysis (2025). Weihsbach, Christian.
  The emergence of volumetric CT and MRI imaging technologies has dramatically improved clinical diagnostics and research, enabling visualization of body parts and organs in three dimensions.
  Deep learning, with its fundamental principles invented in the last century, has become a de facto standard for the automated processing of medical images, supporting clinicians in image interpretation and diagnosis. However, despite their widespread success, deep learning methods often achieve inferior results when applied in clinical practice compared to the training stage. This drop in performance is caused by a shift between the properties of the images used to train the deep learning models and those of the images encountered later at inference time, combined with the models' insufficient generalization capabilities. The shift in data properties to which the models fail to generalize may not be foreseeable, and image differences that are problematic for the deep learning algorithms may be invisible to the human eye and not interpretable even by well-trained radiologists who can reliably diagnose patients' conditions. In this thesis, four methods for volumetric medical imaging are presented that generalize reliably. The thesis investigates in which areas and at which levels generalization for volumetric medical images can be enabled and improved. The developed methods cover various fields of application, such as cardiac, abdominal, spinal, and brain volumetric medical imaging. Generalization was enabled by modeling acquisition processes for cardiac shape reconstruction, by effectively combining generalization and adaptation paradigms to overcome CT-to-MRI image intensity differences, by harnessing image registration in combination with loss-based modifications for generalizing segmentation of brain tumors across differently weighted MRI images, and by model parameter design modifications targeting the inner units of deep learning architectures to infer results reliably from rotated or reflected input data. All methods proved to work even for small-scale datasets with far fewer than one hundred samples, demonstrating the efficiency of the methodological contributions as an alternative to the trend of ever larger datasets and the additional computational effort they require during training. (A simple, illustrative sketch of prediction under reflected inputs follows after this list.)
- KI-gestützte Gewebeanalyse auf Basis der optischen Kohärenztomographie für die Tumorerkennung in der Neurochirurgie (2025). Strenge, Paul.
  Brain tumors place a considerable burden on patients and those around them. Surgery is a central component of therapy, and the complete removal of tumor tissue is decisive for survival. At the same time, the diffuse growth of many tumors makes it difficult to delineate tumor from healthy tissue intraoperatively, since established methods such as MRI or fluorescence microscopy are only reliable to a limited extent. Optical coherence tomography (OCT) offers contact-free, non-invasive imaging with micrometer resolution and represents a promising alternative. This thesis investigates the suitability of OCT for identifying tumor tissue and infiltration zones. It is based on a globally unique dataset of around 700 pixel-wise annotated OCT B-scans, acquired ex vivo during resections in a clinical study with 21 patients. Two OCT systems with different wavelengths and resolutions were used. Histological sections were annotated by neuropathologists and transferred to the corresponding OCT B-scans by means of a shape-based method.
  The analysis began with a comparison of the two systems based on optical tissue properties and a binary classification between healthy and tumorous tissue. While no significant differences between the systems were apparent, white matter could be reliably distinguished from strongly infiltrated white matter (>60%) with an accuracy of 91%. Grey matter, however, showed strong similarities to tumor tissue, which reduced the accuracy to about 60% when additional tissue types were included. To improve on this, structural properties were incorporated and both classical methods and machine learning were applied. Neural networks enabled a classification into three classes (white matter, grey matter, strongly infiltrated white matter). An evidential learning approach made it possible to quantify classification uncertainties. For confident predictions across all methods, a precision and sensitivity of 83% each were achieved. The results demonstrate the potential of OCT for intraoperative tumor detection and lay the groundwork for further clinical research. (A brief, illustrative sketch of evidential uncertainty quantification follows after this list.)
- Multimodal sensor data analysis for the investigation of physical and mental states using machine learning (2024). Irshad, Muhammad Tausif.
- Point clouds and keypoint graphs in 3D deep learning for medical image analysis (2023). Hansen, Lasse.
- Prior-guided 3D deep learning for point cloud analysis in medicine under domain shifts (2024). Bigalke, Alexander.
- Standardisierte Metadatenintegration für die Sekundärnutzung klinischer Daten (2022). Ulrich, Hannes.
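
The sketch below illustrates the basic particle-filter principle referred to in Fetzer's abstract: particles are propagated by a noisy pedestrian motion step, weighted against a Wi-Fi RSSI measurement through a log-distance radio propagation model, and resampled when the effective sample size drops. All names, parameter values, and the single access point are illustrative assumptions; this is a minimal sketch, not the thesis's framework, IMMPF, or PDO methods.

```python
import numpy as np

# Minimal bootstrap particle filter sketch for 2D indoor positioning.
# Hypothetical parameters; illustrates the general principle only.
rng = np.random.default_rng(0)

N = 1000                                        # number of particles
particles = rng.uniform(0, 20, size=(N, 2))     # initial guesses in a 20 m x 20 m area
weights = np.full(N, 1.0 / N)

ap_pos = np.array([5.0, 12.0])                  # assumed access-point position
tx_power, path_loss_exp, sigma_rssi = -40.0, 2.0, 4.0  # log-distance model parameters

def predicted_rssi(pos):
    """Log-distance path loss model: RSSI decays with log10 of the distance."""
    d = np.maximum(np.linalg.norm(pos - ap_pos, axis=-1), 0.1)
    return tx_power - 10.0 * path_loss_exp * np.log10(d)

def step(particles, weights, step_len, heading, rssi_meas):
    # Predict: propagate each particle with a noisy pedestrian step.
    motion = step_len * np.array([np.cos(heading), np.sin(heading)])
    particles = particles + motion + rng.normal(0.0, 0.3, size=particles.shape)

    # Update: weight particles by the likelihood of the measured RSSI.
    residual = rssi_meas - predicted_rssi(particles)
    weights = weights * np.exp(-0.5 * (residual / sigma_rssi) ** 2)
    weights += 1e-300                           # avoid an all-zero weight vector
    weights /= weights.sum()

    # Systematic resampling when the effective sample size gets small,
    # a common countermeasure against sample impoverishment.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        positions = (rng.random() + np.arange(N)) / N
        cum = np.cumsum(weights)
        cum[-1] = 1.0                           # guard against floating-point round-off
        idx = np.searchsorted(cum, positions)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

    return particles, weights

# One illustrative filter step with a fabricated measurement.
particles, weights = step(particles, weights, step_len=0.7, heading=0.3, rssi_meas=-60.0)
print("position estimate [m]:", np.average(particles, axis=0, weights=weights))
```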
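The Weihsbach abstract mentions model design modifications that make inference reliable under rotated or reflected volumetric inputs. The sketch below does not reproduce that architecture-level approach; it only illustrates, with placeholder names (`dummy_model`, `symmetrised_prediction`), how the predictions of an arbitrary volumetric model can be probed or averaged over axis reflections at test time.

```python
import itertools
import numpy as np

def dummy_model(volume):
    # Stand-in model: location-dependent scores, so it is NOT reflection-invariant.
    return np.array([volume[:16].mean(), volume[:, :16].mean(), volume[..., :16].mean()])

def symmetrised_prediction(model, volume):
    """Average the model's prediction over all eight axis reflections of a 3D volume."""
    preds = []
    for flips in itertools.product([False, True], repeat=3):
        axes = tuple(ax for ax, flip in enumerate(flips) if flip)
        transformed = np.flip(volume, axis=axes) if axes else volume
        preds.append(model(transformed))
    return np.mean(preds, axis=0)

volume = np.random.default_rng(1).normal(size=(32, 32, 32)).astype(np.float32)
print(symmetrised_prediction(dummy_model, volume))
```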
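Strenge's abstract notes that an evidential learning approach was used to quantify classification uncertainty for the three OCT tissue classes. The sketch below shows the commonly used Dirichlet-based formulation of evidential classification (per-class evidence, expected probabilities, and an explicit uncertainty mass); the class names and logit values are illustrative assumptions, and the code is not taken from the thesis.

```python
import numpy as np

CLASSES = ["white matter", "grey matter", "infiltrated white matter"]

def evidential_output(logits):
    evidence = np.logaddexp(0.0, logits)   # softplus keeps per-class evidence non-negative
    alpha = evidence + 1.0                 # Dirichlet concentration parameters
    strength = alpha.sum()
    prob = alpha / strength                # expected class probabilities
    uncertainty = len(alpha) / strength    # K / S: mass not committed to any class
    return prob, uncertainty

# A confident prediction (large evidence) vs. an ambiguous one (little evidence).
for logits in (np.array([8.0, 0.5, -1.0]), np.array([0.2, 0.1, 0.0])):
    prob, u = evidential_output(logits)
    label = CLASSES[int(np.argmax(prob))]
    print(f"predicted: {label:26s} p={prob.round(2)} uncertainty={u:.2f}")
```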