Informatik/Technik
Permanent URI for this section: https://epub.uni-luebeck.de/handle/zhb_hl/4
Recent publications
Item: Experimental investigation and validation of CFD simulations of steady flow in stenosis and pharynx using 2D PC-MRI and 4D flow MRI (2025). Gurumurthy, Pragathi.

Obstructive sleep apnea (OSA) is a sleep disorder characterized by repeatedly disrupted breathing caused by partial or complete closure of the upper airway, despite the effort to breathe. The disorder not only affects patients socially, through daytime sleepiness and fatigue, but has also been linked to several heart conditions. OSA is caused by a combination of anatomical variations, impaired neuromuscular function, ventilatory instability, and premature awakening; due to the complex and heterogeneous nature of the disease, its etiology is not well understood. Several invasive and non-invasive treatments are available, such as uvulopalatopharyngoplasty, maxillomandibular advancement, upper airway stimulation, continuous positive airway pressure, and dental appliances, but these have moderate to poor success rates. Identifying the factors contributing to OSA and developing cause-driven treatments is not possible with existing methods. More recently, therefore, numerical simulation, i.e. computational fluid dynamics (CFD), has been used to simulate physiological flow and observe the flow phenomena, in order to help identify the cause of OSA and derive an effective treatment plan. However, simulation results depend strongly on the mathematical model, boundary conditions, grid size, and other factors, so a comparison with experimental results is important to validate their accuracy. In-vitro velocity measurements based on phase-contrast magnetic resonance imaging (PC-MRI) provide a powerful, non-invasive way to acquire spatially registered fluid velocities. This thesis proposes 2D PC-MRI and 4D flow MRI as investigative and validation tools for CFD of fluid flow in the upper airway during OSA.

In the current work, two models are investigated. The first is an idealized rigid axisymmetric stenosis model with 75% occlusion, representing the arterial narrowing that results from plaque build-up and serving as a simplified version of the occlusion occurring in the anatomically complex pharynx. It is primarily used to validate the MRI techniques against previously published laser Doppler anemometry (LDA) data and to study the effects and progression of atherosclerosis. The second is an anatomically accurate, patient-individual pharynx model of an OSA patient, used to investigate the flow dynamics in the upper airway during OSA with the validated 2D PC-MRI and 4D flow MRI; the results serve to understand the causes and effects of OSA. Both 2D PC-MRI and 4D flow MRI are used to measure velocity in both models under different boundary conditions: the stenosis model is investigated under laminar and turbulent flow conditions, and the pharynx model is studied at average inspiratory and expiratory flow rates. Within a statistical framework, the velocity measurements in the stenosis and pharynx models are compared with CFD results to validate the numerical simulations. In addition, 4D flow MRI is used to quantitatively examine further pathophysiological parameters such as wall shear stress and recirculation patterns, which are validated against published data and compared with 2D PC-MRI and CFD data. The role of these parameters in atherosclerosis and OSA is also discussed.
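A generic sketch of the kind of voxel-wise agreement analysis such a validation implies (not the author's actual pipeline): a Bland-Altman comparison of two co-registered velocity fields, here filled with synthetic stand-in data.

```python
import numpy as np

def bland_altman(mri_vel, cfd_vel):
    """Voxel-wise agreement between two co-registered velocity fields.

    Returns the mean bias and the 95% limits of agreement, the usual
    summary statistics of a Bland-Altman analysis.
    """
    diff = mri_vel - cfd_vel              # voxel-wise differences
    bias = diff.mean()                    # systematic offset
    loa = 1.96 * diff.std(ddof=1)         # 95% limits of agreement
    return bias, bias - loa, bias + loa

# Synthetic stand-ins for segmented velocity maps (m/s).
rng = np.random.default_rng(0)
cfd = rng.uniform(0.0, 2.0, size=(64, 64))
mri = cfd + rng.normal(0.0, 0.05, size=cfd.shape)   # "MRI" = CFD + noise

bias, lo, hi = bland_altman(mri, cfd)
print(f"bias = {bias:.3f} m/s, limits of agreement = [{lo:.3f}, {hi:.3f}] m/s")
```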
Item: IT-Sicherheit in der Kritischen Infrastruktur BOS-Leitstelle (2025). Christiansen, Jens.

Item: Action regulation in energy-efficient driving (2025). Moll, Vivien Esther.

Battery electric vehicles (BEVs) offer substantial potential for reducing emissions but introduce cognitive and behavioural challenges for energy-efficient driving. In contrast to internal combustion engine vehicles (ICEVs), energy flow in BEVs is less tangible, and relevant consumption patterns are more complex to perceive, predict, and interpret. Current ecodriving research often lacks cognitive grounding, a focus on the specific challenges in BEVs, and a thorough analysis beyond performance measures. This dissertation addresses the need for user-centred, cognitively aligned feedback by examining how different feedback approaches affect drivers' perception, judgements, behaviour, knowledge, and perceived support of action regulation and of the mental model of ecodriving. The theoretical foundation integrates adaptive control and action regulation models, cognitive information processing, and the role of mental models and perceived capability in goal-directed behaviour. It posits that energy-efficient driving with BEVs requires continuous situational adaptation and knowledge-based reasoning. Four empirical studies were conducted using experimental designs combined with qualitative and quantitative methods across diverse settings, including an online experiment, driving simulations, and real-world driving. Each study assessed both subjective and objective indicators of action regulation and knowledge.

Study 1 (N = 55, online experiment) laid the conceptual foundation by exploring how drivers interpret typical consumption feedback derived from simplified acceleration dynamics. Framed in terms of bounded rationality, the results revealed a systematic overestimation of energy use, particularly for high and brief maximum consumption values. There was no significant correlation between the correct energy-efficiency ranking and the ranking derived from participants' estimations. The study also identified interindividual differences in heuristic information processing, showing that both stimulus properties and cognitive predispositions shape perception. Study 2 (N = 63, driving simulator study) focused on knowledge gaps and their behavioural implications. It contrasted three feedback approaches: a baseline without support, a consumption trace display, and a recommendation system indicating the optimal speed. Drivers frequently relied on incomplete or inaccurate conceptions of energy efficiency. While those using the recommendation system felt less uncertain, this confidence did not translate into better performance or more accurate knowledge. However, their tendency to verbalise more vehicle- and environment-related information suggests a more active reasoning process regarding energy-efficient driving. Study 3 (N = 50, field study) built on these findings and introduced a comprehension-based approach with pre-drive tip lists. When behavioural strategies were paired with technical reasoning, drivers reported higher perceived knowledge, stronger support of action regulation and the mental model, and better driving performance. This highlights the potential of explanation-based feedback to improve effectiveness, knowledge, and user experience. Study 4 (N = 112, driving simulator study) extended this approach into real-time driving by integrating elaborated auditory ecodriving tips into a recommendation system. This combined approach significantly improved driving performance and strengthened perceived mental-model support, although cognitive load, information acquisition, and subjective information processing awareness were negatively influenced.

The dissertation offers novel instruments and methods to evaluate ecodriving feedback. Key contributions include a new experimental paradigm for assessing dynamic magnitude perception and two new constructs, perceived support of action regulation and perceived support of the mental model, enabling a finer-grained evaluation of action regulation quality beyond conventional usability or satisfaction metrics. Furthermore, existing items for measuring perceived ecodriving knowledge were revised based on theoretical considerations. Finally, an AI-assisted method was employed to systematically analyse verbalised driving strategies and their technical explanations, demonstrating scalable content analysis. Theoretically, the dissertation integrates psychological frameworks with an emphasis on mental models and information processing, provides a systematic literature review, and links various feedback approaches to cognitive processing and behavioural regulation. Moreover, it extends established cognitive biases by identifying a novel bias specific to dynamic data visualisation. Empirically, it demonstrates that comprehension-oriented feedback can improve energy-efficient behaviour, deepen understanding, and enhance perceived support, especially when it explains behavioural strategies and clarifies causal relationships. The practical implications are synthesised into design guidelines for future feedback systems in BEVs and beyond. The innovations in this dissertation extend beyond the context of BEVs. Action regulation in complex and dynamic systems, such as aviation, industrial control, or AI-assisted decision-making (especially in light of the growing role of generative, speech-based AI), can benefit from these findings. When users must form accurate mental models or interpret raw data in real time, feedback should explain mechanisms and facilitate information analysis rather than merely presenting outcomes. This dissertation lays the groundwork for future research on cognitively aligned feedback systems that foster effective action regulation, adequate mental models, and user experience.
Item: Ein Hyperthermie-Einsatz zur Integration in ein präklinisches MPI-System mit Erweiterungsraum für Zusatzausstattung (2025-10-13). Behrends, André.

Item: Altersgerechte Technikentwicklung (2025). Volkmann, Torben.

Ubiquitous interaction with digital technologies that everyone can understand is one of the central challenges of human-computer interaction research. The participation of all groups of society is becoming increasingly important in order to benefit from the advantages of digitalization. Older adults in particular often face barriers in dealing with digital technologies, which is why acquiring digital competencies and promoting lifelong learning are crucial for their participation. This requires not only technical solutions but also societal change, in order to reduce the digital divide and to make digital technologies more accessible to all population groups, especially in an ageing society.

This work aims to strengthen both the digital participation of older adults and democratic principles such as inclusion and equality in the development process. By actively involving older adults in the development of digital technologies, their specific needs and preferences are meant to flow into the design of the solutions. The participatory approach reduces the digital divide and fosters an inclusive society in which all population groups, regardless of age or prior digital experience, can benefit equally from advancing digitalization. A concrete example of this participatory approach is the Historytelling system, which enables older adults to record and share their life stories digitally. To answer the research question, four central results are presented that describe the system's development process and design principles. First, an extended technology acceptance model specifically for older adults is presented. Second, the development of design guidelines is described that take age-related changes into account and are applied in the Historytelling system. Third, an agile, participatory technology development process is described that supported the development of the Historytelling system. Fourth, a reflection framework is developed that systematically classifies the actors, methods, and goals of participatory technology development processes; building on it, a reflection tool was created and used to classify the method applications of the Historytelling system's development. Overall, this work thus makes an important contribution to the design of inclusive digital technologies and offers an approach that promotes the participation of older adults while contributing to digital inclusion in an ageing society.
Item: KI-gestützte Gewebeanalyse auf Basis der optischen Kohärenztomographie für die Tumorerkennung in der Neurochirurgie (2025). Strenge, Paul.

Brain tumors place a considerable burden on patients and their families. Surgery is a central component of therapy, and the complete removal of tumor tissue is decisive for survival. At the same time, the diffuse growth of many tumors makes the intraoperative delineation from healthy tissue difficult, as established methods such as MRI or fluorescence microscopy are only reliable to a limited extent. Optical coherence tomography (OCT) offers contact-free, non-invasive imaging with micrometer resolution and is a promising alternative. This work investigates the suitability of OCT for identifying tumor tissue and infiltration zones. It is based on a globally unique dataset of around 700 pixel-wise annotated OCT B-scans, acquired ex vivo during resections in a clinical study with 21 patients. Two OCT systems with different wavelengths and resolutions were used. Histological sections were annotated by neuropathologists and transferred to the corresponding OCT B-scans using a shape-based method.

The analysis began with a comparison of the two systems based on optical tissue properties and a binary classification between healthy and tumorous tissue. While no significant differences between the systems were found, white matter could be reliably distinguished from heavily infiltrated white matter (>60%) with an accuracy of 91%. Gray matter, however, showed strong similarities to tumor tissue, which reduced the accuracy to around 60% when additional tissue types were included. To improve on this, structural properties were incorporated, and both classical methods and machine learning were applied. Neural networks enabled a classification into three classes (white matter, gray matter, heavily infiltrated white matter). With an evidence-based learning approach, classification uncertainties could be quantified. For confident predictions across all methods, a precision and a sensitivity of 83% each were achieved. The results demonstrate the potential of OCT for intraoperative tumor detection and lay a foundation for further clinical research.
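The evidence-based learning approach is not spelled out in the abstract; one common formulation of evidential uncertainty for a classifier (which may well differ from the dissertation's) treats non-negative network outputs as parameters of a Dirichlet distribution. A minimal PyTorch sketch:

```python
import torch
import torch.nn.functional as F

def dirichlet_uncertainty(logits):
    """Evidential uncertainty for a K-class classifier head.

    Interprets non-negative network outputs as Dirichlet evidence;
    the uncertainty approaches 1 when the total evidence is low.
    """
    evidence = F.softplus(logits)            # e_k >= 0
    alpha = evidence + 1.0                   # Dirichlet parameters
    strength = alpha.sum(dim=-1, keepdim=True)
    prob = alpha / strength                  # expected class probabilities
    k = logits.shape[-1]
    uncertainty = k / strength.squeeze(-1)   # in (0, 1]
    return prob, uncertainty

# Hypothetical three-class output (white matter, gray matter, infiltrated).
logits = torch.tensor([[2.0, -1.0, 0.5]])
prob, u = dirichlet_uncertainty(logits)
# Pixels with u above a chosen threshold would be flagged as "unsure"
# rather than counted as confident predictions.
```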
Item: Ventricular tachycardia target registration and cardiac motion estimation for stereotactic arrhythmia radioablation (2025). Xie, Jingyang.

Ventricular tachycardia (VT) is a severe, life-threatening arrhythmia originating in the ventricles, potentially causing sudden cardiac death. Stereotactic arrhythmia radioablation (STAR) is a novel, non-invasive bailout treatment option for refractory VT. The paramount goal of STAR is to precisely deliver focused high-dose radiation beams to the VT targets in the heart ventricles while minimizing exposure of the surrounding organs at risk. As a novel therapeutic approach, STAR presents several challenges, including VT target transfer from the electroanatomical mapping (EAM) system to the radiation treatment planning system (TPS), as well as cardiac motion estimation of the cardiac clinical target volume (CTV) and the American Heart Association (AHA) 17 segments of the left ventricle (LV) for motion management. On the one hand, unlike typical radiotherapy tumors, which are easily identifiable in computed tomography (CT) scans, the VT substrate is primarily characterized by its electrophysiological properties, which are typically determined through an electrophysiological study using a three-dimensional (3D) EAM procedure. Studies show that freehand delineation of the VT target region on the treatment planning CT, as defined on the EAM surface, has poor inter-observer consistency, even among experienced electrophysiologists and radiation oncologists, and clinically relevant errors have been reported with this freehand registration approach. Practical VT target registration methods are therefore crucial for accurately transferring the VT target from the EAM system to the treatment planning imaging data. On the other hand, as a moving organ, the heart undergoes both respiratory and cardiac motion. During STAR treatments, respiratory motion can be managed effectively with gating, deep-inspiration breath-hold, or robotic tracking techniques. Cardiac motion, however, particularly the movement of the cardiac CTV and the 17 LV segments, poses a significant challenge for the precise definition of the cardiac internal target volume (ITV): it can lead to misalignments, which may reduce treatment effectiveness and increase the risk of harm to nearby organs at risk via dose wash-out. Estimating cardiac motion is essential for defining an appropriate cardiac ITV margin, thereby enhancing the effectiveness of STAR and patient outcomes.
The aim of this dissertation is to investigate four main aspects in the field of STAR: (1) practical methods for VT target registration, (2) validation of these methods using real-world VT patient data, (3) accuracy assessment of target registration methods in the absence of ground truth, and (4) a patient- and segment-specific cardiac motion estimation method. To address (1), a software tool was developed that includes three practical semi-automatic VT target registration methods: AHA 17-segment model registration, 3D-3D registration, and 2D-3D registration. The AHA 17-segment model registration method divides the LV myocardium structure contoured from cardiac CT into 17 segments according to the AHA 17-segment model, enabling the assessment of the targeted LV segment(s) and follow-up studies. The 3D-3D registration method reads vendor-specific EAM raw data and transfers the 3D VT ablation points to the 3D LV contours with respect to the treatment planning imaging data. The 2D-3D registration method is a versatile approach that supports any EAM system and enables the transfer of the VT target region marked on 2D EAM screenshots in standard anatomical views to the 3D LV contours with respect to the treatment planning imaging data. These three registration methods are semi-automatic rather than fully automatic due to the strict accuracy requirements of clinical applications: fully automatic registration may not be sufficiently reliable, as the LV and aorta structures are derived from different modalities, which can introduce inaccuracies and data incompleteness, making it unsuitable for clinical use. In contrast, the proposed semi-automatic methods have demonstrated practical feasibility on real-world STAR datasets. They provide the necessary flexibility, allowing clinicians to refine the registration process based on their expertise and the specific characteristics of each STAR case, ensuring both accuracy and clinical applicability. For aspect (2), the software was successfully validated as a quality assurance tool in the STAR treatment planning procedure for 5 VT cases within the German RAVENTA trial. In particular, the 2D-3D registration method eliminates the need to interpret proprietary formats exported from different EAM systems. The semi-automatic VT target registration methods enable quality assurance of the manually transferred cardiac CTV, reducing clinician-dependent inconsistencies and enhancing the safety and robustness of the VT target registration. Additionally, retrospective findings of incorrectly transferred VT targets could potentially help explain VT recurrences. In a cross-validation study addressing (3), the proposed 2D-3D registration method and the 3D-3D registration method from the 3D Slicer extension EAMapReader produced nearly identical cardiac CTV structures, indicating that both methods are suitable for quality assurance and VT target transfer, avoiding mistargeting and providing standardized workflows. Finally, regarding aspect (4), this dissertation presents an electrocardiogram-gated, cardiac CT-based, patient- and segment-specific cardiac motion estimation method using intensity-based non-rigid automatic image registration in STAR for VT.
The method was applied to case data from 10 STAR-treated VT patients, and the estimated cardiac motion demonstrated considerable individual variability in the cardiac CTVs and the 17 LV segments across patients, highlighting the need for individualized cardiac ITV margins and motion management strategies to enhance accuracy and effectiveness in STAR. The analysis also provides reference data on cardiac motion for STAR treatment planning in VT patients, and the method has been integrated into the proposed software as a module. In summary, three practical semi-automatic VT target registration methods were developed and validated, and a patient- and segment-specific cardiac motion estimation method was proposed. These methods bridge the gap between EAM systems and radiation TPS, enhancing STAR performance and improving VT patient outcomes, with potential for future clinical applications.
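As a loose illustration of intensity-based non-rigid registration between two cardiac phases (a sketch assuming SimpleITK; file names, grid size, metric, and optimizer settings are illustrative and not the dissertation's configuration):

```python
import SimpleITK as sitk

# Two phases of an ECG-gated cardiac CT (hypothetical file names).
fixed = sitk.ReadImage("phase_00.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("phase_40.nii.gz", sitk.sitkFloat32)

# B-spline transform on a coarse control-point grid.
tx = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetOptimizerAsLBFGSB(gradientConvergenceTolerance=1e-5,
                         numberOfIterations=100)
reg.SetInitialTransform(tx, inPlace=True)
reg.SetInterpolator(sitk.sitkLinear)

out_tx = reg.Execute(fixed, moving)

# Dense displacement field; its magnitude inside a segment mask would
# give a per-segment motion estimate for ITV margin definition.
disp = sitk.TransformToDisplacementField(out_tx,
                                         sitk.sitkVectorFloat64,
                                         fixed.GetSize(),
                                         fixed.GetOrigin(),
                                         fixed.GetSpacing(),
                                         fixed.GetDirection())
```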
Item: The role of eco-driving feedback displays in drivers' information processing and energy efficiency in electric vehicles (2025). Gödker, Markus.

In the context of the transition to sustainable transportation, understanding the cognitive mechanisms that underlie energy-efficient driver behavior is critical. This cumulative dissertation investigates how ecodriving feedback displays influence drivers' information processing and achieved energy efficiency in battery electric vehicles. The main objective is to explain the psychological processes underlying operational (maneuver-based) ecodriving and to identify how ecodriving feedback displays can effectively support the acquisition of energy-related comprehension and improve driving behavior. Grounded in theories from engineering psychology and human factors, this work introduces and empirically validates the construct of Energy Dynamics Awareness (EnDynA), a domain-specific adaptation of situation awareness tailored to electric vehicle driving. EnDynA captures drivers' awareness of current and anticipated energy flows and is a cognitive foundation for energy-efficient real-time decision-making. The dissertation comprises four empirical articles combining online and driving simulator studies. Article 1 introduces the concept of EnDynA and its assessment through subjective (experienced EnDynA) and objective (actual EnDynA) measures. It demonstrates that feedback displays with higher informational value, such as instantaneous consumption displays extended with distance-based information, significantly improve experienced EnDynA. Article 2 extends this approach using a mental workload manipulation and a novel self-controlled occlusion paradigm. The results reveal that increased workload reduces visual attention to energy displays and impairs actual EnDynA, underscoring the role of attentional resources. Article 3 shows in a repeated-trials simulator experiment that richer feedback improves experienced EnDynA and leads to measurable gains in operational ecodriving performance. Article 4 compares instantaneous and predictive feedback systems and reveals a moderating effect of situation complexity: conventional feedback facilitates experiential learning under low complexity, whereas predictive guidance is more effective in high-demand conditions. Together, the studies provide converging evidence that ecodriving feedback displays can support drivers' cognitive processing, learning, and behavior, particularly when designed to match informational needs and situational demands. Theoretically, the work contributes a domain-specific extension of situation awareness theory, EnDynA. Methodologically, it introduces and refines tools for assessing energy-related awareness, attention, and behavior. Practically, it formulates actionable design recommendations for adaptive feedback systems in electric mobility. In sum, this dissertation shows that ecodriving feedback displays, when designed with psychological theory in mind, can close the cognitive information processing loop between perception, comprehension, and action in electric vehicle driving. By fostering EnDynA, such systems enable drivers to regulate energy use more effectively, contributing to improved driver performance, enhanced user experience, and the broader goals of sustainable mobility.

Item: Einsatz von Nanotechnologien in der Präzisionsmedizin (2024). Wendt, Regine.

Precision medicine takes individual genetic, environmental, and lifestyle differences into account in order to tailor diagnoses, prevention strategies, and treatments to the individual. Introducing nanotechnologies into precision medicine offers opportunities to improve personalized and targeted approaches to diagnosis, therapy, and research. Nanotechnologies have the potential to detect, monitor, and treat diseases in the body at the molecular level. A particularly relevant nanotechnology are nanodevices: miniaturized electronic, biological, or biohybrid devices with nanoscale components that have received attention in the medical context since the early 2000s. Integrating nanotechnologies and nanodevices into medical research and practice poses challenges, and this work addresses them with innovative strategies to improve the effectiveness and applicability of nanotechnologies in medicine. For example, comprehensive concepts for the construction and effective use of nanodevices are lacking. A promising and universal solution are DNA nanonetworks based on DNA tile nanorobots; these are introduced, and their practical application and potential are illustrated using an example scenario for in-vitro disease detection. Since few nanodevices can currently be tested in vivo, simulations above all make it possible to test and adapt theories and concepts for nanodevice development. Realistic simulation of nanodevices in their area of deployment, the human body, is therefore essential for evaluating research hypotheses and accelerating progress in this field. Although simulation approaches exist, a holistic architecture for realistically modeling complex scenarios has been missing. This work presents a comprehensive simulation architecture called MEHLISSA, which enables the modeling of medical nanonetworks at different levels, from the body level through the organ and capillary levels down to the cell level. Four relevant scenarios are modeled to demonstrate the benefits of simulation and the use of nanodevices.
These scenarios comprise improving medical interventions through individual body models as a basis for digital twins, continuous health monitoring using nanodevices in the bloodstream, the prevention of metastases by nanodevices, and in-vivo liquid biopsy for cancer diagnostics and monitoring. The simulations show promising results, including the reliable determination of thresholds for relevant markers in the body and the detection of minute amounts of ctDNA by nanodevices; they demonstrate the potential benefit of nanodevices for improving the diagnosis and treatment of diseases. Another open problem is the precise localization of nanodevices and disease markers, which is an important basis for improved disease detection and targeted drug delivery. For this purpose, a new approach is developed based on local pattern recognition and individual proteome fingerprints of important organs and tissues. Combining proteome fingerprinting with DNA nanonetworks enables precise localization of disease markers, and the simulations demonstrate the effectiveness of the method in nine essential organs within a few minutes. The results suggest that the combination of proteome fingerprinting and DNA nanonetworks could be decisive for more accurate and faster diagnosis and treatment. This dissertation thus contributes to the research and application of nanotechnologies in medicine by providing theoretical foundations as well as practical applications and simulations. It becomes clear that the use of nanotechnologies in medicine can lead to improved early detection and treatment of diseases.
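A toy sketch of the kind of fingerprint matching such a localization scheme implies; the organ names, protein vectors, and cosine-similarity rule below are invented for illustration and are not the dissertation's actual method or data:

```python
import numpy as np

# Hypothetical proteome fingerprints: relative abundances of a few
# marker proteins per organ (purely illustrative values).
fingerprints = {
    "liver":  np.array([0.50, 0.10, 0.30, 0.10]),
    "kidney": np.array([0.10, 0.55, 0.15, 0.20]),
    "lung":   np.array([0.20, 0.20, 0.10, 0.50]),
}

def locate(measured):
    """Match a locally measured protein profile to the closest fingerprint."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(fingerprints, key=lambda organ: cosine(measured, fingerprints[organ]))

# A noisy local measurement taken by a nanodevice (synthetic).
sample = np.array([0.48, 0.12, 0.28, 0.12])
print(locate(sample))  # -> "liver"
```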
Item: Towards understanding convolutional neural networks through visualization and systematic simplification (2025). Linse, Christoph.

Black-box systems like Convolutional Neural Networks (CNNs) have transformed the field of computer vision. While visualization tools have helped explore and explain CNNs, their inner workings remain opaque, particularly how they detect specific features. As deep learning applications become more widespread across various fields, understanding these models becomes crucial to avoid misinterpretation and bias, which can seriously affect society. This research motivates holistic visualization approaches that show various aspects of CNNs. Existing visualizations often focus on a few aspects, answering specific questions; combining them in comprehensive software could provide a more holistic view of CNNs and their inner processes. While 2D space cannot present all relevant information due to screen-size restrictions, 3D environments offer new representation and interaction opportunities. We therefore enable the visualization of large CNNs in a virtual 3D space. This work further contributes to the visualization field by improving the activation maximization method for feature visualization, which previously struggled with local maxima. In addition to visualization, this research increases CNN transparency through systematic simplification: we use pre-defined convolution filters from traditional image processing in modern CNN architectures. Instead of changing the filters during training, the training process finds linear combinations of the pre-defined filter outputs. Our Pre-defined Filter Convolutional Neural Networks (PFCNNs), with nine distinct edge and line detectors, generalize better than standard CNNs, especially on smaller datasets. For ResNet18, we observed test accuracies increased by 5 to 11 percentage points, with the same number of trainable parameters, on the Fine-Grained Visual Classification of Aircraft, StanfordCars, Caltech-UCSD Birds-200-2011, and 102 Category Flower datasets. The results imply that many image recognition problems do not require training the convolution kernels, and for practical use PFCNNs can even save trainable weights.
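A minimal PyTorch sketch of the stated idea, with two Sobel kernels standing in for the nine edge and line detectors used in the dissertation: the convolution kernels stay fixed, and only a 1x1 convolution, i.e. a linear combination of the filter outputs, is trained.

```python
import torch
import torch.nn as nn

class PreDefinedFilterBlock(nn.Module):
    """Fixed convolution filters; only their linear combination is learned."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        # Two illustrative 3x3 edge detectors (stand-ins for the nine
        # edge/line detectors of the dissertation).
        sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        sobel_y = sobel_x.t()
        bank = torch.stack([sobel_x, sobel_y])                 # (2, 3, 3)
        weight = bank.repeat(in_channels, 1, 1).unsqueeze(1)   # (2*C_in, 1, 3, 3)
        # Depthwise convolution applies both fixed kernels to every channel.
        self.filters = nn.Conv2d(in_channels, 2 * in_channels, 3,
                                 padding=1, groups=in_channels, bias=False)
        self.filters.weight = nn.Parameter(weight, requires_grad=False)
        # Trainable 1x1 convolution = learned linear combination of outputs.
        self.combine = nn.Conv2d(2 * in_channels, out_channels, 1)

    def forward(self, x):
        return self.combine(self.filters(x))

block = PreDefinedFilterBlock(3, 16)
y = block(torch.randn(1, 3, 32, 32))   # -> (1, 16, 32, 32)
```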
Item: Stufenbasierte Automation zur Unterstützung der Führungsprozesse von Einsatzorganisationen am Beispiel der präklinischen Notfallrettung (2025). Berndt, Henrik Matthias.

Emergency medical services are in the midst of a digital transformation. At present this manifests itself mainly in the growing use of mobile, tablet-based application systems for digital incident documentation, but it will become more far-reaching in the future. A particular scenario is the mass-casualty incident, in which an initial shortage of treatment capacity forces a departure from routine procedures. Their rarity, complexity, and dynamics in particular make large-scale mass-casualty incidents a challenge, above all for incident commanders, who must grasp and understand the situation, set up organizational structures ad hoc, and deploy on-site and arriving rescue personnel effectively, efficiently, and with a focus on safety. This work investigates whether and how incident commanders in a digitalized emergency medical service can be supported by automation during mass-casualty incidents. Based on a thorough analysis and the current state of the art, existing problems regarding effectiveness and efficiency are identified. On this basis, a prototype application system for incident command is designed and implemented that attempts to solve the identified challenges with automation. During development, scientific constructs such as "situation awareness" and "usability" are considered and put into context. With regard to automation, existing level-based models are examined and merged. This research concludes that suitable levels of automation for mass-casualty incidents should not be defined universally but rather depend on the tasks and the situation. Aiming at comprehensible automation functionality, a model with four levels of automation is developed and implemented, comprising manual control, two levels of partial automation, and full automation with information to the user. In a summative evaluation with incident commanders from emergency medical services, the system, and in particular its automation functions, is examined with regard to usability, usefulness, and users' situation awareness.

Item: Security analysis of confidential VMs on modern server architectures (2025-06-24). Wilke, Luca.

Cloud computing has transformed data management and IT practices for organizations and individuals alike, offering unmatched scalability, flexibility, and cost-efficiency. However, it comes with privacy concerns, as the cloud service provider can access all processed data. Trusted Execution Environments (TEEs) are one potential solution, offering a new form of isolation that locks out even the infrastructure operator: attacks from any software component outside the TEE are thwarted by novel access restrictions, while physical attacks are prevented by memory encryption. Even the operating system or hypervisor cannot overcome these restrictions. With Intel SGX, Intel TDX, and AMD SEV-SNP, both major x86 CPU vendors offer TEEs on their server CPUs. This thesis scrutinizes the extent to which the current TEE generation delivers on its security promises. We start by describing the isolation mechanisms implemented by SGX, TDX, and SEV-SNP. Building on these insights, we demonstrate that the trend towards deterministic memory encryption without integrity or freshness guarantees has several shortcomings. We show that monitoring deterministic ciphertexts for changes allows leaking information about the plaintext, which we exploit on SEV-SNP; SGX and TDX prevent straightforward exploitation by restricting software attackers from reading and writing the ciphertext, while SEV-SNP only restricts writing. Next, we challenge the security of such access restrictions by showing that an attacker with brief physical access to the memory modules can create aliases in the address space that bypass these safeguards. We exploit this on SEV-SNP to re-enable write access for software attackers, culminating in a devastating attack that forges attestation reports, undermining all trust in SEV-SNP; on SGX and TDX, such attacks are mitigated by a dedicated alias check at boot time. Finally, we examine the security of VM-based TEEs against single-stepping attacks, which allow instruction-granular tracing and have led to numerous high-stakes attacks on SGX. We show that SEV-SNP is also vulnerable to single-stepping and provide a software framework enabling easy access to single-stepping on SEV for future research. We then analyze the single-stepping security of Intel TDX, which comes with a built-in mitigation comprising a detection heuristic and a prevention mode. We uncover a flaw in the heuristic that stops the prevention mode from being activated, thereby re-enabling single-stepping on TDX, and we unveil an inherent flaw in the prevention mode itself that leaks fine-grained information about the control flow.

Item: Integrated methodology for enhanced low-dose PET imaging (2025). Elmoujarkach, Ezzat A.

Item: Novel Machine Learning Methods for Video Understanding and Medical Analysis (2025-06-26). Hu, Yaxin.

Artificial intelligence has developed rapidly over the past decade and has penetrated nearly every aspect of life; new applications in areas such as human-computer interaction, virtual reality, autonomous driving, and intelligent medical systems have emerged in large numbers. Video is high-dimensional data with one more dimension than images and therefore requires more computing resources. As more and more high-quality, large-scale video datasets are released, video understanding has become a cutting-edge research direction in the computer vision community, and action recognition is one of its most important tasks. There are many successful network architectures for video action recognition. In our work, we focus on proposing new designs and architectures for video understanding and on investigating their applications in medicine. We introduce a novel RGBt sampling strategy that fuses temporal information into single frames without increasing the computational load, and we explore different color sampling strategies to further improve network performance; frames with temporal information obtained by fusing the green channels of different frames achieve the best results. We use tubes of different sizes to embed richer temporal information into tokens without increasing the computational load. We also introduce a novel bio-inspired neuron model, the MinBlock, to make the network more information-selective. Furthermore, we propose a spatiotemporal architecture that slices videos in space-time and thus enables 2D-CNNs to directly extract temporal information. All of the above methods are evaluated on at least two benchmark datasets and all perform better than the baselines. We also apply our networks in medicine: we use our slicing 2D-CNN architecture for glaucoma and visual impairment analysis and find that visual impairments may affect human walking patterns, making video analysis relevant for diagnosis. We also design a machine learning model to diagnose psychosis and show that it is possible to predict whether clinical high-risk patients will actually develop psychosis.
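A small NumPy sketch of one plausible reading of the RGBt strategy (the exact construction in the dissertation may differ): the green channels of three temporally spaced frames are stacked into a single three-channel image, so one frame carries motion information at no extra computational cost.

```python
import numpy as np

def rgbt_frame(frames, t, stride=2):
    """Fuse temporal context into one frame by stacking green channels.

    frames: uint8 array of shape (T, H, W, 3); t: center frame index.
    Returns an (H, W, 3) image whose channels are the green channels of
    frames t - stride, t, and t + stride.
    """
    g = frames[:, :, :, 1]  # green channel of every frame
    return np.stack([g[t - stride], g[t], g[t + stride]], axis=-1)

# Synthetic clip: 8 frames of 32x32 RGB noise.
clip = np.random.randint(0, 256, size=(8, 32, 32, 3), dtype=np.uint8)
fused = rgbt_frame(clip, t=4)   # same shape and cost as a single RGB frame
print(fused.shape)              # (32, 32, 3)
```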
Item: Non-invasive estimation of respiratory effort (2025). Graßhoff, Jan.

Item: Cutting-edge precision (2025). Erben, Niclas.

Item: Survival and parasite spread in a spatial host-parasite model with host immunity (2025). Franck, Sascha Josef.

We introduce a stochastic model for the invasion of a parasite population into a spatially structured host population that includes an individual-based adaptive immune response; we call it the "Spatial Infection Model with Host Immunity", or SIMI for short. In the SIMI, parasites move as independent simple random walks on a graph until they reach a vertex inhabited by a host. With a given probability, the host repels the infection, kills the parasite, and adapts its probability of repelling the next infection. After a successful infection attempt, both the host and the attacking parasite die, and only the parasite leaves a random number of offspring. We study the SIMI on the integer line and show that parasites have a positive survival probability if and only if the mean number of offspring is greater than the mean number of infection attempts needed. Furthermore, we study the speed at which the parasites invade the host population. If the probability that a host, after repelling an infection, also repels the next one does not grow fast enough, then parasites propagate across the host population at linear speed; if that probability grows quickly enough, the propagation speed is polynomial with exponent less than 1. Finally, we investigate the SIMI on higher-dimensional graphs with hosts that are either totally immune and never get infected, or get infected on the first attempt. We show that the survival probability undergoes a phase transition in the frequency of totally immune hosts.
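A bare-bones simulation sketch of the dynamics as paraphrased above, on the integer line; the adaptation rule and offspring distribution are illustrative choices, not the exact laws analyzed in the thesis.

```python
import random

def simulate_simi(steps=200, p0=0.5, adapt=1.5, mean_offspring=2, cap=10_000):
    """Toy SIMI on the integer line Z.

    Every site except the origin hosts one host, which repels an attack
    with its current repel probability (initially p0) and multiplies it
    by `adapt` (capped at 1) after each successful defence.
    """
    repel = {}              # site -> current repel probability
    dead_hosts = {0}        # sites whose host has been killed
    parasites = [0]         # one parasite starts at the (empty) origin
    for _ in range(steps):
        if not parasites:
            return "parasites extinct"
        if len(parasites) > cap:
            break                                    # population exploded
        nxt = []
        for x in parasites:
            x += random.choice((-1, 1))              # simple random walk step
            if x in dead_hosts:                      # empty site: keep moving
                nxt.append(x)
            elif random.random() < repel.get(x, p0): # host repels, parasite dies
                repel[x] = min(1.0, repel.get(x, p0) * adapt)
            else:                                    # infection: host dies and
                dead_hosts.add(x)                    # parasite leaves offspring
                nxt += [x] * random.randint(0, 2 * mean_offspring)
        parasites = nxt
    if not parasites:
        return "parasites extinct"
    return f"parasites alive, front at distance {max(map(abs, parasites))}"

random.seed(1)
print(simulate_simi())
```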
Item: Invariant integration for prior-knowledge enhanced deep learning architectures (2025). Rath, Matthias.

Incorporating prior knowledge into Deep Neural Networks is a promising approach to improve their sample efficiency by effectively limiting the search space the learning algorithm needs to cover. This reduces the number of samples a network needs to be trained on to reach a specific performance. Geometrical prior knowledge is knowledge about input transformations that affect the output in a predictable way, or not at all. It can be built into Deep Neural Networks in a mathematically sound manner by enforcing in- or equivariance. Equivariance is the property of a map to behave predictably under input transformations; convolutions are an example of a translation-equivariant map, where a translation of the input results in a shifted output. Group-equivariant convolutions are a generalization that achieves equivariance towards more general transformation groups such as rotations or flips; using them within Neural Networks embeds the desired equivariances in addition to translation. Invariance is a closely related concept, where the output of a function does not change when its input is transformed. Invariance is often a desirable property of a feature extractor in the context of classification: while the extracted features need to encode the information required to discriminate between different classes, they should be invariant to intra-class variations, i.e., to transformations that map samples within the same class subspace. In the context of Deep Neural Networks, the required invariant representations can be obtained with mathematical guarantees by applying group-equivariant convolutions followed by global pooling over the group and spatial domains. While pooling guarantees invariance, it also discards information and is thus not ideal.

In this dissertation, we investigate the transition from equi- to invariance within Deep Neural Networks that leverage geometrical prior knowledge. We replace the spatial pooling operation with Invariant Integration, a method that guarantees invariance while adding targeted model capacity rather than destroying information. We first propose an Invariant Integration layer for rotations based on the group average calculated with monomials. The layer can readily be used within a Neural Network and supports backpropagation. The monomial parameters are selected either by iteratively optimizing the least-squares error of a linear classifier or based on neural network pruning methods. We then replace the monomials with functions more commonly encountered in the context of Neural Networks, such as learnable weighted sums or self-attention, thereby streamlining the training procedure of Neural Networks enhanced with Invariant Integration. Finally, we expand Invariant Integration towards flips and scales, highlighting the universality of our approach, and propose a multi-stream architecture that leverages invariance to multiple transformations at once. This allows us to efficiently combine multiple invariances and select the best-fitting invariant solution for the specific problem. The conducted experiments show that applying Invariant Integration in combination with group-equivariant convolutions significantly boosts the sample efficiency of Deep Neural Networks, improving performance when the amount of available training data is limited.
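A toy NumPy sketch of the group-average idea behind the layer: a monomial in feature-map values is averaged over a transformation group (here the four 90-degree rotations; the dissertation works with finer rotation groups and learned monomial parameters), yielding a provably invariant scalar.

```python
import numpy as np

def invariant_integration_c4(fmap, exponents=(1, 2)):
    """Group average of a monomial over the C4 rotation group.

    fmap: square 2D feature map. The monomial multiplies the values at
    two fixed offsets (the center pixel and its right neighbour) raised
    to the given exponents; averaging this over all four 90-degree
    rotations of the input makes the resulting scalar rotation-invariant.
    """
    vals = []
    c = fmap.shape[0] // 2
    for k in range(4):                     # the four elements of C4
        rot = np.rot90(fmap, k)
        vals.append(rot[c, c] ** exponents[0] * rot[c, c + 1] ** exponents[1])
    return float(np.mean(vals))           # invariant under 90-degree rotations

f = np.random.rand(7, 7)
# Rotating the input permutes the four summands, so the average is unchanged.
assert np.isclose(invariant_integration_c4(f),
                  invariant_integration_c4(np.rot90(f)))
```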