Informatik/Technik
Permanent URI for this section: https://epub.uni-luebeck.de/handle/zhb_hl/4
Recent Submissions
Item: The Role of eco-driving feedback displays in drivers’ information processing and energy efficiency in electric vehicles (2025) Gödker, Markus

In the context of the transition to sustainable transportation, understanding the cognitive mechanisms that underlie energy-efficient driver behavior is critical. This cumulative dissertation investigates how eco-driving feedback displays influence drivers’ information processing and achieved energy efficiency in battery electric vehicles. The main objective is to explain the psychological processes underlying operational (maneuver-based) eco-driving and to identify how eco-driving feedback displays can effectively support the acquisition of energy-related comprehension and improve driving behavior. Grounded in theories from engineering psychology and human factors, this work introduces and empirically validates the construct of Energy Dynamics Awareness (EnDynA), a domain-specific adaptation of situation awareness tailored to electric vehicle driving. EnDynA captures drivers’ awareness of current and anticipated energy flows and is a cognitive foundation for energy-efficient real-time decision-making. The dissertation comprises four empirical articles combining online and driving simulator studies. Article 1 introduces the concept of EnDynA and its assessment through subjective (experienced EnDynA) and objective (actual EnDynA) measures. The article demonstrates that feedback displays with higher informational value, such as instantaneous consumption displays extended with distance-based information, significantly improve experienced EnDynA. Article 2 extends this approach using a mental-workload manipulation and a novel self-controlled occlusion paradigm. Results reveal that increased workload reduces visual attention to energy displays and impairs actual EnDynA, underscoring the role of attentional resources.
In a repeated-trials simulator experiment, Article 3 shows that richer feedback improves experienced EnDynA and leads to measurable gains in operational eco-driving performance. Article 4 compares instantaneous and predictive feedback systems and reveals a moderating effect of situation complexity: conventional feedback facilitates experiential learning under low complexity, whereas predictive guidance is more effective in high-demand conditions. Together, the studies provide converging evidence that eco-driving feedback displays can support drivers’ cognitive processing, learning, and behavior, particularly when designed to match informational needs and situational demands. Theoretically, the work contributes a domain-specific extension of situation awareness theory, called EnDynA. Methodologically, it introduces and refines tools for assessing energy-related awareness, attention, and behavior. Practically, it formulates actionable design recommendations for adaptive feedback systems in electric mobility. In sum, this dissertation shows that eco-driving feedback displays, when designed with psychological theory in mind, can close the cognitive information processing loop between perception, comprehension, and action in electric vehicle driving. By fostering EnDynA, such systems enable drivers to regulate energy use more effectively, contributing to improved driver performance, enhanced user experience, and the broader goals of sustainable mobility.

Item: Einsatz von Nanotechnologien in der Präzisionsmedizin (2024) Wendt, Regine

Precision medicine takes individual genetic, environmental, and lifestyle differences into account in order to tailor diagnoses, prevention strategies, and treatments to the individual. Introducing nanotechnologies into precision medicine offers opportunities to improve personalized and targeted approaches to diagnosis, therapy, and research. Nanotechnologies have the potential to detect, monitor, and treat diseases in the body at the molecular level. A particularly relevant nanotechnology is nanodevices: miniaturized electronic, biological, or biohybrid devices with nanoscale components that have received attention in medical contexts since the early 2000s. Integrating nanotechnologies and nanodevices into medical research and practice poses challenges. This thesis addresses these challenges and offers innovative strategies for improving the effectiveness and applicability of nanotechnologies in medicine. For example, comprehensive concepts for constructing and effectively using nanodevices are lacking. A promising and universal solution is DNA nanonetworks based on DNA tile nanorobots. These are introduced, and an example scenario of in vitro disease detection illustrates their practical application and potential. Since few nanodevices can currently be tested in vivo, it is above all simulations that make it possible to test and refine the theories and concepts behind nanodevice development. Realistic simulation of nanodevices in their operating environment, the human body, is therefore essential for evaluating research hypotheses and accelerating progress in this field. Although simulation approaches exist, a holistic architecture for realistically modeling complex scenarios has been missing. This thesis presents a comprehensive simulation architecture called MEHLISSA, which enables medical nanonetworks to be modeled at several levels, from the whole body through organs and capillaries down to the cellular level. Four relevant scenarios are modeled to demonstrate the benefits of simulation and the use of nanodevices. These scenarios comprise improving medical interventions through individual body models as a basis for digital twins, continuous health monitoring by nanodevices in the bloodstream, metastasis prevention by nanodevices, and in vivo liquid biopsy for cancer diagnostics and monitoring. The simulations show promising results, including the reliable determination of thresholds for relevant markers in the body and the detection of minute amounts of ctDNA by nanodevices. They demonstrate the potential of nanodevices to improve the diagnosis and treatment of diseases. Another open problem is the precise localization of nanodevices and disease markers, which is an important basis for improved disease detection and targeted drug delivery. To this end, a new approach based on local pattern recognition and individual proteome fingerprints of important organs and tissues is developed. Combining proteome fingerprinting with DNA nanonetworks enables precise localization of disease markers. The simulations confirm the effectiveness of the method in nine major organs within a few minutes. The results suggest that the combination of proteome fingerprinting and DNA nanonetworks could be decisive for faster and more accurate diagnosis and treatment. This dissertation thus contributes to the research and application of nanotechnologies in medicine by providing theoretical foundations as well as practical applications and simulations. It becomes clear that the use of nanotechnologies in medicine can lead to improved early detection and treatment of diseases.

Item: Towards understanding convolutional neural networks through visualization and systematic simplification (2025) Linse, Christoph

Black-box systems like Convolutional Neural Networks (CNNs) have transformed the field of computer vision. While visualization tools have helped explore and explain CNNs, their inner workings remain opaque, particularly how they detect specific features. As deep learning applications become more widespread across various fields, it becomes crucial to understand these models. This understanding is needed to avoid misinterpretation and bias, which can seriously affect society. This research motivates holistic visualization approaches, which show various aspects of CNNs. Existing visualizations often focus on a few aspects, answering specific questions. Combining them in comprehensive software could provide a more holistic view of CNNs and their inner processes. While 2D space cannot present all relevant information due to screen size restrictions, 3D environments offer new representation and interaction opportunities. Therefore, we enable the visualization of large CNNs in a virtual 3D space. This work further contributes to the visualization field by improving the activation maximization method for feature visualization, which previously struggled with local maxima. In addition to visualization, this research increases CNN transparency through systematic simplification. We use pre-defined convolution filters from traditional image processing in modern CNN architectures. Instead of changing the filters during training, the training process finds linear combinations of the pre-defined filter outputs. Our Pre-defined Filter Convolutional Neural Networks (PFCNNs) with nine distinct edge and line detectors generalize better than standard CNNs, especially on smaller datasets.
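The pre-defined-filter idea described above can be sketched in a few lines. This is our illustrative toy model, not the thesis code: the 3x3 kernels are fixed edge detectors, and training would only ever adjust the mixing weights, never the kernels themselves. All function names and the tiny test image are our own assumptions.

```python
def conv2d(img, kernel):
    """Valid cross-correlation (commonly called convolution in CNNs) of a 2D list with a 3x3 kernel."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h - 2):
        row = []
        for x in range(w - 2):
            acc = 0.0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * img[y + ky][x + kx]
            row.append(acc)
        out.append(row)
    return out

# Two fixed, Sobel-like filters: these are never updated during training.
FILTERS = [
    [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]],   # vertical edges
    [[-1, -2, -1], [0, 0, 0], [1, 2, 1]],   # horizontal edges
]

def pf_layer(img, weights):
    """Linear combination of the fixed filter responses; only `weights` would be trained."""
    responses = [conv2d(img, f) for f in FILTERS]
    h, w = len(responses[0]), len(responses[0][0])
    return [[sum(wt * r[y][x] for wt, r in zip(weights, responses))
             for x in range(w)] for y in range(h)]

# A tiny image with a vertical step edge: left half 0, right half 1.
img = [[0, 0, 1, 1] for _ in range(4)]
out = pf_layer(img, weights=[1.0, 0.0])  # select the vertical-edge response
```

With these weights the layer passes through the vertical-edge response only; a learned weight vector would blend all fixed responses into one output channel.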
For ResNet18, we observed test-accuracy increases of 5 to 11 percentage points with the same number of trainable parameters across the Fine-Grained Visual Classification of Aircraft, Stanford Cars, Caltech-UCSD Birds-200-2011, and 102 Category Flower datasets. The results imply that many image recognition problems do not require training the convolution kernels. For practical use, PFCNNs can even save trainable weights.

Item: Stufenbasierte Automation zur Unterstützung der Führungsprozesse von Einsatzorganisationen am Beispiel der präklinischen Notfallrettung (2025) Berndt, Henrik Matthias

Emergency medical services are in the midst of a digital transformation. At present this manifests itself mainly in the growing use of mobile, tablet-based application systems for digital mission documentation, but it will reach further in the future. A special scenario is the mass-casualty incident, in which an initial shortage of treatment capacity makes a departure from routine procedures necessary. Their rarity, complexity, and dynamics in particular make large-scale mass-casualty incidents a challenge, above all for incident commanders. These commanders must grasp and understand the situation, set up organizational structures ad hoc, and deploy the rescue personnel on scene and arriving in an effective, efficient, and safety-oriented manner. This thesis investigates whether and how incident commanders in a digitalized emergency medical service can be supported by automation during mass-casualty incidents. Based on a thorough analysis and the current state of the art, existing problems regarding effectiveness and efficiency are identified. On this basis, a prototype application system for incident command is designed and implemented that attempts to solve the identified challenges with automation. During development, scientific constructs such as situation awareness and usability are examined and put into context. With respect to automation, existing level-of-automation models are examined and merged. This research concludes that favorable levels of automation in mass-casualty incidents should not be defined universally but rather depend on the tasks and the situation. Aiming for comprehensible automation functionality, a model with four levels of automation is developed and implemented, comprising manual control, two levels of partial automation, and full automation that keeps the user informed. In a summative evaluation with emergency-service incident commanders, the system and in particular its automation functions are examined with regard to usability, usefulness, and users’ situation awareness.

Item: Security analysis of confidential VMs on modern server architectures (2025-06-24) Wilke, Luca

Cloud computing has transformed data management and IT practices for organizations and individuals alike, offering unmatched scalability, flexibility, and cost-efficiency. However, it comes with privacy concerns, as the cloud service providers can access all processed data. Trusted Execution Environments (TEEs) are one potential solution, offering a new form of isolation that even locks out the infrastructure operator. Attacks from any software component outside the TEE are thwarted by novel access restrictions, while physical attacks are prevented by memory encryption. Even the operating system or hypervisor cannot overcome these restrictions. With Intel SGX, Intel TDX, and AMD SEV-SNP, both major x86 CPU vendors offer TEEs on their server CPUs. This thesis scrutinizes the extent to which the current TEE generation delivers on its security promises.
We start this thesis by describing the isolation mechanisms implemented by SGX, TDX, and SEV-SNP. Building on these insights, we demonstrate that the trend toward deterministic memory encryption without integrity or freshness guarantees has several shortcomings. We show that monitoring deterministic ciphertexts for changes leaks information about the plaintext, which we exploit on SEV-SNP. SGX and TDX prevent straightforward exploitation by restricting software attackers from reading and writing the ciphertext, while SEV-SNP only restricts writing. Next, we challenge the security of such access restrictions by showing that an attacker with brief physical access to the memory modules can create aliases in the address space that bypass these safeguards. We exploit this on SEV-SNP to re-enable write access for software attackers, culminating in a devastating attack that forges attestation reports, undermining all trust in SEV-SNP. On SGX and TDX, such attacks are mitigated by a dedicated alias check at boot time. Finally, we examine the security of VM-based TEEs against single-stepping attacks, which allow instruction-granular tracing and have led to numerous high-stakes attacks on SGX. We show that SEV-SNP is also vulnerable to single-stepping and provide a software framework enabling easy access to single-stepping on SEV for future research. Next, we analyze the single-stepping security of Intel TDX, which comes with a built-in mitigation comprising a detection heuristic and a prevention mode. We uncover a flaw in the heuristic that prevents the activation of the prevention mode, thereby re-enabling single-stepping on TDX.
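The ciphertext side channel on deterministic memory encryption described above can be illustrated with a toy model. This is our sketch, not the thesis tooling: `hashlib` stands in for the hardware block cipher, the address acts as a tweak, and all names and values are illustrative. The point is that without freshness, identical plaintext at the same address always produces identical ciphertext, so an observer who can only read ciphertext still learns when a value changes or recurs.

```python
import hashlib

KEY = b"fixed-secret-key"  # unknown to the attacker

def encrypt(addr, plaintext):
    """Deterministic, address-tweaked 'encryption' without freshness (toy stand-in)."""
    return hashlib.sha256(KEY + addr.to_bytes(8, "big") + plaintext).digest()

addr = 0x1000
ct_a = encrypt(addr, b"secret-value-A")
ct_b = encrypt(addr, b"secret-value-B")
ct_a_again = encrypt(addr, b"secret-value-A")

# The attacker never learns the plaintexts, but observes:
changed = ct_a != ct_b          # a write with a *different* value happened
recurred = ct_a == ct_a_again   # the *old* value was written back
```

A probabilistic scheme with freshness (a counter mixed into every encryption) would make both observations useless, which is why the lack of freshness matters.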
Furthermore, we unveil an inherent flaw in the prevention mode that leaks fine-grained information about the control flow.

Item: Integrated methodology for enhanced low-dose PET imaging (2025) Elmoujarkach, Ezzat A.

Item: Novel Machine Learning Methods for Video Understanding and Medical Analysis (2025-06-26) Hu, Yaxin

Artificial intelligence has developed rapidly over the past decade and has penetrated nearly every aspect of life. New applications in areas such as human-computer interaction, virtual reality, autonomous driving, and intelligent medical systems have emerged in large numbers. Video is high-dimensional data with one more dimension than images, and therefore requires more computing resources. As more and more high-quality large-scale video datasets are released, video understanding has become a cutting-edge research direction in the computer vision community. Action recognition is one of the most important tasks in video understanding, and many successful network architectures exist for it. In our work, we focus on proposing new designs and architectures for video understanding and investigating their applications in medicine. We introduce a novel RGBt sampling strategy to fuse temporal information into single frames without increasing the computational load, and explore different color sampling strategies to further improve network performance. We find that frames with temporal information obtained by fusing the green channels of different frames achieve the best results. We use tubes of different sizes to embed richer temporal information into tokens without increasing the computational load. We also introduce a novel bio-inspired neuron model, the MinBlock, to make the network more information-selective. Furthermore, we propose a spatiotemporal architecture that slices videos in space-time and thus enables 2D-CNNs to directly extract temporal information.
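The green-channel fusion idea behind the RGBt strategy described above can be sketched as follows. This is our illustrative reconstruction under stated assumptions, not the thesis code: the three channels of a single network input are taken from the green channels of three consecutive frames, so a plain 2D network sees a motion trace at no extra input cost. Function names and the tiny frames are ours.

```python
def green_channel(frame):
    """Extract the green channel from a frame given as rows of (r, g, b) pixels."""
    return [[px[1] for px in row] for row in frame]

def rgbt_frame(prev_f, cur_f, next_f):
    """Stack three green channels into one 3-channel 'RGBt' input."""
    return [green_channel(prev_f), green_channel(cur_f), green_channel(next_f)]

# A bright pixel moving left to right across three tiny 1x3 frames.
f0 = [[(0, 255, 0), (0, 0, 0), (0, 0, 0)]]
f1 = [[(0, 0, 0), (0, 255, 0), (0, 0, 0)]]
f2 = [[(0, 0, 0), (0, 0, 0), (0, 255, 0)]]

x = rgbt_frame(f0, f1, f2)
# Each channel now shows the object at a different time step, so a 2D network
# receives motion information in an input tensor of the usual single-frame shape.
```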
All of the above methods are evaluated on at least two benchmark datasets and all perform better than their baselines. We also focus on applying our networks in medicine. We use our slicing 2D-CNN architecture for glaucoma and visual-impairment analysis and find that visual impairments may affect human walking patterns, making video analysis relevant for diagnosis. We also design a machine learning model to diagnose psychosis and show that it is possible to predict whether clinical high-risk patients will actually develop psychosis.

Item: Non-invasive estimation of respiratory effort (2025) Graßhoff, Jan

Item: Cutting-edge precision (2025) Erben, Niclas

Item: Survival and parasite spread in a spatial host-parasite model with host immunity (2025) Franck, Sascha Josef

We introduce a stochastic model for the invasion of a parasite population into a spatially structured host population, which includes an individual-based adaptive immune response. We call this the "Spatial Infection Model with Host Immunity", or SIMI for short. In the SIMI, parasites move as independent simple random walks on a graph until they reach a vertex inhabited by a host. With a given probability, the host repels the infection, kills the parasite, and adapts its probability of repelling the next infection. After a successful infection attempt, both the host and the attacking parasite die, and only the parasite leaves a random number of offspring. We study the SIMI on the integer line and show that parasites have a positive survival probability if and only if the mean number of offspring exceeds the mean number of infection attempts needed. Furthermore, we study the speed at which the parasites invade the host population. If a host's probability of also repelling the next infection does not grow fast enough after repelling one, parasites propagate across the host population at a linear speed.
However, if that probability grows quickly enough, the propagation speed is polynomial with an exponent less than 1. Finally, we investigate the SIMI on higher-dimensional graphs with hosts that are either totally immune and never get infected, or get infected on the first attempt. We show that the survival probability undergoes a phase transition in the frequency of totally immune hosts.

Item: Invariant integration for prior-knowledge enhanced deep learning architectures (2025) Rath, Matthias

Incorporating prior knowledge into Deep Neural Networks is a promising approach to improving their sample efficiency by effectively limiting the search space the learning algorithm needs to cover. This reduces the number of samples a network needs to be trained on to reach a specific performance. Geometrical prior knowledge is knowledge about input transformations that affect the output in a predictable way, or not at all. It can be built into Deep Neural Networks in a mathematically sound manner by enforcing in- or equivariance. Equivariance is the property of a map to behave predictably under input transformations. Convolutions are an example of a translation-equivariant map, where a translation of the input results in a shifted output. Group-equivariant convolutions are a generalization that achieves equivariance towards more general transformation groups such as rotations or flips. Using group-equivariant convolutions within Neural Networks embeds the desired equivariance in addition to translation equivariance. Invariance is a closely related concept, where the output of a function does not change when its input is transformed. Invariance is often a desirable property of a feature extractor in the context of classification. While the extracted features need to encode the information required to discriminate between different classes, they should be invariant to intra-class variations, i.e., to transformations that map samples within the same class subspace.
In the context of Deep Neural Networks, the required invariant representations can be obtained with mathematical guarantees by applying group-equivariant convolutions followed by global pooling over the group and spatial domains. While pooling guarantees invariance, it also discards information and is thus not ideal. In this dissertation, we investigate the transition from equivariance to invariance within Deep Neural Networks that leverage geometrical prior knowledge. To this end, we replace the spatial pooling operation with Invariant Integration, a method that guarantees invariance while adding targeted model capacity rather than destroying information. We first propose an Invariant Integration Layer for rotations based on the group average calculated with monomials. The layer can be readily used within a Neural Network and allows backpropagation through it. The monomial parameters are selected either by iteratively optimizing the least-squares error of a linear classifier or based on neural network pruning methods. We then replace the monomials with functions that are more commonly encountered in the context of Neural Networks, such as learnable weighted sums or self-attention, thereby streamlining the training procedure of Neural Networks enhanced with Invariant Integration. Finally, we expand Invariant Integration towards flips and scales, highlighting the universality of our approach. We further propose a multi-stream architecture that is able to leverage invariance to multiple transformations at once. This approach allows us to efficiently combine multiple invariances and select the best-fitting invariant solution for the specific problem to solve.
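The group-average construction underlying Invariant Integration can be illustrated with a toy example. This is our minimal sketch, not the thesis layer: averaging a monomial of a feature map over the four 90-degree rotations (the group C4) yields a descriptor that is exactly invariant to those rotations. The function names, the monomial choice, and the 2x2 map are our assumptions.

```python
def rot90(m):
    """Rotate a square 2D list by 90 degrees clockwise."""
    n = len(m)
    return [[m[n - 1 - c][r] for c in range(n)] for r in range(n)]

def group_average(feature_map, f):
    """Average f over the cyclic group C4 of 90-degree rotations."""
    total, m = 0.0, feature_map
    for _ in range(4):
        total += f(m)
        m = rot90(m)
    return total / 4.0

# A monomial-style function of the map: a degree-2 product of two fixed entries.
mono = lambda m: m[0][0] * m[0][1]

x = [[1.0, 2.0], [3.0, 4.0]]
inv_x = group_average(x, mono)
inv_rx = group_average(rot90(x), mono)  # same value: the descriptor is invariant
```

Unlike max or average pooling over the group, the averaged monomial retains interactions between entries, which is the sense in which invariant integration adds targeted capacity instead of discarding information.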
The conducted experiments show that applying Invariant Integration in combination with group-equivariant convolutions significantly boosts the sample efficiency of Deep Neural Networks, improving performance when the amount of available training data is limited.

Item: MPC-based vehicle trajectory tracking using machine learning for parameter optimization and fault detection (2025) Lubiniecki, Toni

This thesis explores advancements in trajectory tracking control and fault detection within automated vehicle systems, focusing on two main areas: developing a learning-based model predictive control algorithm to enhance tracking accuracy, and evaluating various neural networks as fault detection systems for trajectory tracking controllers. Both parts are assessed in a high-fidelity simulation environment. The first part presents two adaptive model predictive controllers that use vehicle information, trajectory data, and tracking information to adapt the vehicle model within the model predictive control system, compensating for tracking accuracy lost to model mismatches. One approach employs a trajectory-dynamic lookup table, while the more advanced approach uses Gaussian process regression with clustering. A thorough simulation study on real-world racetracks with varying dynamics demonstrates that the advanced approach effectively manages condition changes, significantly improves tracking performance, handles unknown trajectories with similar improvements, and memorizes adapted behavior through clustering. The second part evaluates the effectiveness of four types of neural networks as fault detection systems. These networks detect changes in the vehicle, environmental shifts, or discrepancies between the applied vehicle model and the real vehicle. Trained a priori through supervised learning, the networks use tracking information, controller outputs, and vehicle data. The evaluation distinguishes between known and unknown fault conditions.
The results suggest that neural networks are generally suitable as fault detection systems. Differences in effectiveness among the network types are minor for known fault conditions but more significant for unknown conditions. Integrating adaptive model predictive control with neural-network-based fault detection shows promise for developing robust and fault-tolerant control systems, enhancing accuracy and maintaining operational integrity in dynamic environments for trajectory tracking.

Item: A Fourier-analytical approach for field-free-point magnetic particle imaging (2025) Maaß, Marco

Magnetic particle imaging is a tracer-based medical imaging technique that measures the spatial distribution of superparamagnetic nanoparticles. Alternating magnetic fields with different excitation sequences are used to measure the nanoparticle distribution in a scanner. Usually, the simplified Langevin model of paramagnetism is used as a first approximation of the complicated nonlinear magnetization behavior of the nanoparticles. Although the modified Langevin model of paramagnetism can provide suitable image reconstructions for one-dimensional excitation, the situation is more complicated for higher-dimensional excitation, as several aspects cannot be fully explained by the Langevin model. A well-known example is the spatial similarity of the frequency components of the system function to tensor products of Chebyshev polynomials. This was observed for higher-dimensional excitation of the Lissajous-trajectory type and remained unproven for almost ten years. With the aim of explaining such observations mathematically, this thesis makes an important contribution to the mathematical foundations of magnetic particle imaging. To this end, the spatio-temporal system function based on the Langevin model is transformed into the frequency domain using various concepts of Fourier analysis. The scientific contribution of the newly developed mathematical framework is manifold.
Firstly, the developed model is able to separate the scanner-dependent excitation from the particle magnetization model, allowing better utilization of the imaging operator so that faster reconstruction methods can be developed. Secondly, it is now easier to investigate separately the effect of the magnetization model and that of the excitation sequence in the imaging model. Thus, an extended equilibrium magnetization model is introduced in this thesis and a series representation is developed for it. Furthermore, the exact relationship between the frequency components of the system function and the tensor products of Chebyshev polynomials is shown for excitations of the Lissajous-trajectory type. Finally, using the developed mathematical framework, the frequency representations of various excitation sequences known from the literature are calculated, which further increases the applicability of the model for magnetic particle imaging.

Item: The role of psychological basic need satisfaction in seafarers’ interaction with energy-efficiency decision support systems and preferences for automation types (2025-04-25) Zoubir, Mourad

This dissertation investigates Basic Psychological Need satisfaction and Preferences for Automation Types in maritime energy-efficient operations, focusing on seafarers’ interactions with decision support systems (DSS) for energy-efficient route planning. Given the need to reduce CO₂ emissions in the shipping industry, operational measures like energy-efficient route planning are essential. However, high workloads, safety demands, and conflicting stakeholder goals challenge effective implementation. DSS can potentially support seafarers in overcoming these barriers, but previous research highlights obstacles to adoption, particularly mismatches between technical systems and onboard realities, and scepticism towards automation.
This dissertation addresses these challenges from an engineering psychology perspective by systematically (1) describing route planning tasks and decision-making, (2) applying Basic Psychological Needs theory to analyse seafarers’ satisfaction of needs both at work and in technology usage, and (3) developing a scale to assess preferences for automation types. The dissertation comprises five publications, each contributing multiple empirical insights. The synopsis accompanying these articles gives a comprehensive background on energy efficiency in the maritime industry, task analysis, Basic Psychological Needs, and human-automation interaction, before discussing the implications of the research. Article 1 provides an introduction to the research landscape, presenting a systematic literature review on human factors related to onboard energy efficiency. Although not a core dissertation contribution, the review shows that prior research has focused mainly on stakeholder perspectives, with limited attention to seafarers and to the specific system properties supporting onboard operations. Article 2 builds on this foundation with a hierarchical task analysis of energy-efficient route planning, informed by guidelines and expert input (N = 3). An online study (N = 65) used this analysis to have seafarers rate tasks on subjective value, success expectancy, and cost, identifying tasks like tidal and weather routing as high-value but costly or of lower success expectancy. The study also assessed satisfaction of Basic Psychological Needs at work, revealing lower autonomy satisfaction than competence or relatedness, and preferences for automated Information Acquisition and Analysis but human decision selection. Post hoc analysis of interviews conducted in a simulator study (N = 22) for Article 3 further used the Critical Decision Method to explore seafarers’ decision-making in route planning, highlighting safety, regulatory adherence, practical experience, and transparency as priorities.
The detailed task analysis supported the external validity of the experimental studies, guiding autonomy-supportive DSS design and a differentiated analysis of autonomy facets to explore the autonomy-automation preference relationship. Article 3 presents an experimental study using a high-fidelity ship-bridge simulator, in which seafarers (N = 22) evaluated usability, user experience, and satisfaction of Basic Psychological Needs in technology usage with a route planning DSS versus a digital charting tool. The DSS performed similarly or better across most metrics, though autonomy satisfaction was lower. Thematic analysis of post-task interviews emphasised transparency and flexibility as crucial for user autonomy, steering the dissertation toward autonomy-supportive DSS feature development. Article 4 builds on these insights through a simulator study with experienced seafarers (N = 18) and an international online study (N = 48). Comparing a charting tool, a "standard" DSS, and a DSS with route adjustability (an autonomy-support feature), results showed that while most metrics improved between the charting tool and the standard DSS, only the DSS with route adjustability significantly enhanced autonomy satisfaction in technology usage and trust. The correlation between autonomy at work and decision selection preferences from Article 2 was not replicated; however, the lower autonomy satisfaction at work was confirmed. Thematic analysis of the simulator study interviews further differentiated facets of autonomy in technology use, using the Dimensions of Autonomy in Human-Algorithm Interaction model, which suggested that algorithm comprehensiveness, usability, user empowerment, and collaborative workflows could potentially be leveraged to enhance autonomy. This article demonstrates how human-centred design can identify and address Basic Psychological Need frustrations in technology use.
Article 5 details the development and validation of the Preference for Automation Types Scale (PATS), used in Articles 2 through 5. Based on the Model of Types and Levels of Automation, the PATS differentiates preferences for distinct automation types. Validation studies across three samples, including seafarers, students using generative AI for essay writing (N = 107), and students using a DSS for vacation planning (N = 126), demonstrated the scale’s dimensionality, reliability, and construct validity. The scale effectively assessed preferences both as a human vs. automation dichotomy and as distinctions between specific automation types across contexts, making it a valuable tool for aligning a system’s automation with users’ preferences. The General Discussion integrates findings from all studies, addressing theoretical implications for engineering psychology and human factors research. It underscores the need for autonomy-supportive technology, especially where autonomy needs at work are frustrated, and highlights that traditional user experience and usability measures are insufficient for evaluating complex automation systems. The broader implications of the PATS as a preference measure are also explored: it could assess user inclinations for or against specific automation features even where overall trust is adequate, enabling cross-context comparisons in areas with varied automation demands, such as transportation and healthcare. Practical recommendations for the maritime industry include DSS design principles such as transparency (e.g. clear communication of algorithmic decisions) and adaptability through adjustable automation. Additionally, international maritime policies should promote human-centred design by standardising usability testing and establishing transparency standards. In conclusion, this dissertation contributes to engineering psychology research on Basic Psychological Needs in technology usage and human-automation interaction.
It provides a comprehensive framework for human-centred DSS design, offering insights applicable to other safety-critical domains and supporting the broader goal of mitigating climate change through enhanced energy-efficient operations in the shipping industry.

Item Advanced sensor fusion methods with applications to localization and navigation (2025-03-18) Fetzer, Toni

We use sensors to track how many steps we take during the day or how well we sleep. Sensor fusion methods are used to draw these conclusions. A particularly difficult application is indoor localization, i.e. finding a person’s position within a building. This is mainly due to the many degrees of freedom of human movement and the physical properties of sensors inside buildings. Suitable approaches to sensor fusion for the purpose of self-localization using a smartphone are the subject of this thesis. To best address the complexity of this problem, a non-linear and non-Gaussian state space must be assumed. For the required position estimation, we therefore focus on the class of particle filters and build a novel generic filter framework on top of it. The special feature of this framework is its modular approach and its low requirements towards the sensor and movement models. In this work, we investigate models for Wi-Fi and Bluetooth RSSI measurements using radio propagation models; the relatively new standard Wi-Fi FTM, which is explicitly designed for localization purposes; the barometer, to determine floor changes as accurately as possible; and activity recognition, to find out what the pedestrian is doing, e.g. ascending stairs. Human motion is then modeled in a movement model using IMU data. Here we propose two approaches: a regular tessellated grid graph and an irregular tessellated navigation mesh. From these we formulate our proposal for an indoor localization system (ILS). However, some fundamental problems of the particle filter lead to critical errors.
These can be a multimodal density to be estimated, unbalanced sensor models, or so-called sample impoverishment. Compensating for, or in the best case eliminating, these errors through advanced sensor fusion methods is the main contribution of this thesis. The most important approach in this context is our adaptation of an interacting multiple model particle filter (IMMPF) to the requirements of indoor localization. This results in a completely new approach to the formulation of an ILS. Using quality metrics, it is possible to dynamically switch between arbitrarily formulated particle filters running in parallel. Furthermore, we explicitly propose several approaches from the field of particle distribution optimization (PDO) to avoid the sample impoverishment problem. In particular, the support filter approach (SFA), which is also based on the IMMPF principle, leads to excellent position estimates even under the most difficult conditions, as extensive experiments show.

Item Integrating humans and artificial intelligence in diagnostic tasks (2025) Schrills, Tim Philipp Peter

This dissertation investigates the integration of humans and artificial intelligence (AI) in diagnostic tasks, focusing on user experience and interaction in explainable AI (XAI) systems. Central to this research is the development of the Subjective Information Processing Awareness (SIPA) concept, which deals with user experience in automated information processing. The work addresses the increasing reliance on AI for automating information processing in critical domains such as healthcare, where transparency and human oversight may be enabled through explainable systems. Drawing on theories of human-automation interaction, this research develops and validates a model of integrated human-AI information processing.
Four empirical studies explore automation-related user experience in different contexts: digital contact tracing, automated insulin delivery, AI-supported pattern recognition, and AI-based diagnosis. The findings highlight the psychological impacts of AI explanations on trust, situation awareness, and decision-making. Based on these empirical findings, the dissertation discusses the concept of diagnosticity as a central metric for successful human-AI integration and proposes a framework for designing XAI systems that enhance user experience by aligning with human information processing. The dissertation concludes with practical guidelines for developing human-centered AI systems, emphasizing the importance of SIPA, user awareness, system transparency, and maintaining human control in automated diagnostic processes.

Item Lightweight, transparent, and uncertainty-aware deep learning for diabetic retinopathy grading (2025) Siebert, Marlin Sebastian

Item Enabling research data management for non IT-professionals (2025-02-16) Schiff, Simon

In almost all academic fields, results are derived from found evidence such as objects to be digitized, case studies, observations, experiments, or research data. Ideally, results are linked to their evidence to ease data governance and the reproducibility of results, and are publicly stored in a research data repository so that they can themselves be linked as evidence for new results. This linking has created a huge mesh of data over the years. Searching for information, deciding whether found information is relevant, and then using relevant information to produce results costs a lot of time in such a mesh of data. Because a high investment of time is associated with high costs, funding agencies such as the German Research Foundation (Deutsche Forschungsgemeinschaft; DFG) or the Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung; BMBF) demand a data management plan (DMP).
A DMP is designed to reduce the costs of projects submitted to a funding agency and to avoid future costs when data repositories are to be reused. Nevertheless, a DMP is often not fully implemented because doing so is too costly, which in the long run leads to a mesh of data. In this thesis, we identify problems and present solutions usable by non-IT experts, so that less time is spent on solving the problems that arise when implementing a DMP at each project’s repository and on coping with a huge mesh of data across many repositories. According to our observations, humanities scholars produce research data that are meant to be printed later or uploaded to a repository. The problems to be solved at a single repository, independent of other repositories, are manifold. Data to be printed are encoded with a markup language for presentation purposes and are not formatted in a machine-interpretable way. We not only show that such formatted data can be structured with a parser so as to be interpretable by machines, but also what possibilities open up from the structured data. Structured data are automatically combined, linked, transformed into other formats, and visualized on the web. Visualized data can be cited and annotated to help others assess their relevance. Once the problems are solved at each repository, we show how to cope with data linked across repositories. This is achieved by designing a human-aware information retrieval (IR) agent that can search a mesh of data for relevant information. We discuss how the interaction of a user with such an IR agent can be optimized with human-aware collaborative planning strategies.

Item Weak convergence of the Milstein scheme for semi-linear parabolic stochastic evolution equations (2025) Kastner, Felix

The numerical analysis of the Milstein scheme for stochastic ordinary differential equations (SDEs) is relatively well understood. It converges with both strong and weak order one.
However, much less is known about the Milstein scheme and its variants when applied to stochastic partial differential equations or, more generally, stochastic evolution equations. This thesis focuses on the weak convergence of the Milstein scheme in the latter setting. We prove that, similar to the SDE case, it achieves an order of almost one; specifically, an order of 1 − ε for all ε > 0. More concretely, we work in the semigroup framework introduced by Da Prato and Zabczyk and examine the approximation of mild solutions of equations of semi-linear parabolic type. In addition, we allow the drift coefficient of the evolution equation to take values in certain distribution spaces associated with the dominating linear operator. In that case, the order of convergence depends on the regularity of the coefficients and tends to zero as the regularity decreases. The proof employs elements of the mild stochastic calculus recently introduced by Da Prato, Jentzen and Röckner (Trans. Amer. Math. Soc., 372(6), 2019) and crucially depends on recent results on the regularity of solutions to the associated infinite-dimensional Kolmogorov backward equation by Andersson, Hefter, Jentzen and Kurniawan (Potential Anal., 50(3), 2019). It builds on work by Jentzen and Kurniawan investigating Euler-type schemes (Found. Comput. Math., 21(2), 2021).
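For context, the classical Milstein scheme referred to in this abstract, in its standard textbook form for a scalar SDE dX_t = a(X_t) dt + b(X_t) dW_t with step size h (this is the well-known finite-dimensional formulation, not a result taken from the thesis itself), reads:

```latex
% Milstein scheme for the scalar SDE dX_t = a(X_t)\,dt + b(X_t)\,dW_t
X_{n+1} = X_n + a(X_n)\,h + b(X_n)\,\Delta W_n
          + \tfrac{1}{2}\, b(X_n)\, b'(X_n)\bigl((\Delta W_n)^2 - h\bigr),
\qquad
\Delta W_n = W_{t_{n+1}} - W_{t_n} \sim \mathcal{N}(0, h).
```

The correction term involving b b′ is what raises the strong convergence order from 1/2 (Euler-Maruyama) to 1; the thesis concerns the weak order of the analogous scheme in the infinite-dimensional semigroup setting.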