Informatik/Technik

Permanent URI for this section: https://epub.uni-luebeck.de/handle/zhb_hl/4


Recent publications

Now showing 1 - 20 of 344
  • Item
    Novel Machine Learning Methods for Video Understanding and Medical Analysis
    (2025-06-26) Hu, Yaxin
    Artificial intelligence has developed rapidly over the past decade and has entered nearly every aspect of life. New applications in areas such as human-computer interaction, virtual reality, autonomous driving and intelligent medical systems have emerged in large numbers. Video is high-dimensional data with one more dimension than images and therefore requires more computing resources. As more and more high-quality large-scale video datasets are released, video understanding has become a cutting-edge research direction in the computer vision community. Action recognition is one of the most important tasks in video understanding, and many successful network architectures exist for it. In our work, we focus on proposing new designs and architectures for video understanding and investigating their applications in medicine. We introduce a novel RGBt sampling strategy that fuses temporal information into single frames without increasing the computational load and explore different color sampling strategies to further improve network performance. We find that frames with temporal information obtained by fusing the green channels of different frames achieve the best results. We use tubes of different sizes to embed richer temporal information into tokens without increasing the computational load. We also introduce a novel bio-inspired neuron model, the MinBlock, to make the network more information-selective. Furthermore, we propose a spatiotemporal architecture that slices videos in space-time and thus enables 2D-CNNs to directly extract temporal information. All of the above methods are evaluated on at least two benchmark datasets and all perform better than the baselines. We also focus on applying our networks in medicine: we use our slicing 2D-CNN architecture to analyze glaucoma and visual impairments, and we find that visual impairments may affect human walking patterns, making video analysis relevant for diagnosis. We also design a machine learning model to diagnose psychosis and show that it is possible to predict whether clinical high-risk patients will actually develop a psychosis.
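A minimal NumPy sketch of the kind of channel-level temporal fusion described above: the green channels of three temporally spaced frames are stacked into a single 3-channel image, so a plain 2D network receives temporal context at no extra computational cost. The function name, the frame spacing and the choice of exactly three frames are illustrative assumptions, not the thesis's exact RGBt strategy.

```python
import numpy as np

def fuse_green_channels(frames, stride=2):
    """Stack the green channels of three temporally spaced frames into one image.

    frames: array of shape (T, H, W, 3) holding an RGB video clip.
    Returns an array of shape (H, W, 3) that a plain 2D-CNN can consume while
    still seeing temporal information.  Spacing and frame count are illustrative.
    """
    t = len(frames) // 2                          # centre frame of the clip
    picks = [t - stride, t, t + stride]
    green = [frames[i][..., 1] for i in picks]    # green channel of each picked frame
    return np.stack(green, axis=-1)

clip = np.random.randint(0, 256, size=(16, 112, 112, 3), dtype=np.uint8)  # dummy clip
fused = fuse_green_channels(clip)
print(fused.shape)   # (112, 112, 3)
```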
  • Item
    Non-invasive estimation of respiratory effort
    (2025) Graßhoff, Jan
  • Item
    Cutting-edge precision
    (2025) Erben, Niclas
  • Item
    Survival and parasite spread in a spatial host-parasite model with host immunity
    (2025) Franck, Sascha Josef
    We introduce a stochastic model for the invasion of a parasite population in a spatially structured host population which includes an individual-based adaptive immune response. We call this the "Spatial Infection Model with Host Immunity", or SIMI for short. In the SIMI, parasites move as independent simple random walks on a graph until they reach a vertex inhabited by a host. With a given probability, the host repels the infection, kills the parasite, and adapts its probability to repel the next infection. After a successful infection attempt, both the host and the attacking parasite die, and only the parasite leaves a random number of offspring. We study the SIMI on the integer line and show that parasites have a positive survival probability if and only if the mean number of offspring is greater than the mean number of infection attempts needed. Furthermore, we study the speed at which the parasites invade the host population. If the probability that a host, after repelling an infection, also repels the next one does not grow fast enough, then parasites propagate across the host population at a linear speed. However, if that probability grows quickly enough, the propagation speed is polynomial with an exponent less than 1. Finally, we investigate the SIMI on higher-dimensional graphs with hosts that are either totally immune and never get infected, or get infected on the first attempt. We show that the survival probability undergoes a phase transition in the frequency of totally immune hosts.
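A toy simulation may help make the dynamics concrete. The sketch below runs a simplified SIMI on a finite segment of the integer line; the initial repel probability, its growth factor, the Poisson offspring law and all other constants are illustrative assumptions rather than the parametrization analysed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_simi(steps=500, p0=0.3, p_growth=1.5, mean_offspring=2.0, half_width=300):
    # One living host on every vertex of {-half_width, ..., half_width} except the origin,
    # each starting with repel probability p0.
    repel = {x: p0 for x in range(-half_width, half_width + 1) if x != 0}
    parasites = [0] * 5                 # a few parasites start at the empty origin
    front = 0                           # rightmost vertex ever visited by a parasite
    for _ in range(steps):
        survivors = []
        for x in parasites:
            x += rng.choice((-1, 1))                       # simple random walk step
            front = max(front, x)
            if x in repel:                                 # vertex with a living host
                if rng.random() < repel[x]:                # host repels, kills parasite, adapts
                    repel[x] = min(1.0, repel[x] * p_growth)
                else:                                      # successful infection
                    del repel[x]                           # host dies ...
                    survivors.extend([x] * rng.poisson(mean_offspring))  # ... parasite leaves offspring
            else:
                survivors.append(x)                        # empty vertex: parasite keeps walking
        parasites = survivors
        if not parasites:
            break
    return bool(parasites), front

alive, front = simulate_simi()
print(f"parasites still alive: {alive}, rightmost vertex reached: {front}")
```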
  • Item
    Invariant integration for prior-knowledge enhanced deep learning architectures
    (2025) Rath, Matthias
    Incorporating prior knowledge into Deep Neural Networks is a promising approach to improve their sample efficiency by effectively limiting the search space the learning algorithm needs to cover. This reduces the number of samples a network needs to be trained on to reach a specific performance. Geometrical prior knowledge is knowledge about input transformations that affect the output in a predictable way, or not at all. It can be built into Deep Neural Networks in a mathematically sound manner by enforcing in- or equivariance. Equivariance is the property of a map to behave predictably under input transformations. Convolutions are an example of a translation-equivariant map, where a translation of the input results in a shifted output. Group-equivariant convolutions are a generalization achieving equivariance towards more general transformation groups such as rotations or flips. Using group-equivariant convolutions within Neural Networks embeds the desired equivariance in addition to translation equivariance. Invariance is a closely related concept, where the output of a function does not change when its input is transformed. Invariance is often a desirable property of a feature extractor in the context of classification. While the extracted features need to encode the information required to discriminate between different classes, they should be invariant to intra-class variations, i.e., to transformations that map samples within the same class subspace. In the context of Deep Neural Networks, the required invariant representations can be obtained with mathematical guarantees by applying group-equivariant convolutions followed by global pooling over the group and spatial domains. While pooling guarantees invariance, it also discards information and is thus not ideal. In this dissertation, we investigate the transition from equi- to invariance within Deep Neural Networks that leverage geometrical prior knowledge. To this end, we replace the spatial pooling operation with Invariant Integration, a method that guarantees invariance while adding targeted model capacity rather than destroying information. We first propose an Invariant Integration Layer for rotations based on the group average calculated with monomials. The layer can readily be used within a Neural Network and allows backpropagating through it. The monomial parameters are selected either by iteratively optimizing the least-squares error of a linear classifier, or based on neural network pruning methods. We then replace the monomials with functions that are more commonly encountered in the context of Neural Networks, such as learnable weighted sums or self-attention. We thereby streamline the training procedure of Neural Networks enhanced with Invariant Integration. Finally, we expand Invariant Integration towards flips and scales, highlighting the universality of our approach. We further propose a multi-stream architecture that is able to leverage invariance to multiple transformations at once. This approach allows us to efficiently combine multiple invariances and select the best-fitting invariant solution for the specific problem at hand. The conducted experiments show that applying Invariant Integration in combination with group-equivariant convolutions significantly boosts the sample efficiency of Deep Neural Networks, improving performance when the amount of available training data is limited.
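The invariance mechanism can be illustrated with a small NumPy sketch for the four-fold rotation group C4: averaging a monomial of feature-map entries over the group orbit yields a quantity that is exactly invariant to 90-degree rotations of the input. The sampled positions and exponents below are arbitrary illustrative choices, not the learnable Invariant Integration Layer proposed in the thesis.

```python
import numpy as np

def c4_orbit(feature_map):
    """All four 90-degree rotations of a square 2D feature map (the C4 group orbit)."""
    return [np.rot90(feature_map, k) for k in range(4)]

def invariant_monomial(feature_map, exponents):
    """Group average of a monomial over the C4 orbit.

    For each rotation g, the monomial multiplies a few fixed feature-map entries
    raised to the given exponents; averaging over the orbit makes the result
    exactly invariant to 90-degree rotations of the input.
    """
    h, w = feature_map.shape
    positions = [(0, 0), (h // 2, w // 2), (h - 1, w - 1)]   # arbitrary sampling points
    values = []
    for rotated in c4_orbit(feature_map):
        monomial = 1.0
        for (i, j), b in zip(positions, exponents):
            monomial *= rotated[i, j] ** b
        values.append(monomial)
    return np.mean(values)

x = np.random.rand(8, 8)
print(np.isclose(invariant_monomial(x, (1, 2, 1)),
                 invariant_monomial(np.rot90(x), (1, 2, 1))))   # True: rotation-invariant
```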
  • Item
    MPC-based vehicle trajectory tracking using machine learning for parameter optimization and fault detection
    (2025) Lubiniecki, Toni
    This thesis explores advancements in trajectory tracking control and fault detection within automated vehicle systems, focusing on two main areas: developing a learning-based model predictive control algorithm to enhance tracking accuracy and evaluating various neural networks as fault detection systems for trajectory tracking controllers. Both parts are assessed in a high-fidelity simulation environment. The first part presents two adaptive model predictive controllers that use vehicle information, trajectory data, and tracking information to adapt the vehicle model within the model predictive control system, compensating for tracking accuracy lost due to model mismatches. One approach employs a trajectory-dynamic lookup table, while the more advanced approach uses Gaussian process regression with clustering. A thorough simulation study on real-world racetracks with varying dynamics demonstrates that the advanced approach effectively manages condition changes, significantly improves tracking performance, handles unknown trajectories with similar improvements, and memorizes adapted behavior through clustering. The second part evaluates the effectiveness of four types of neural networks as fault detection systems. These networks detect changes in the vehicle, environmental shifts, or discrepancies between the applied vehicle model and the real vehicle. Trained a priori through supervised learning, the networks use tracking information, controller outputs, and vehicle data. The evaluation distinguishes between known and unknown fault conditions. The results suggest that neural networks are generally suitable as fault detection systems. Differences in effectiveness among the network types are minor for known fault conditions but more significant for unknown conditions. Integrating adaptive model predictive control and neural network-based fault detection shows promise for developing robust and fault-tolerant trajectory tracking systems that enhance accuracy and maintain operational integrity in dynamic environments.
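As an illustration of the second, more advanced adaptation idea, the sketch below fits a Gaussian process to the residual between a nominal vehicle model and observed behaviour, which an MPC could then use to correct its predictions. The input features, the synthetic data and the kernel choice are assumptions made for illustration only; the thesis's clustered GPR scheme is not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical training data: quantities observed while driving (here: velocity and
# steering angle) and the residual between the nominal vehicle model's predicted
# lateral acceleration and the measured one.  All numbers are synthetic placeholders.
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(5, 30, 200),        # velocity [m/s]
                     rng.uniform(-0.3, 0.3, 200)])   # steering angle [rad]
residual = 0.05 * X[:, 0] * X[:, 1] + rng.normal(0, 0.02, 200)  # unmodelled dynamics + noise

gp = GaussianProcessRegressor(kernel=RBF(length_scale=[10.0, 0.1]) + WhiteKernel(1e-3),
                              normalize_y=True)
gp.fit(X, residual)

# Inside the MPC loop, the learned residual and its uncertainty could be used to
# correct the nominal model prediction before solving the optimal control problem.
mean, std = gp.predict(np.array([[20.0, 0.1]]), return_std=True)
print(f"predicted model error: {mean[0]:.3f} +/- {std[0]:.3f}")
```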
  • Item
    A Fourier-analytical approach for field-free-point magnetic particle imaging
    (2025) Maaß, Marco
    Magnetic particle imaging is a tracer-based medical imaging technique that measures the spatial distribution of superparamagnetic nanoparticles. Alternating magnetic fields with different excitation sequences are used to measure the nanoparticle distribution in a scanner. Usually, the simplified Langevin model of paramagnetism is used as a first approximation for the complicated nonlinear magnetization behavior of nanoparticles. Although the modified Langevin model of paramagnetism can provide suitable image reconstructions for one-dimensional excitation, the situation is more complicated for higher-dimensional excitation, as several aspects cannot be fully explained by the Langevin model. A well-known example is the spatial similarity of the frequency components of the system function with tensor products of Chebyshev polynomials. This was observed for higher-dimensional excitation of the Lissajous trajectory type and remained unproven for almost ten years. With the aim of explaining such observations mathematically, this thesis makes an important contribution to the mathematical foundations of magnetic particle imaging. To this end, the spatio-temporal system function based on the Langevin model is transformed into the frequency domain using various concepts of Fourier analysis. The scientific contribution of the newly developed mathematical framework is manifold. Firstly, the developed model is able to separate the scanner-dependent excitation from the particle magnetization model, allowing better utilization of the imaging operator so that faster reconstruction methods can be developed. Secondly, it is now easier to investigate the effect of the magnetization model and that of the excitation sequence in the imaging model separately. Thus, an extended equilibrium magnetization model is introduced in this thesis and a series representation is developed for it. Furthermore, the exact relationship between the frequency components of the system function and the tensor products of Chebyshev polynomials is shown for excitations of the Lissajous trajectory type. Finally, using the developed mathematical framework, the frequency representations of various excitation sequences known from the literature are calculated, which further increases the applicability of the model for magnetic particle imaging.
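For reference, the Langevin model of paramagnetism referred to above describes the mean magnetic moment of the particle ensemble as follows (generic notation; the thesis's exact parametrization and its modifications are not reproduced here):

```latex
% Standard Langevin model of paramagnetism (generic notation):
\[
  \mathcal{L}(\xi) = \coth(\xi) - \frac{1}{\xi},
  \qquad
  \bar{m}(H) = m_0\,\mathcal{L}\!\left(\frac{\mu_0 m_0 H}{k_B T}\right),
\]
% where $H$ is the applied field strength, $m_0$ the particle's magnetic moment,
% $\mu_0$ the vacuum permeability, $k_B$ Boltzmann's constant and $T$ the temperature.
```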
  • Item
    The role of psychological basic need satisfaction in seafarers’ interaction with energy-efficiency decision support systems and preferences for automation types
    (2025-04-25) Zoubir, Mourad
    This dissertation investigates Basic Psychological Need satisfaction and Preferences for Automation Types in maritime energy-efficient operations, by focusing on seafarers’ interactions with decision-support systems (DSS) for energy-efficient route planning. Given the need to reduce CO₂ emissions in the shipping industry, operational measures like energy-efficient route planning are essential. However, high workloads, safety demands, and conflicting stakeholder goals challenge effective implementation. DSS can potentially support seafarers in overcoming these barriers, but previous research highlights obstacles to adoption, particularly mismatches between technical systems and onboard realities or scepticism towards automation. This dissertation addresses these challenges from an engineering psychology perspective by systematically (1) describing route planning tasks and decision-making, (2) applying Basic Psychological Needs theory to analyse seafarers’ satisfaction of needs both at work and in technology usage, and (3) developing a scale to assess preferences for automation types. The dissertation comprises five publications, each contributing multiple empirical insights. The synopsis accompanying these articles gives a comprehensive background on energy efficiency in the maritime industry, task analysis, Basic Psychological Needs and human-automation interaction, before discussing implications of the research. Article 1 provides an introduction to the research landscape, presenting a systematic literature review on human factors related to onboard energy efficiency. Although not a core dissertation contribution, the review shows prior research focused mainly on stakeholder perspectives, with limited attention to seafarers and specific system properties supporting onboard operations. Article 2 builds on this foundation with a hierarchical task analysis of energy-efficient route planning, informed by guidelines and expert input (N = 3). An online study (N = 65) used this analysis to have seafarers rate tasks on subjective value, success expectancy, and cost, identifying tasks like tidal and weather routing as high-value but costly or of lower success expectancy. The study also assessed Basic Psychological Need at work satisfaction, revealing lower autonomy satisfaction than competence or relatedness, and preferences for automated Information Acquisition and Analysis but human decision selection. Post hoc analysis of interviews conducted in a simulator study (N = 22) for Article 3 further used the Critical Decision Method to explore seafarers’ decision-making in route planning, highlighting safety, regulatory adherence, practical experience, and transparency as priorities. The detailed task analysis supported the external validity of the experimental studies, guiding autonomy-supportive DSS design and a differentiated analysis of autonomy facets to explore the autonomy-automation preference relationship. Article 3 presents an experimental study using a high-fidelity ship-bridge simulator, where seafarers (N = 22) evaluated usability, user experience, and Basic Psychological Need in technology usage satisfaction with a route planning DSS versus a digital charting tool. The DSS performed similarly or better across most metrics, though autonomy satisfaction was lower. Thematic analysis of post-task interviews emphasised transparency and flexibility as crucial for user autonomy, steering the dissertation toward autonomy-supportive DSS feature development. 
Article 4 builds on these insights through a simulator study with experienced seafarers (N = 18) and an international online study (N = 48). Comparing a charting tool, a “standard” DSS, and a DSS with route adjustability (an autonomy-support feature), results showed that while most metrics improved between the charting tool and the standard DSS, only the DSS with route adjustability significantly enhanced autonomy in technology usage satisfaction and trust. The correlation between autonomy at work and decision selection preferences found in Article 2 was not replicated; however, lower autonomy satisfaction at work was confirmed. Thematic analysis of simulator study interviews further differentiated facets of autonomy in technology use, using the Dimensions of Autonomy in Human-Algorithm Interaction model, which suggested that algorithm comprehensiveness, usability, user empowerment, and collaborative workflows could potentially be leveraged to enhance autonomy. This article demonstrates how human-centred design can identify and address Basic Psychological Need frustrations in technology use. Article 5 details the development and validation of the Preference for Automation Types Scale (PATS), used in Articles 2 through 5. Based on the Model of Types and Levels of Automation, PATS differentiates preferences for automation types. Validation studies across three samples, including seafarers and students using generative AI for essay writing (N = 107) or a DSS for vacation planning (N = 126), demonstrated the PATS’ dimensionality, reliability, and construct validity. The scale effectively assessed preferences as a human vs. automation dichotomy while distinguishing specific automation types across contexts, making it a valuable tool for aligning a system’s automation with users’ preferences. The General Discussion integrates findings from all studies, addressing theoretical implications for engineering psychology and human factors research. It underscores the need for autonomy-supportive technology, especially where autonomy needs at work are frustrated, and highlights that traditional user experience and usability measures are insufficient for evaluating complex automation systems. The broader implications of the PATS as a preference measure are explored: it could assess user inclinations for or against specific automation features even where overall trust is adequate, enabling cross-context comparisons in areas with varied automation demands, such as transportation and healthcare. Practical recommendations for the maritime industry include DSS design principles like transparency (e.g. clear communication of algorithmic decisions) and adaptability through adjustable automation. Additionally, international maritime policies should promote human-centred design by standardising usability testing and establishing transparency standards. In conclusion, this dissertation contributes to engineering psychology research on Basic Psychological Needs in technology usage and human-automation interaction. It provides a comprehensive framework for human-centred DSS design, offering insights applicable to other safety-critical domains and supporting the broader goal of mitigating climate change through enhanced energy-efficient operations in the shipping industry.
  • Item
    Advanced sensor fusion methods with applications to localization and navigation
    (2025-03-18) Fetzer, Toni
    We use sensors to track how many steps we take during the day or how well we sleep. Sensor fusion methods are used to draw these conclusions. A particularly difficult application is indoor localization, i.e. finding a person’s position within a building. This is mainly due to the many degrees of freedom of human movement and the physical properties of sensors inside buildings. Suitable approaches for sensor fusion for the purpose of self-localization using a smartphone are the subject of this thesis. To best address the complexity of this problem, a non-linear and non-Gaussian distributed state space must be assumed. For the required position estimation, we therefore focus on the class of particle filters and build a novel generic filter framework on top of it. The special feature of this framework is the modular approach and the low requirements on the sensor and movement models. In this work, we investigate models for Wi-Fi and Bluetooth RSSI measurements using radio propagation models, the relatively new standard Wi-Fi FTM, which is explicitly designed for localization purposes, the barometer to determine floor changes as accurately as possible, and activity recognition to find out what the pedestrian is doing, e.g., ascending stairs. The human motion is then modeled in a movement model using IMU data. Here we propose two approaches: a regular tessellated grid graph and an irregular tessellated navigation mesh. From these we formulate our proposal for an indoor localization system (ILS). However, some fundamental problems of the particle filter lead to critical errors. These can be a multimodal density to be estimated, unbalanced sensor models or the so-called sample impoverishment. Compensation, or in the best case elimination, of these errors by advanced sensor fusion methods is the main contribution of this thesis. The most important approach in this context is our adaptation of an interacting multiple model particle filter (IMMPF) to the requirements of indoor localization. This results in a completely new approach to the formulation of an ILS. Using quality metrics, it is possible to dynamically switch between arbitrarily formulated particle filters running in parallel. Furthermore, we explicitly propose several approaches from the field of particle distribution optimization (PDO) to avoid the sample impoverishment problem. In particular, the support filter approach (SFA), which is also based on the IMMPF principle, leads to excellent position estimates even under the most difficult conditions, as extensive experiments show.
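To make the filtering machinery concrete, the sketch below implements a single predict-weight-resample step of a bootstrap particle filter driven by RSSI-like observations. The random-walk motion model, the log-distance path-loss model, the access-point positions and all constants are illustrative assumptions, not the sensor and movement models developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical Wi-Fi access points and path-loss parameters for the observation model.
AP_POSITIONS = np.array([[0.0, 0.0], [20.0, 0.0], [0.0, 15.0]])
TX_POWER, PATH_LOSS_EXP, RSSI_STD = -40.0, 2.0, 4.0   # dBm at 1 m, exponent, noise std

def expected_rssi(positions):
    """Log-distance path-loss model: expected RSSI of each AP for each 2D position."""
    d = np.linalg.norm(positions[:, None, :] - AP_POSITIONS[None, :, :], axis=-1)
    return TX_POWER - 10.0 * PATH_LOSS_EXP * np.log10(np.maximum(d, 0.1))

def particle_filter_step(particles, weights, rssi_measurement, step_std=0.5):
    # 1) Predict: diffuse particles with a random-walk pedestrian motion model.
    particles = particles + rng.normal(0.0, step_std, particles.shape)
    # 2) Update: weight particles by the likelihood of the observed RSSI vector.
    err = expected_rssi(particles) - rssi_measurement
    loglik = -0.5 * np.sum((err / RSSI_STD) ** 2, axis=1)
    weights = weights * np.exp(loglik - loglik.max())
    weights /= weights.sum()
    # 3) Resample when the effective sample size drops (mitigates degeneracy,
    #    but does not by itself solve sample impoverishment).
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles, weights = particles[idx], np.full(len(particles), 1.0 / len(particles))
    return particles, weights

particles = rng.uniform([0, 0], [20, 15], size=(1000, 2))
weights = np.full(len(particles), 1.0 / len(particles))
true_pos = np.array([[12.0, 7.0]])
measurement = expected_rssi(true_pos)[0] + rng.normal(0, RSSI_STD, len(AP_POSITIONS))
particles, weights = particle_filter_step(particles, weights, measurement)
print("position estimate:", np.average(particles, axis=0, weights=weights))
```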
  • Item
    Integrating humans and artificial intelligence in diagnostic tasks
    (2025) Schrills, Tim Philipp Peter
    This dissertation investigates the integration of humans and artificial intelligence (AI) in diagnostic tasks, focusing on user experience and interaction in explainable AI (XAI) systems. Central to this research is the development of the Subjective Information Processing Awareness (SIPA) concept, which deals with user experience in automated information processing. The work addresses the increasing reliance on AI for automating information processing in critical domains such as healthcare, where transparency and human oversight may be enabled through explainable systems. Drawing on theories of human-automation interaction, this research develops and validates a model of integrated human-AI information processing. Four empirical studies explore automation-related user experience in different contexts: digital contact tracing, automated insulin delivery, AI-supported pattern recognition, and AI-based diagnosis. The findings highlight the psychological impacts of AI explanations on trust, situation awareness, and decision-making. Based on the empirical findings, this dissertation discusses the concept of diagnosticity as a central metric for successful human-AI integration and proposes a framework for designing XAI systems that enhance user experience by aligning with human information processing. The dissertation concludes with practical guidelines for developing human-centered AI systems, emphasizing the importance of SIPA, user awareness, system transparency, and maintaining human control in automated diagnostic processes.
  • Item
    Enabling research data management for non IT-professionals
    (2025-02-16) Schiff, Simon
    In almost all academic fields, results are derived from found evidence such as objects to be digitized, case studies, observations, experiments, or research data. Ideally, results are linked to their evidence to ease data governance and reproducibility, and are publicly stored in a research data repository so that they can themselves be linked as evidence for new results. This linking has created a huge mesh of data over the years. Searching for information, deciding whether found information is relevant, and then using relevant information to produce results costs a lot of time in such a mesh of data. Because a high investment of time is associated with high costs, funding agencies such as the German Research Foundation (Deutsche Forschungsgemeinschaft; DFG) or the Federal Ministry of Education and Research (Bundesministerium für Bildung und Forschung; BMBF) demand a data management plan (DMP). A DMP is designed to reduce the costs of projects submitted to a funding agency and to avoid future costs when data repositories are to be reused. Nevertheless, a DMP is often not fully implemented because doing so is too costly, which in the long run leads to a mesh of data. In this thesis, we identify problems and present solutions usable by non-IT experts to spend less time on solving the problems that arise when implementing a DMP at each project’s repository and when coping with a huge mesh of data across many repositories. According to our observations, humanities scholars produce research data that are meant to be printed later or uploaded to a repository. The potential problems to be solved at a repository, independent of other repositories, are manifold. Data to be printed is encoded with a markup language for illustration purposes and is not formatted in a machine-interpretable way. We not only show that such formatted data can be structured with a parser so that it can be interpreted by machines, but also what possibilities the structured data opens up. Structured data is automatically combined, linked, transformed into other formats, and visualized on the web. Visualized data can be cited and annotated to help others assess relevance. Once the problems are solved at each repository, we show how we cope with data linked across repositories. This is achieved by designing a human-aware information retrieval (IR) agent that can search a mesh of data for relevant information. We discuss in what way the interaction of a user with such an IR agent can be optimized with human-aware collaborative planning strategies.
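As a toy illustration of the parsing step described above, the sketch below turns a small print-oriented markup snippet into structured records that machines can combine, transform, and visualize. The markup vocabulary and field names are invented for this example and are not the formats or tools developed in the thesis.

```python
import json
import xml.etree.ElementTree as ET

# Invented print-oriented markup; only meant to illustrate the parsing idea.
markup = """
<edition>
  <entry n="1"><lemma>aqua</lemma><gloss>water</gloss><page>12</page></entry>
  <entry n="2"><lemma>ignis</lemma><gloss>fire</gloss><page>13</page></entry>
</edition>
"""

root = ET.fromstring(markup)
records = [
    {
        "id": entry.get("n"),
        "lemma": entry.findtext("lemma"),
        "gloss": entry.findtext("gloss"),
        "page": int(entry.findtext("page")),
    }
    for entry in root.iter("entry")
]

# Once structured, the same records can be linked, transformed into other formats,
# visualized on the web, or cited and annotated.
print(json.dumps(records, indent=2))
```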
  • Item
    Weak convergence of the Milstein scheme for semi-linear parabolic stochastic evolution equations
    (2025) Kastner, Felix
    The numerical analysis of the Milstein scheme for stochastic ordinary differential equations (SDEs) is relatively well understood. It converges with both strong and weak order one. However, much less is known about the Milstein scheme and its variants when applied to stochastic partial differential equations or more general stochastic evolution equations. This thesis focuses on the weak convergence of the Milstein scheme in the latter setting. We prove that, similar to the SDE case, it also achieves an order of almost one — specifically, an order of 1 − ε for all ε > 0. More concretely, we work in the semigroup framework introduced by Da Prato and Zabczyk and examine the approximation of mild solutions of equations of semi-linear parabolic type. In addition, we allow the drift coefficient of the evolution equation to take values in certain distribution spaces associated to the dominating linear operator. In that case, the order of convergence depends on the regularity of the coefficients and tends to zero as the regularity decreases. The proof employs elements of the mild stochastic calculus recently introduced by Da Prato, Jentzen and Röckner (Trans. Amer. Math. Soc., 372(6), 2019) and crucially depends on recent results on the regularity of solutions to the associated infinite-dimensional Kolmogorov backward equation by Andersson, Hefter, Jentzen and Kurniawan (Potential Anal., 50(3), 2019). It is based on work by Jentzen and Kurniawan investigating Euler-type schemes (Found. Comput. Math., 21(2), 2021).
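For context, the classical Milstein scheme for a scalar SDE is shown below in generic notation; this is the finite-dimensional scheme whose weak order the thesis extends to the infinite-dimensional semigroup setting, not the thesis's exact formulation.

```latex
% Classical Milstein step for a scalar SDE dX_t = a(X_t)\,dt + b(X_t)\,dW_t
% with step size h and Brownian increment \Delta W_n = W_{t_{n+1}} - W_{t_n}:
\[
  X_{n+1} = X_n + a(X_n)\,h + b(X_n)\,\Delta W_n
            + \tfrac{1}{2}\,b(X_n)\,b'(X_n)\bigl((\Delta W_n)^2 - h\bigr).
\]
```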
  • Item
    On Markov decision processes with the stochastic differential Bellman Equation
    (2025) Cakir, Merve Nur
    Stochastic differential equations play an important role in capturing the dynamics of complex systems, where uncertainty prevails in the form of noise. In complex systems, noise is abundant, but its exact behaviour is unknown; however, it can be simulated with stochastic processes. Stochastic calculi, such as the Itô formula, provide tools for navigating these systems. In this work, the adaptation of the Bellman equation, a cornerstone of dynamic programming, to the realm of stochastic differential equations is explored, facilitating the modeling of decision problems subject to noise. Value iteration and Q-learning, two well-known solution methods in machine learning, are extended to stochastic algorithms in order to approximate the solution of Markov decision processes with uncertainties modeled by the stochastic differential Bellman equation. These stochastic algorithms enable a realistic approach to modeling and solving decision problems in stochastic environments efficiently. The stochastic value iteration is applied when the environment is fully known, while the stochastic Q-learning extends its utility even to cases where the transition probabilities remain unknown. Through theoretical analyses and case studies, these algorithms demonstrate their efficacy and applicability, delivering meaningful results. Additionally, the stochastic Q-learning achieves superior rewards compared to the deterministic algorithm, indicating its ability to optimize decision processes in stochastic environments more effectively by exploring more states. Finally, the stochastic differential Bellman equation is formulated as a system of ordinary differential equations, providing an alternative solution. For this, the concept of the random dynamical system is explored, of which a stochastic differential equation is an example.
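For orientation, the classical tabular Bellman optimality equation and Q-learning update that the stochastic value iteration and stochastic Q-learning build on read as follows (generic notation; the continuous, noise-driven formulation of the thesis is not reproduced here):

```latex
% Classical (discrete, fully observed) forms that the stochastic differential
% variants generalize -- generic notation:
\[
  V^*(s) = \max_{a}\Bigl[\, r(s,a) + \gamma \sum_{s'} p(s' \mid s,a)\, V^*(s') \Bigr],
\]
\[
  Q_{t+1}(s_t,a_t) = Q_t(s_t,a_t)
    + \alpha_t \Bigl[\, r_{t+1} + \gamma \max_{a'} Q_t(s_{t+1},a') - Q_t(s_t,a_t) \Bigr].
\]
```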
  • Item
    Conceptual orthospaces
    (2024) Leemhuis, Mena
  • Item
    Advancing ultrasound image guidance
    (2024) Wulff, Daniel
  • Item
    E-nets as novel deep networks
    (2024) Grüning, Philipp