Informatik/Technik
Permanent URI for this section: https://epub.uni-luebeck.de/handle/zhb_hl/4
Browsing Informatik/Technik by institute/clinic "Institut für Neuro- und Bioinformatik"
Now showing 1 - 20 of 28
- Algorithmen zur automatisierten Analyse von in vitro-Stammzellpopulationen (2013) Becker, Tim
- (untitled item)
- (untitled item)
- Computational models and systems for gaze guidance (2010) Dorr, Michael
- Computergestützte Analyse von biologischen und bioinspirierten Signalverarbeitungs- und Wahrnehmungsprozessen (2008) Madany Mamlouk, Amir
- E-nets as novel deep networks (2024) Grüning, Philipp
- Efficient bio-inspired sensing (2018) Burciu, Irina
- Fast computation of genome distances (2021) Klötzl, Fabian
- Feature-driven emergence of model graphs for object recognition and categorization (2006) Westphal, Günter
- Gaze guidance for augmented vision (2014) Pomârjanschi, Laura
- Gehirn im Ruhezustand - Signal oder Rauschen? (2018) Scheel, Norman
- Gesture-based interaction with time-of-flight cameras (2011) Haker, Martin
- Invariant integration for prior-knowledge enhanced deep learning architectures (2025) Rath, Matthias

Incorporating prior knowledge into Deep Neural Networks is a promising approach to improving their sample efficiency, as it effectively limits the search space the learning algorithm needs to cover. This reduces the number of samples a network needs to be trained on to reach a specific performance. Geometrical prior knowledge is knowledge about input transformations that affect the output in a predictable way, or not at all. It can be built into Deep Neural Networks in a mathematically sound manner by enforcing in- or equivariance. Equivariance is the property of a map to behave predictably under input transformations. Convolutions are an example of a translation-equivariant map: a translation of the input results in a shifted output. Group-equivariant convolutions are a generalization that achieves equivariance to more general transformation groups, such as rotations or flips. Using group-equivariant convolutions within Neural Networks embeds the desired equivariance in addition to translation equivariance.
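The translation equivariance of convolutions described in the abstract above can be checked directly. The following is a minimal NumPy sketch, not code from the dissertation; the helper `conv2d_periodic` is illustrative and uses periodic boundary conditions so that the identity holds exactly:

```python
import numpy as np

def conv2d_periodic(x, k):
    """Cross-correlate x with kernel k under periodic boundary conditions."""
    out = np.zeros_like(x, dtype=float)
    kh, kw = k.shape
    for i in range(kh):
        for j in range(kw):
            # each kernel tap contributes a shifted copy of the input
            out += k[i, j] * np.roll(x, shift=(-i, -j), axis=(0, 1))
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k = rng.standard_normal((3, 3))

shift = (2, 5)
# equivariance: shift-then-convolve equals convolve-then-shift
lhs = conv2d_periodic(np.roll(x, shift, axis=(0, 1)), k)
rhs = np.roll(conv2d_periodic(x, k), shift, axis=(0, 1))
assert np.allclose(lhs, rhs)
```

Group-equivariant convolutions extend the same commutation property from translations to, e.g., rotations and flips.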
Invariance is a closely related concept: the output of a function does not change when its input is transformed. Invariance is often a desirable property of a feature extractor in the context of classification. While the extracted features need to encode the information required to discriminate between different classes, they should be invariant to intra-class variations, i.e., to transformations that map samples within the same class subspace. In the context of Deep Neural Networks, the required invariant representations can be obtained with mathematical guarantees by applying group-equivariant convolutions followed by global pooling over the group and spatial domains. While pooling guarantees invariance, it also discards information and is thus not ideal. In this dissertation, we investigate the transition from equi- to invariance within Deep Neural Networks that leverage geometrical prior knowledge. To this end, we replace the spatial pooling operation with Invariant Integration, a method that guarantees invariance while adding targeted model capacity rather than destroying information. We first propose an Invariant Integration Layer for rotations based on the group average calculated with monomials. The layer can be readily used within a Neural Network and supports backpropagation. The monomial parameters are selected either by iteratively optimizing the least-squares error of a linear classifier or based on neural network pruning methods. We then replace the monomials with functions more commonly encountered in Neural Networks, such as learnable weighted sums or self-attention, thereby streamlining the training procedure of networks enhanced with Invariant Integration. Finally, we extend Invariant Integration to flips and scales, highlighting the universality of our approach. We further propose a multi-stream architecture that leverages invariance to multiple transformations at once.
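The group average over monomials underlying the Invariant Integration Layer can be illustrated for the small rotation group C4 (rotations by multiples of 90 degrees). This is a minimal sketch, not the dissertation's implementation; the monomial positions and exponents below are arbitrary placeholders, whereas the dissertation selects them via least-squares optimization or pruning:

```python
import numpy as np

def group_average_monomial(x, positions, exponents):
    """Average a monomial of pixel values over the C4 rotation group."""
    vals = []
    for g in range(4):                      # the four 90-degree rotations
        xr = np.rot90(x, g)
        m = 1.0
        for (i, j), e in zip(positions, exponents):
            m *= xr[i, j] ** e              # evaluate the monomial
        vals.append(m)
    return np.mean(vals)                    # group average -> invariant

rng = np.random.default_rng(1)
x = rng.random((5, 5))
positions = [(0, 1), (2, 2), (4, 3)]        # placeholder sampling points
exponents = [1, 2, 1]                       # placeholder exponents

f = group_average_monomial(x, positions, exponents)
f_rot = group_average_monomial(np.rot90(x), positions, exponents)
assert np.isclose(f, f_rot)
```

Because the average runs over the full group, rotating the input merely permutes the summands, so the resulting feature is invariant without the information loss of global pooling.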
This approach allows us to efficiently combine multiple invariances and select the best-fitting invariant solution for the specific problem at hand. The conducted experiments show that applying Invariant Integration in combination with group-equivariant convolutions significantly boosts the sample efficiency of Deep Neural Networks, improving performance when the amount of available training data is limited.

- Machine learning methods for genome-wide association data (2012) Brænne, Ingrid
- Machine vision for inspection and novelty detection (2012) Timm, Fabian
- Methods for the prediction and guidance of human gaze (2012) Vig, Eleonora
- Neural mass models of the sleeping brain (2017) Schellenberger Costa, Michael
- Novel Machine Learning Methods for Video Understanding and Medical Analysis (2025-06-26) Hu, Yaxin

Artificial intelligence has developed rapidly over the past decade and has penetrated nearly every aspect of life. New applications in areas such as human-computer interaction, virtual reality, autonomous driving, and intelligent medical systems have emerged in large numbers. Video is high-dimensional data with one more dimension than images and therefore requires more computing resources. As more and more high-quality, large-scale video datasets are released, video understanding has become a cutting-edge research direction in the computer vision community. Action recognition is one of the most important tasks in video understanding, and many successful network architectures exist for it. In our work, we focus on proposing new designs and architectures for video understanding and investigating their applications in medicine. We introduce a novel RGBt sampling strategy to fuse temporal information into single frames without increasing the computational load and explore different color sampling strategies to further improve network performance.
We find that frames with temporal information obtained by fusing the green channels of different frames achieve the best results. We use tubes of different sizes to embed richer temporal information into tokens without increasing the computational load. We also introduce a novel bio-inspired neuron model, the MinBlock, to make the network more information-selective. Furthermore, we propose a spatiotemporal architecture that slices videos in space-time and thus enables 2D-CNNs to directly extract temporal information. All of the above methods are evaluated on at least two benchmark datasets, and all perform better than their baselines. We also apply our networks in medicine: we use our slicing 2D-CNN architecture to analyze glaucoma and visual impairments, finding that visual impairments may affect human walking patterns and thus make video analysis relevant for diagnosis. We also design a machine learning model to diagnose psychosis and show that it is possible to predict whether clinical high-risk patients will actually develop a psychosis.

- Recurrent neural networks for discriminative and generative learning (2020) Semeniuta, Stanislau
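The green-channel fusion described in the abstract of "Novel Machine Learning Methods for Video Understanding and Medical Analysis" can be sketched as follows. The function name and the assumption of three consecutive RGB-ordered frames are illustrative and not taken from the dissertation:

```python
import numpy as np

def fuse_green_channels(frames):
    """Build one 3-channel image from the green channels of three
    consecutive video frames, embedding temporal information into a
    single frame at no extra compute cost (RGBt-style sketch)."""
    assert len(frames) == 3
    greens = [f[..., 1] for f in frames]    # channel order assumed RGB
    return np.stack(greens, axis=-1)

rng = np.random.default_rng(2)
video = rng.random((3, 32, 32, 3))          # three synthetic RGB frames
fused = fuse_green_channels(list(video))
assert fused.shape == (32, 32, 3)
```

A standard 2D-CNN can then consume `fused` like an ordinary image while still seeing motion across the three time steps.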