EP4205041A1 - System for the automated harmonization of structured data from different acquisition devices - Google Patents

System for the automated harmonization of structured data from different acquisition devices

Info

Publication number
EP4205041A1
Authority
EP
European Patent Office
Prior art keywords
data
module
harmonization
model
data set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21769987.5A
Other languages
German (de)
English (en)
Inventor
Sebastian NIEHAUS
Daniel LICHTERFELD
Michael Diebold
Janis REINELT
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aicura Medical GmbH
Original Assignee
Aicura Medical GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aicura Medical GmbH filed Critical Aicura Medical GmbH
Publication of EP4205041A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/098Distributed learning, e.g. federated learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/004Artificial life, i.e. computing arrangements simulating life
    • G06N3/006Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/01Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/02Knowledge representation; Symbolic representation
    • G06N5/022Knowledge engineering; Knowledge acquisition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H15/00ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/20ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00ICT specially adapted for the handling or processing of medical images
    • G16H30/40ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions

Definitions

  • the invention relates to a system for the automated harmonization of structured data from different acquisition devices.
  • Recording devices can be, for example, imaging devices in medical technology such as tomographs or the like, but also measuring devices, analysis devices and other devices that supply data that are typically structured in relational data sets.
  • a problem for technical data processing is that even data from similar devices serving the same purpose, e.g. data from tomographs, do not necessarily have the same structure or the same format, despite some de facto standards such as FHIR (Fast Healthcare Interoperability Resources). This means that a uniform, technically automated evaluation or analysis of this data, in particular an automated analysis, is difficult.
  • a system for the automated harmonization of structured data from different acquisition devices is disclosed, which comprises the following components: an input for input data sets in different acquisition-device-specific data structures, i.e. each in a structure as supplied by a respective data acquisition device; a harmonization module, which embodies a machine-generated harmonization model configured to convert a respective input data set from the respective acquisition-device-specific structure into at least one harmonized data set in a globally uniform, harmonized data structure of the system; a pre-processing module embodying a machine-generated pre-processing model configured to convert data from a harmonized data set in the globally uniform, harmonized data structure into data in a model-specific data structure, in particular to carry out a feature reduction, so that a data set with pre-processed data in the model-specific data structure represents fewer features than a corresponding data set in the globally uniform structure; and an automated processing device that is configured to automatically process the pre-processed data in the model-specific data structure, in particular to classify them, to generate a loss measure representing a possible processing inaccuracy (loss), and to output this loss measure either to the harmonization model or to the pre-processing model.
  • the system according to the invention serves to enable its automated processing device to process data from different types of input data sets, which can originate from different sources, equally by means of one or more classification models or one or more regression models.
  • the automated processing device thus embodies one or more classification models or regression models, each of which is preferably in the form of a neural network.
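To make the data flow concrete, here is a minimal, illustrative Python sketch of the three-stage pipeline (harmonization, pre-processing, automated processing). All function names, keys and the toy classification rule are assumptions for illustration; the patent does not prescribe a concrete API.

```python
# Illustrative sketch only; names, keys and threshold are assumptions.

def harmonize(input_record: dict, mapping: dict) -> dict:
    """Harmonization module: map device-specific keys onto the globally
    uniform, harmonized data structure using a (trained or rule-based) mapping."""
    return {global_key: input_record[local_key]
            for local_key, global_key in mapping.items()
            if local_key in input_record}

def preprocess(harmonized: dict, relevant_keys: list) -> list:
    """Pre-processing module: model-specific feature selection/reduction."""
    return [harmonized[k] for k in relevant_keys]

def classify(features: list) -> str:
    """Automated processing device: a toy stand-in for a trained classifier."""
    return "high_risk" if sum(features) > 100 else "low_risk"

# Example: a clinic-specific record is harmonized, reduced and classified.
record = {"leuko_count_gpt_l": 12.4, "patient_age": 71, "bmi": 28.0}
mapping = {"leuko_count_gpt_l": "leukocytes", "patient_age": "age", "bmi": "bmi"}
harmonized = harmonize(record, mapping)
features = preprocess(harmonized, ["leukocytes", "age", "bmi"])
print(classify(features))
```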
  • Recording devices can be devices such as tomographs, but in particular also data processing devices that combine data from different sources into a relational data set.
  • the merged data can be anamnesis data, patient master data, laboratory values from different laboratories, image or model data from different modalities such as tomographs, etc. Accordingly, the formats of the various data may differ from each other, although they may basically relate to the same parameter such as a leukocyte count. But the structure of the relational datasets can also be different, depending on how the various partial datasets from the different sources have been merged into a respective relational dataset.
  • the input data sets can be very different, even if they can basically relate to the same data.
  • Data supplied by a detection device each form an input data record, which typically includes a number of partial data records and has a structure that deviates from a globally uniform, harmonized data structure specified for the system.
  • a capture device may be a device that generates data, e.g., image data, representing a captured image.
  • a detection device can also be a data processing device with which data from different sources are combined into a data set (which can serve as input data set for the system according to the invention).
  • the data in the partial data sets can represent, for example, recorded images or volume models, as well as patient data such as age, gender, height, weight, blood group, BMI, anamnesis, etc. or laboratory data, e.g. as the result of a blood test.
  • the subject matter of the invention is therefore a system for the automated harmonization of data sets originating from different detection devices.
  • it is about relational data sets that include data from different sources, e.g. from imaging devices in the form of partial data sets.
  • Incoming data, e.g. supplied by a recording device, is first transferred by a harmonization module into a globally uniform, harmonized data structure.
  • the uniformly structured data is then converted into data with a model-specific data structure by a preprocessing module.
  • This data in the model-specific data structure is finally fed to an automated processing device, e.g. a classifier or regressor, which can be realized in the form of a parametric model (neural networks, logistic regression, etc.) or a non-parametric model (decision trees, support vector machines, gradient boosting trees, etc.).
  • the automated processing facility implements a classification or a regression model.
  • Model changes in the classification model or a regression model implemented by the automated processing device are implemented in a manner known per se using prediction errors, preferably as a supervised learning algorithm.
  • the prediction error can be determined, for example, in a manner known per se using a loss function, and in the case of a neural network the classification model or regression model implemented by the automated processing device can be changed by adjusting the weights in the nodes of its layers by backpropagation.
  • the prediction error of the automated processing facility should be as small as possible.
  • the prediction error of the automated processing device is based not only on the processing of the data supplied by the pre-processing module by the automated processing device itself, but also on the processing of the input data records by the harmonization module and the processing of the harmonized data records by the pre-processing module.
  • the prediction error is therefore used not only to adapt the classification or regression model implemented by the automated processing device, but also to optimize the harmonization model embodied by the harmonization module and the pre-processing model embodied by the pre-processing module. Both the harmonization module and the preprocessing module are thus capable of learning, i.e. can be trained using machine learning.
  • the harmonization module and the pre-processing module are thus trained taking into account the prediction error of the automated processing device.
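A minimal sketch of this feedback training, assuming for illustration that all three components are small differentiable PyTorch modules (an assumption; the patent also allows non-differentiable models such as decision trees): the classifier's loss is backpropagated upstream, and only one of the two upstream modules is updated per phase, mirroring the sequential training described above. All sizes and names are illustrative.

```python
import torch
import torch.nn as nn

harmonizer = nn.Linear(16, 12)    # stand-in for the harmonization model
preprocessor = nn.Linear(12, 4)   # stand-in for the pre-processing model
classifier = nn.Linear(4, 2)      # stand-in for the automated processing device
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y, train_harmonizer: bool):
    # Only one upstream module receives the feedback at a time.
    for p in harmonizer.parameters():
        p.requires_grad = train_harmonizer
    for p in preprocessor.parameters():
        p.requires_grad = not train_harmonizer
    params = [p for m in (harmonizer, preprocessor, classifier)
                for p in m.parameters() if p.requires_grad]
    opt = torch.optim.SGD(params, lr=1e-2)
    logits = classifier(preprocessor(harmonizer(x)))
    loss = loss_fn(logits, y)     # prediction error vs. ground truth
    opt.zero_grad()
    loss.backward()               # feedback propagated upstream
    opt.step()
    return loss.item()

x = torch.randn(8, 16)            # batch of harmonization-module inputs
y = torch.randint(0, 2, (8,))     # ground-truth classes
print(train_step(x, y, train_harmonizer=True))
```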
  • the harmonization module preferably embodies a trained neural network, in particular a multi-layer fully connected perceptron or a deep Q network.
  • the pre-processing module preferably embodies a trained neural network, in particular an autoencoder.
  • the harmonization module is connected to a plurality of pre-processing modules and each of the pre-processing modules is connected to an automated processing facility.
  • the or each automated processing means is connected to the harmonization module to provide feedback thereto.
  • the or each automated processing device is preferably connected to the upstream preprocessing module in order to provide feedback.
  • a network of several systems of the type described here is also proposed, in which the systems for exchanging parameter data sets are connected to one another in order to enable federated or collaborative machine learning.
  • the parameter data sets contain parameter values representing training-generated weights of the harmonization or pre-processing models embodied by the harmonization or pre-processing modules.
  • the harmonization model embodied by the harmonization module is a model for combining and assigning the data represented in the sub-data sets to sub-data sets of a uniform, harmonized data structure, which facilitates reliable processing of the data by the automated processing device.
  • the assignment decision, i.e. the decision as to which data from the partial data sets of the respective input data set is assigned to which partial data sets of a data set in the globally uniform, harmonized structure, is modeled as a classification.
  • the harmonization module therefore preferably embodies a classifier. This can be constructed, for example, as a 3-layer perceptron with 12 nodes per layer that are fully connected with one another.
  • the activation function of the nodes is preferably non-linear, for example a leaky ReLU function. The data basis for the assignment decision is data recorded on the context and the origin of the respective input data record.
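For illustration, such a fully connected perceptron with 12 nodes per layer and leaky-ReLU activations could be sketched in PyTorch as follows; the input and output dimensions are assumptions.

```python
import torch.nn as nn

# Fully connected perceptron, 12 nodes per layer, leaky-ReLU activations.
harmonization_classifier = nn.Sequential(
    nn.Linear(12, 12), nn.LeakyReLU(),
    nn.Linear(12, 12), nn.LeakyReLU(),
    nn.Linear(12, 12),
)
```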
  • the harmonization model is preferably not completely approximated, but is depicted as a rule-based structure that is expanded by an approximated (trained) model.
  • the harmonization module is configured to search for the most suitable partial data set of the globally uniform, harmonized data structure for a suitable assignment of partial data sets from an input data set to a partial data set of the globally uniform, harmonized data structure of the system.
  • the search is preferably implemented as a hierarchical search, the search behavior being determined by a deterministic heuristic derived from a metaheuristic or by an agent with a search behavior that was approximated via reinforcement learning.
  • the search behavior is preferably restricted deterministically by a reward function, which is composed of the feedback from the automated processing device and a defined set of rules.
  • the feedback from the automated processing device can be, for example, the loss determined using the loss function, which results as a result of the prediction error as it occurs as part of the supervised learning of the automated processing device.
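A minimal sketch of such a reward composition, assuming the defined set of rules is represented as a set of allowed assignments (an assumption; the patent leaves the representation open).

```python
def reward(loss: float, assignment: tuple, allowed_assignments: set) -> float:
    """Compose the reward from the classifier feedback (loss) and a defined
    set of rules that deterministically restricts the search behaviour."""
    if assignment not in allowed_assignments:
        return 0.0        # no reward for transitions outside the rule set
    return -loss          # smaller loss from the processing device -> larger reward
```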
  • the search space within which the harmonization module searches for a suitable assignment is specified by the hierarchical structure of the specified globally uniform, harmonized data structure of the system, which is the aim of the harmonization.
  • the specified globally uniform, harmonized data structure of the system represents the environment for the preferred reinforcement learning.
  • in reinforcement learning, the training of the harmonization module can be limited by specified action spaces and thus optimized.
  • the given action spaces for reinforcement learning can represent a defined set of rules. This can also be implemented as a dictionary for the assignment of the partial data sets of a respective input data set to partial data sets of the specified globally uniform, harmonized data structure.
  • the automated processing device that supplies the feedback for the training of the harmonization module can be a black box function that only returns an evaluation of the input parameters and a deviation for the target value.
  • both the harmonization model embodied by the harmonization module and the preprocessing model embodied by the preprocessing module are optimized by means of the feedback from the automated processing device - not simultaneously, but sequentially - i.e. only one module at a time.
  • feedback from the automated processing device, i.e. for example the classifying neural network, is used, in particular the loss. This should be as low as possible.
  • the first module that processes the incoming data is the harmonization module.
  • This can, for example, embody a metaheuristic that forms a (decision) tree structure.
  • node connections are awarded points (weightings); the strongest node connections, i.e. those with the highest weight or the most points, are ultimately retained and form a deterministic heuristic after training.
  • the node connections are adapted until a suitable deterministic heuristic has developed.
  • the metaheuristic can be an original decision tree with all possible node connections present.
  • the training results in a deterministic heuristic, which can be a decision tree that only has unique edges.
  • Such a deterministic heuristic can also be generated manually, but this would be very time-consuming.
  • a metaheuristic is used instead, which enables a heuristic search.
  • the harmonization model is a metaheuristic that forms a tree structure that develops during the training (see above: points are given for the respective node connections in order to let less relevant node connections "die off" in this way).
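An illustrative sketch of this point-based pruning, with assumed data structures: edges of the metaheuristic tree collect points, and only the strongest outgoing connection per node is retained, leaving a deterministic heuristic with unique edges.

```python
from collections import defaultdict

# (source_node, target_node) -> accumulated points (weightings)
edge_points = defaultdict(float)

def award_points(assignment_path, reward):
    """Give every node connection used in an assignment its share of points."""
    for edge in assignment_path:
        edge_points[edge] += reward

def prune_to_heuristic():
    """Keep only the strongest outgoing connection per node; weaker
    connections 'die off', leaving a decision tree with unique edges."""
    best = {}  # source node -> (points, target node)
    for (src, dst), pts in edge_points.items():
        if src not in best or pts > best[src][0]:
            best[src] = (pts, dst)
    return {src: dst for src, (_, dst) in best.items()}
```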
  • the optimization is initially stochastic: features from the system-specific structure are randomly mapped to features in the globally uniform structure, the resulting classification result is then considered, and the structure is designed and optimized, at least initially, using a kind of trial-and-error method.
  • harmonization models generated in this way, e.g. deterministic heuristics with a tree structure generated from a metaheuristic by means of training, can be collected and aggregated for various systems that are otherwise not locally connected to each other and made available to other systems, so that a locally generated harmonization model can be compared, with regard to the classification success of the automated processing, with one (or more) locally stored harmonization models.
  • Different harmonization models of different harmonization modules can be approximated decentrally over several instances by means of federated or collaborative learning by exchanging parameter data sets between the harmonization modules, which contain the parameter values resulting from the training, in particular the weightings of the nodes of a respective neural network.
  • the data communication for exchanging such parameter data records between the individual harmonization modules can take place via a global server (see FIGS. 6 and 7) or directly from module to module.
  • a prerequisite for such a federated or collaborative training of different harmonization or also preprocessing modules is that the respective modules embody models with the same topology or structure.
  • the harmonization model can also be generated via reinforcement learning, which is based on a Markov model with states, state transitions and a virtual agent that brings about state transitions.
  • the environment for this reinforcement learning is fixed.
  • the environment consists on the one hand of the input data sets specified during training with their partial data sets and on the other hand of the specified globally uniform data structure onto which the partial data sets and the data contained therein are to be mapped.
  • the trained harmonization module embodies mapping rules for mapping the incoming data in their respective system-specific data structure onto the globally uniform data structure.
  • the mapping rules can be defined by a heuristic search or a neural network trained using reinforcement learning.
  • the harmonization module can be the same for several classification models and can therefore be optimized with feedback from several classification models (maximum likelihood method).
  • the harmonization model is preferably implemented in the form of a deep Q network (DQN).
  • This has the topology of a multilayer perceptron with an input layer and an output layer and two hidden layers in between.
  • the perceptron is trained using reinforcement learning, especially Q-learning, and is therefore a deep Q-network.
  • Training by means of Q-learning implies agents that can bring about state transitions, for example the assignment of a partial data set of the input data set to a partial data set of the harmonized data set.
  • the training is based on the fact that as a result favorable (advantageous) state transitions are rewarded with a reward for the agent.
  • an action space can be specified for a respective agent, so that the agent does not receive a reward for state transitions outside of the action space.
  • the action spaces specified within the framework of Q-learning represent a rule base on which the harmonization model, and thus the harmonization module, is based.
  • Such a rule base is preferably specified, since this accelerates the training and helps to avoid incorrect assignments.
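For illustration, the principle can be sketched as tabular Q-learning with a restricted action space; the patent prefers a deep Q network rather than a table, and all structures here are assumptions.

```python
import random
from collections import defaultdict

Q = defaultdict(float)                  # (state, action) -> estimated value
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # illustrative hyperparameters

def choose_action(state, allowed_actions):
    """Epsilon-greedy choice, restricted to the specified action space."""
    if random.random() < epsilon:
        return random.choice(list(allowed_actions))
    return max(allowed_actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, allowed_actions):
    """Standard Q-learning update; the reward already reflects the feedback
    from the automated processing device and the rule base."""
    best_next = max((Q[(next_state, a)] for a in allowed_actions), default=0.0)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```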
  • the reward also depends on the feedback that is returned to the harmonization model by the automated processing facility according to the invention.
  • This feedback depends on the prediction error (in particular the loss) that results when training the automated processing device on the basis of training data sets (ground truth).
  • the prediction error of an automated processing device designed as a classifier or regressor during training does not depend directly on the training data sets used as input data sets, since these input data sets are first processed by the harmonization module and by the pre-processing module before they are fed to the automated processing device.
  • the respective prediction error, on which the feedback to the harmonization module and the pre-processing module is based, thus depends on the processing of the input data records in the harmonization module, in the pre-processing module and in the automated processing device.
  • the harmonization module or the pre-processing module is trained at the same time as the automated processing device is trained on the basis of input data records which form a ground truth.
  • the corresponding prediction error or loss can be determined by comparing the classification result or the regression result, which the automated processing device supplies, with the ground truth data.
  • the feedback from the automated processing device is not sent to both the harmonization module and the pre-processing module at the same time, but only to one of the two modules, so that either the harmonization module or the pre-processing module is trained together with the automated processing device.
  • the globally uniform, harmonized structure of the data sets that the harmonization module supplies as an output is specified and can be FHIR-compliant, for example.
  • the pre-processing module is preferably configured to perform feature reduction via principal component analysis (PCA). This can be done, for example, by the pre-processing module embodying an autoencoder that maps larger feature vectors to smaller feature vectors.
  • the input layer of the autoencoder would then have as many nodes as the input vector has dimensions and the output layer of the autoencoder would have a correspondingly smaller number of output nodes.
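A sketch of such a feature-reducing autoencoder in PyTorch; the layer sizes are illustrative assumptions, and the reduced feature vector is taken from the encoder (bottleneck) output.

```python
import torch.nn as nn

class FeatureReducingAutoencoder(nn.Module):
    """Maps larger feature vectors to smaller ones; the decoder is only
    needed for reconstruction training."""
    def __init__(self, n_features=64, n_reduced=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, n_reduced),          # bottleneck = reduced features
        )
        self.decoder = nn.Sequential(
            nn.Linear(n_reduced, 32), nn.ReLU(),
            nn.Linear(32, n_features),         # reconstruction for training
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

    def reduce(self, x):
        return self.encoder(x)                 # model-specific, feature-reduced data
```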
  • the pre-processing model, e.g. the autoencoder, is also trained using the feedback from the automated processing device, e.g. a classifier that embodies a classification model in the form of a classifying neural network, in order to arrive at pre-processed data sets in a model-specific data structure that enable a classification by the automated processing device that is as good as possible.
  • the pre-processing model embodied by a respective pre-processing module is specific to a respective classification model of the automated processing device, as can be seen in Figure 4, for example.
  • the preprocessing module is preferably configured to convert data from a partial data set of a harmonized data set into a partial data set in which the data is present with reduced features.
  • the automated processing device providing the feedback can be a black box function, which only returns an evaluation of the input parameters and a deviation for the target value.
  • the system additionally has a module, in particular a transformer module, for generating a low-level representation of a respective input data record.
  • the low-level representation of a respective input data record represents the structure of the input data record abstracted from the values contained in the input data record, in which the values are embedded.
  • the low-level representation of a respective input data set can be supplied to the harmonization module in addition to the input data set itself in order to improve the transformation of the input data set into a data set in the globally uniform structure.
  • the system also has a second module, in particular a transformer module, for generating multiple low-level representations of a harmonized data set, and a pattern matching module that is configured to determine which of the feature-reduced, abstracted representations of the candidate global target structures best fits the low-level representation of the input data set.
  • a transformer module can be implemented as a neural network in the form of a transformer model.
  • Transformer models are known to those skilled in the art and have an encoder-decoder structure with an encoder part and a decoder part.
  • the encoder part generates increasingly abstract feature vectors from an input data set, which the decoder part converts back into output data sets that represent concrete representations.
  • the layers (hidden layers) of the encoder part are each assigned self-attention layers; see http://jalammar.github.io/illustrated-transformer/
  • a transformer module that implements a transformer model for generating multiple low-level representations of a harmonized data set has the property that, due to the self-attention layers of the transformer, its encoder part generates multiple low-level representations of its input data set. According to a preferred embodiment, this property is used to perform a pattern matching between a low-level representation of the input data record of the system and different low-level representations of a data record in the globally uniform structure, which the second transformer module generates from the data record in the globally uniform structure serving as its input.
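As an illustration of obtaining low-level representations with only the encoder part, the following sketch uses PyTorch's built-in transformer encoder as a stand-in for the transformer module; the dimensions and the token serialization of a record are assumptions.

```python
import torch
import torch.nn as nn

d_model = 32
encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

tokens = torch.randn(1, 10, d_model)   # a record serialized into 10 token embeddings
low_level = encoder(tokens)            # abstract feature vectors (low-level representation)
print(low_level.shape)                 # torch.Size([1, 10, 32])
```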
  • FIG. 1 shows a schematic overview of the system according to the invention;
  • Fig. 3: a sketch that explains the training of the pre-processing module;
  • Fig. 5: a sketch illustrating the training of the harmonization module based on feedback from various automated processing devices;
  • Fig. 6: a sketch that illustrates how trained pre-processing models of different pre-processing modules can be optimized in the manner of federated learning; and
  • Fig. 7: a sketch that illustrates how trained harmonization models of different harmonization modules can be optimized in the manner of federated learning.
  • FIG. 1 shows a system 10 for the automated harmonization of structured data from various acquisition devices.
  • the system has an input 12 for an input data set 14 in a detector-specific structure, i.e. in a structure as provided by a respective detector.
  • the system further comprises a harmonization module 16, which embodies a harmonization model that is generated by machine and is configured to convert the data from the respective registration-device-specific structure into at least one harmonized data set 18 in a globally uniform data structure of the system.
  • the structure of a record is referred to herein simply as a structure or data structure.
  • a harmonized data set 18 in a globally uniform structure of the system thus has a harmonized data structure.
  • the system also has a pre-processing module 20 embodying a pre-processing model that is machine generated and configured to convert data from a harmonized data set 18 in the globally uniform, harmonized structure into pre-processed data 22 in a model-specific data structure, in particular to perform a feature reduction, so that pre-processed data 22 in a pre-processed data set in the model-specific data structure comprises fewer entries than a corresponding data set in the globally uniform, harmonized structure.
  • the system has an automated processing device 24, which is configured to automatically process, in particular to classify, pre-processed data 22 in the model-specific data structure, to generate a loss measure representing a possible processing inaccuracy (loss) or a possible prediction error, and to output it as feedback 26 either to the harmonization module 16 or to the pre-processing module 20.
  • the automated processing device 24 delivers, for example, as an output value, a membership or a membership probability of the input data set to a class—for example a disease—for which the automated processing device was trained.
  • the automated processing device 24 is configured, for example, to determine a membership probability value that represents the membership probability determined for a class. These membership probability values represent a prediction that can be compared during supervised learning with ground truth training data from corresponding input data sets of the system 10 in order to determine the prediction error and/or loss.
  • the automated processing device 24 can transmit the prediction error or the loss back to the harmonization module 16 or to the pre-processing module 20 as feedback. This allows both the harmonization module 16 and the pre-processing module 20 to be automatically optimized during training of the system 10 in such a way that the membership probability determined by the automated processing device 24 for a respective class is as large as possible and the prediction error and/or loss is as small as possible.
  • An input data record 14 in an acquisition device-specific structure is a heterogeneous relational data record that is composed of a number of heterogeneous partial data records and can be present in an XML format, for example.
  • an input data record can contain an image data record as a partial data record that represents an image or volume model represented by pixels or voxels.
  • Another partial data record of this input data record can contain metadata about the image data record, for example data representing the recording time, the recording medium (the modality), recording parameters such as the increment or the energy, etc.
  • Another partial data set can represent, for example, laboratory results of a blood test or an EKG of the same patient to which the other partial data sets also belong.
  • the input data record 14 can contain anamnesis data (admission diagnosis, previous illnesses, age, place of residence, BMI, allergies, etc.) and various laboratory values (number of leukocytes, various antibody concentrations, etc.) for each patient.
  • The harmonization module 16: the input data sets 14 from different sources—that is, for example, from different clinics—can have very different structures and also contain different types of partial data sets.
  • the function of the harmonization module 16 is to convert different input data sets 14 into at least one harmonized data set 18 in a uniform, harmonized data format and thus to generate a harmonized data set 18 for each input data set 14 .
  • the harmonization module 16 can, for example, embody a deterministic heuristic which, in the manner of an assignment tree, assigns data from the partial data sets of the input data set to corresponding partial data sets of a harmonized data set.
  • the deterministic heuristic is generated from a metaheuristic that represents a general tree structure in which many nodes of an assignment tree are connected to many other nodes via many node connections. The number of node connections is then reduced as part of the supervised learning in order to bring about a unique assignment of partial data sets of an input data set to partial data sets of a harmonized data set.
  • the deterministic heuristic can also be approximated by a neural network—that is, implemented in the form of a neural network.
  • a suitable network is, for example, a fully connected perceptron that is trained by means of reinforcement learning.
  • a deep Q-network that is trained using Q-learning is particularly suitable.
  • Q-learning is a form of reinforcement learning in which the agents on which the Q-learning algorithm is based can be given action spaces. These action spaces define a given rule base and structure a decision tree given by the metaheuristic.
  • the Q-learning algorithm is based on virtual agents that bring about state transitions (corresponding to the transitions in the decision tree) and receive a higher reward if the state transitions brought about lead to a better result, i.e. to a smaller prediction error or loss.
  • a four-layer perceptron with 12 nodes per layer is suitable for implementing a deep Q network. Such a perceptron has an input layer, an output layer and two intervening hidden layers. The 12 nodes of each layer are fully connected with the nodes of the adjacent layer(s).
  • the activation function of the nodes is preferably non-linear, for example a ReLU function and in particular a leaky ReLU function.
  • the harmonization module 16 can also embody a Bayesian network, in particular a Markov model and above all a hidden Markov model, which was generated by means of supervised learning.
  • the Bayesian network or the Markov model can also be approximated by a perceptron, i.e. implemented in the form of a perceptron and trained by supervised learning.
  • the prediction errors occurring during the training of the automated processing device are transmitted back to the harmonization module, and the deterministic heuristic or the Markov model or the perceptron representing them is trained by means of reinforcement learning in such a way that the harmonized data sets generated by the harmonization module lead to the smallest possible prediction error or loss for a respective class.
  • both the type of representation (coding) of the leukocyte counts and the data structure containing the representing data may be different. Accordingly, the input data sets originating from different clinics can differ both with regard to the form of the data and with regard to the position in which the data is stored in the data set.
  • in order for an automated processing device, e.g. a classifier or regressor formed by a neural network, to be able to process them uniformly, the different input data sets must be converted into a globally uniform, harmonized data structure that is specified for the system.
  • the aim of the classification or regression using the automated processing device 24 can be, for example, to determine the risk of infection with hospital germs and/or the expected length of stay and/or to determine a score for the expected risk of hospital germs based on the data of a respective input data record.
  • each input data set 14 is first fed to the harmonization module 16.
  • This embodies a trained harmonization model; see figure 1.
  • the harmonization model is trained with the aid of the feedback from the automated processing device 24 in such a way that the harmonization module 16 recognizes partial data sets of an input data set and converts them into a suitable partial data set of the globally uniform, harmonized data structure of the system; see figure 2.
  • the harmonization model is trained with the aid of feedback from the automated processing device in such a way that the harmonization module recognizes the similarity between the values represented by the data, and the data is thus converted into a uniform form of representation (code system).
  • the harmonization model is trained for the number of leukocytes in such a way that it divides the data representing values into two forms of representation (code systems) - i.e. into two different partial data sets of the globally uniform, harmonized data structure of the system.
  • the reason for this is that treating the differently represented values in the same way, even if they each represent leukocyte counts, leads to a poorer classification with a lower membership probability. Equivalent treatment of the values from the different measurement methods results in a poorer membership probability value (poorer reward, larger loss), because the classifier cannot map differently represented values onto individual classes as precisely.
  • the assignment to different partial data sets results in the partial data sets also being classified differently, i.e. being supplied to a different classification model in each case. Alternating classification models ensure that there is no overfitting in favor of one classification model.
  • the exchange between the clinics makes it possible to use parameters that have already been trained and thus to use a transfer effect.
  • the pre-processing module 20 takes care of a selection of the relevant parameters and translates both leukocyte value types into a uniform format.
  • the relevant parameters are model-specific.
  • the harmonized data sets 18 are fed to the pre-processing module 20; see figure 1.
  • the pre-processing module 20 is designed to convert at least some partial data sets of a respective harmonized data set 18 into pre-processed data 22 in a model-specific data structure, in particular to carry out a feature reduction which is model-specific insofar as it is adapted to the (multi-class) classification model represented by the automated processing device 24, because the pre-processing model was (only) trained with the feedback from the respectively downstream automated processing device 24.
  • the preprocessing module 20 is configured to carry out a feature reduction for those partial data sets which contain image data representing pixels or volume data representing voxels.
  • Such partial datasets can represent, for example, a large number of features caused by noise, which can be eliminated by way of feature reduction, so that a preprocessed partial dataset of the preprocessed, model-specific dataset represents, for example, a less noisy image.
  • the pre-processing module 20 can be configured to carry out a principal component analysis, for which the pre-processing module can be designed as an autoencoder.
  • Possible implementations are described, for example, in Kramer, M. A.: "Nonlinear principal component analysis using autoassociative neural networks", AIChE Journal 37 (1991), No. 2, pp. 233-243, or in Scholz, M.: "Nonlinear principal component analysis based on neural networks", diploma thesis, Humboldt University of Berlin, 2002.
  • the purpose of the model-specific processing of a respectively unified, harmonized data set 18 by the pre-processing module 20 is to prepare data from certain sub-data sets of the harmonized data structure for subsequent processing by the automated processing device.
  • if the pre-processing module embodies an autoencoder, this can be trained so that laboratory data from a respective partial data set of the harmonized data set is scaled to a uniform scale.
  • the autoencoder is additionally or alternatively trained in such a way that it only reproduces individual laboratory data on the output layer and thus, as a result, filters the laboratory data supplied to its input layer, so that only the laboratory data relevant for the subsequent processing by the automated processing facility are passed on to it.
  • the autoencoder embodied by the preprocessing module can also be trained to suppress noise represented in the image data or to enhance contrasts in the image data, in order in this way to reproduce a matrix-like representation of the respective image on the output layer, which results in more reliable processing by the downstream automated processing facility.
  • the preprocessing module 20 is also initially trained by means of feedback from the respective downstream automated processing device 24, but not at the same time as the harmonization module 16; see figure 3.
  • the pre-processing module 20, which embodies an autoencoder, is also trained on the basis of the feedback from the automated processing device to the effect that the prediction error of the automated processing device compared to the ground truth (which is given by the input data sets used during the training of the system 10 made up of harmonization module 16, pre-processing module 20 and automated processing device 24) is as small as possible.
  • a loss determined using a known loss function can be used as a measure of the prediction error and as feedback for training the harmonization module 16 or the pre-processing module 20.
  • the harmonization module 16 embodies, for example, a perceptron that is trained using Q-learning and thus, as a result, represents a deep Q network.
  • the preprocessing module 20 embodies, for example, an autoencoder that is trained using backpropagation. Both the training of the harmonization module 16 and the training of the preprocessing module 20 are based on the prediction error that the automated processing device 24 (as a classifier or regressor) delivers relative to the input data sets used in the training of the system, which represent a ground truth.
  • the input data records with different structures contain data (values) that are embedded in different structures. This means that values for the same parameters can not only differ in their data format, but can also be in different positions in the respective input data set. In order to transfer the input data records into a globally uniform structure, the values must be transferred from the respective position in the input data record to the corresponding position in the data record in the globally uniform, harmonized structure.
  • an extended system 10' is provided for the automated harmonization of structured data from different acquisition devices, as is shown in FIG. 4 by way of example.
  • the extended system 10' has additional components which serve to reduce a respective input data set to its structural features by converting the respective input data set into a low-level representation, which is compared and evaluated by means of pattern matching with low-level representations of the data sets in a globally uniform, harmonized structure.
  • a transformer model is a form of neural network with an encoder-decoder structure.
  • the first hidden layers of the Transformer model that follow the input layer form an encoder and generate increasingly abstract feature vectors from the input data, which are then usually processed back into more concrete output data sets in a decoder part of the Transformer model.
  • the layers (hidden layers) of the encoder part are each assigned self-attention layers; see http://jalammar.github.io/illustrated-transformer/
  • the feature vectors generated by the encoder part of the transformer model represent the feature-reduced low-level representation 32 of the input data set, which is used for the extended system 10' proposed here.
  • in this expanded system 10', only the encoder part of a transformer model known per se is used to generate a low-level representation 32 of the input data set.
  • An autoencoder can also be provided instead of the transformer module, in which case only its encoder part is required and used here as well.
  • the first transformer module 30 thus generates a low-level representation 32 of the input data set from an input data set 14, the first transformer module being trained in such a way that the low-level representation 32 represents the structure of the input data set 14 abstracted from the values contained in the input data set 14.
  • the data sets 18 in a globally uniform, harmonized structure are likewise converted, with the aid of a second transformer module 34, into various feature-reduced, abstracted representations 36 of the candidate global target structures.
  • a transformer module that implements a transformer model for generating multiple low-level representations of a harmonized data set has the property that, due to the self-attention layers of the transformer, its encoder part generates multiple low-level representations of its input data set. This property is used to perform a pattern matching between a low-level representation 32 of the input data set 14 of the system and different low-level representations 36 of a data set in the globally uniform structure, which the second transformer module generates from the data set 18 in the globally uniform structure serving as its input.
  • both the low-level representation 32 of a respective input data set 14 and the various feature-reduced, abstracted representations 36 of the candidate global target structures are fed to a pattern matching module 38, which is configured to determine which of the feature-reduced, abstracted representations 36 of the candidate global target structures best fits the low-level representation 32 of the input data set 14.
  • since the feature-reduced, abstracted representations 36 of the candidate global target structures are derived from the data sets 18 in a globally uniform, harmonized structure, the pair formed by the low-level representation 32 of the input data set 14 and the most similar feature-reduced, abstracted representation 36 of the candidate global target structures yields the best assignment of the values from the input data set 14 to the appropriate target positions in the globally uniform, harmonized (target) structure.
  • each representation 36 of the candidate global target structures is a low-level representation made up of abstract feature vectors representing possible positions in the globally uniform, harmonized (target) structure 18.
  • the abstract feature vectors (low-level representations) of the possible positions are compared by the pattern matching module 38 using a similarity metric with the low-level representation 32 of the input data sets.
  • the similarity metric can be implemented as a distance measure, for example, or as an approximated function by a neural network.
  • the best position determined using the similarity metric is then selected as the target position for the corresponding values from the input data set 14 .
  • the result of the pattern matching is thus the positions of values from the input data record 14 in the corresponding data record 18 in a globally uniform, harmonized structure.
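A minimal sketch of this matching step, assuming the low-level representations are fixed-length vectors and using a simple Euclidean distance as the similarity metric (one of the options mentioned above).

```python
import torch

def best_target_position(input_repr: torch.Tensor,
                         candidate_reprs: list) -> int:
    """Return the index of the candidate representation closest to the
    low-level representation of the input data set (Euclidean distance)."""
    distances = [torch.dist(input_repr, c).item() for c in candidate_reprs]
    return min(range(len(distances)), key=distances.__getitem__)
```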
  • the target positions obtained with the aid of the pattern matching module 38 for an input data record 14 are then fed to the input layer of the harmonization module 16 together with the input data record 14.
  • the harmonization module 16 then generates the desired data set 18 in a globally uniform, harmonized structure, which can then be further processed as described in connection with FIGS.
  • each automated processing device 24.1, 24.2 and 24.3 is preferably preceded by its own preprocessing module 20.1, 20.2 and 20.3 in order to preprocess the data for the respective classification or regression model embodied by the automated processing device in a model-specific manner.
  • the models embodied by the harmonization module 16, the pre-processing module 20 and the automated processing device 24 can typically be described by their structure or topology and by their parameterization.
  • the structure and topology of the respective neural network can be defined by a structure data record that contains, for example, information about how many layers the neural network has and of what type these layers are, how many nodes each layer has and how the nodes of adjacent layers are connected to one another, which activation function each node implements, etc.
  • such a structure data set defines the neural network both in the untrained and in the trained state.
  • during training, the weightings are formed in the individual nodes; these determine how strongly output values from nodes in previous layers are taken into account by a node in a subsequent layer that is connected to them.
  • the parameter values that form as a result of the training of the neural network can be stored in a parameter data record. This makes it possible, for example, to transfer parameter values from a trained harmonization module 16 or preprocessing module 20 to another previously untrained harmonization module 16 or preprocessing module 20, provided that the harmonization or preprocessing models embodied in each case have the same structure defined by a structural data set.
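An illustrative sketch of this separation between structure data record (topology) and parameter data record (trained weights), assuming PyTorch; the format of the structure record is an assumption.

```python
import torch.nn as nn

# Structure data record: a four-layer perceptron with 12 nodes per layer
# and leaky-ReLU activation, matching the topology described in the text.
structure_record = {"layer_sizes": [12, 12, 12, 12], "activation": nn.LeakyReLU}

def build_from_structure(rec):
    """Instantiate an (untrained) network from a structure data record."""
    sizes, act = rec["layer_sizes"], rec["activation"]
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:
            layers.append(act())
    return nn.Sequential(*layers)

trained = build_from_structure(structure_record)   # assume this instance was trained
fresh = build_from_structure(structure_record)     # same structure, untrained
fresh.load_state_dict(trained.state_dict())        # transfer the parameter data record
```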
  • both the harmonization models and the pre-processing models are approximated decentrally and across multiple instances using federated or collaborative learning. This is shown in Figures 6 and 7.
  • the communication between individual preprocessing modules 20 or individual harmonization modules 16 can either take place directly from module to module or via a global server, which is shown in FIGS. 6 and 7 as a cloud.
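One possible realization of this exchange is plain federated averaging of the parameter data records between modules of identical topology; this concrete scheme is an assumption, not prescribed by the patent.

```python
import torch

def federated_average(parameter_records):
    """Average parameter data records (state dicts) of modules with
    identical topology, as exchanged directly or via a global server."""
    avg = {}
    for key in parameter_records[0]:
        avg[key] = torch.stack([rec[key].float() for rec in parameter_records]).mean(dim=0)
    return avg

# Usage: each site trains locally, then all modules load the average, e.g.
#   avg = federated_average([m.state_dict() for m in modules])
#   for m in modules: m.load_state_dict(avg)
```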
  • the harmonization module has the structure of a four-layer perceptron with an input layer, two hidden layers and an output layer. Each of the layers has twelve nodes and the layers are fully connected to each other.
  • the activation function of the nodes is preferably a leaky ReLU function (ReLU: rectified linear unit).
  • a structure data set associated with the harmonization module 16 describes such a four-layer perceptron. For example, if the four-layer perceptron is trained using reinforcement learning, the harmonization module 16 may also embody a deep Q network (DQN).
  • the respective pre-processing module 20 preferably embodies an autoencoder for the principal component analysis.
  • the autoencoder has an input layer and an output layer and intervening hidden layers, for example three hidden layers.
  • the hidden layers have fewer nodes than the input and output layers.
  • such an autoencoder is designed to optimize the weightings in the nodes of the individual layers, for example by backpropagation, in such a way that, for example, a pixel matrix applied to the input layer is reproduced as similarly as possible by the output layer. That is, the deviation between the values of the corresponding nodes of the input layer and the output layer is minimized.
  • the weightings that form at the nodes of a middle (hidden) layer as part of the training represent the main basic components of the input matrix.
  • the middle layer has fewer nodes than either the input or the output layer.
  • the input layer and the output layer each have the same number of nodes.
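A sketch of this reconstruction training in PyTorch, with assumed layer sizes: the input and output layers have the same number of nodes, the hidden layer fewer, and backpropagation minimizes the deviation between input and output.

```python
import torch
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Linear(64, 8),   # input layer -> smaller hidden (bottleneck) layer
    nn.ReLU(),
    nn.Linear(8, 64),   # hidden layer -> output layer, same node count as input
)
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

batch = torch.randn(32, 64)        # e.g. flattened pixel matrices
for _ in range(100):
    recon = autoencoder(batch)
    loss = loss_fn(recon, batch)   # deviation between input and output values
    opt.zero_grad()
    loss.backward()
    opt.step()
```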
  • a respective input data record can contain, for example, anamnesis data for a patient (admission diagnosis, previous illnesses, age, place of residence, BMI, allergies, etc.) and various laboratory values (number of leukocytes, various antibody concentrations, etc.). In some cases, EKGs and medical images are also available for patients.
  • the task of the automated processing devices is, for example, to determine the risk of infection with hospital germs on the basis of the input data sets, to determine the probable length of stay and to determine an expected value (score) for the probable risk of hospital germs.
  • a separate automated processing device 24.1, 24.2 and 24.3 can be provided for each of these tasks (see FIG. 4), each of which embodies a decision model, namely a classifier or regressor, for example.
  • Each of the decision models can be implemented as a parametric model (neural networks, logistic regression, etc.) or as a non-parametric model (decision trees, support vector machines, gradient boosting trees, etc.).
  • the model changes are implemented based on prediction errors, preferably as a supervised learning algorithm.
  • the first task is to convert the input data sets into a harmonized data set format. This is done with the help of the harmonization module 16 and the harmonization model embodied by it (which can be, for example, a perceptron trained by means of reinforcement learning, see above).
  • the harmonization model is updated based on the prediction errors of the three automated processing devices 24.1, 24.2 and 24.3.
  • the harmonization model, which is implemented as a deep Q network (DQN), is preferably updated by means of reinforcement learning via a reward based on the error values of the decision models embodied by the automated processing devices 24.1, 24.2 and 24.3.
  • a tree search is initially used, which assigns the different data formats and data standards to a global standard. The reward increases if the assignment leads to a steady improvement of the harmonization model in all clinics.
  • the harmonization model is trained by dividing the values into two code systems. Treating the values from the different measurement methods as equivalent results in a poorer reward. The changing decision models ensure that there is no overfitting in favor of one model.
  • the DQN models are trained in a federated learning setup (see Figure 7), which reduces clinical bias. The exchange between the clinics makes it possible to reuse parameters that have already been trained and thus to achieve a transfer effect. A sketch of the federated parameter aggregation follows after this list.
  • the respective pre-processing module 20.1, 20.2 or 20.3 ensures a selection of the relevant parameters and translates both leukocyte value types into a uniform format.
  • the relevant parameters are specific to the respective automated processing device and the decision model embodied by it.
  • the preprocessing model embodied by the preprocessing module can be implemented as an autoencoder, which is also trained in a federated manner, see Figure 6.

Reference signs

  • 20 pre-processing module
  • 22 data set with pre-processed data
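The following is a minimal sketch, in PyTorch, of a four-layer perceptron as characterized above: an input layer, two hidden layers and an output layer, each with twelve fully connected nodes and leaky ReLU activations between the layers. The class name and the choice of framework are illustrative assumptions; the application does not prescribe any particular implementation.

```python
# Illustrative sketch only: four fully connected layers of twelve nodes
# each, with leaky-ReLU activations between them. The name and framework
# are assumptions, not part of the application.
import torch
import torch.nn as nn

class HarmonizationPerceptron(nn.Module):  # hypothetical name
    def __init__(self, width: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(width, width),  # input layer -> first hidden layer
            nn.LeakyReLU(),
            nn.Linear(width, width),  # first hidden layer -> second hidden layer
            nn.LeakyReLU(),
            nn.Linear(width, width),  # second hidden layer -> output layer
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)
```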
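Next, a sketch of the pre-processing autoencoder described above: input and output layers of equal size, three smaller hidden layers in between, and a reconstruction loss minimized by backpropagation so that the middle (smallest) layer learns a compressed, PCA-like representation. All layer sizes and names are illustrative assumptions. The reconstruction loss compares the output layer with the input layer node by node, which is exactly the deviation the training is said to minimize.

```python
# Illustrative sketch only: an autoencoder with three hidden layers, the
# middle one smallest, trained by backpropagation to reproduce its input.
import torch
import torch.nn as nn

class PreprocessingAutoencoder(nn.Module):  # hypothetical name
    def __init__(self, n_features: int = 64, n_code: int = 8):
        super().__init__()
        # encoder: input layer -> hidden layer 1 -> middle (smallest) layer
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 32), nn.LeakyReLU(),
            nn.Linear(32, n_code), nn.LeakyReLU(),
        )
        # decoder: middle layer -> hidden layer 3 -> output layer
        self.decoder = nn.Sequential(
            nn.Linear(n_code, 32), nn.LeakyReLU(),
            nn.Linear(32, n_features),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = PreprocessingAutoencoder()
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.MSELoss()  # deviation between input and output nodes

def training_step(batch: torch.Tensor) -> float:
    """One backpropagation step minimizing the reconstruction error."""
    optimizer.zero_grad()
    loss = loss_fn(model(batch), batch)
    loss.backward()
    optimizer.step()
    return loss.item()
```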
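A sketch of one of the decision models, for instance the classifier of automated processing device 24.1 estimating the infection risk. The use of scikit-learn, the feature selection and the toy data are purely illustrative assumptions.

```python
# Illustrative sketch only: a non-parametric classifier (gradient boosting
# trees) estimating the risk of infection with hospital germs from a few
# pre-processed features. Features and data are invented for illustration.
from sklearn.ensemble import GradientBoostingClassifier

# hypothetical features: age, BMI, leukocyte count (10^9/L)
X_train = [[54, 27.1, 9.4], [71, 31.8, 14.2], [45, 22.5, 6.1], [80, 25.0, 12.9]]
y_train = [0, 1, 0, 1]  # 0: no infection, 1: infection with hospital germs

clf = GradientBoostingClassifier().fit(X_train, y_train)
risk = clf.predict_proba([[63, 29.0, 11.7]])[0, 1]  # predicted infection risk
```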
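A sketch of a possible reward signal for the reinforcement-learning update of the harmonization model. The application states only that the reward is based on the error values of the decision models and that it increases when the assignment improves the harmonization model in all clinics; the concrete aggregation rule below (rewarding the worst-case improvement across clinics) is our assumption.

```python
# Illustrative sketch only: derive a reward for the DQN-style harmonization
# model from the error values of the decision models (e.g. 24.1, 24.2, 24.3)
# across clinics. The min-aggregation is an assumption: it yields a positive
# reward only if the harmonization improves in every clinic at once.
from statistics import mean

def harmonization_reward(prev_errors: dict, new_errors: dict) -> float:
    """Both arguments map a clinic id to the list of decision-model error
    values before and after a harmonization action."""
    improvements = [
        mean(prev_errors[clinic]) - mean(new_errors[clinic])
        for clinic in prev_errors
    ]
    return min(improvements)  # the worst clinic dominates the reward
```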
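Finally, a sketch of the parameter exchange in the federated setup of Figures 6 and 7. Each clinic trains its model copy locally and only parameters travel via the global server; the plain parameter average below (federated averaging) is one common aggregation rule, used here as an assumption.

```python
# Illustrative sketch only: federated averaging of per-clinic model
# parameters on the global server. Assumes float-valued parameters, as in
# the perceptron and autoencoder sketches above.
import torch

def federated_average(state_dicts):
    """Average a list of PyTorch state dicts element-wise."""
    return {
        name: torch.stack([sd[name] for sd in state_dicts]).mean(dim=0)
        for name in state_dicts[0]
    }

# Usage (hypothetical): average the clinics' locally trained parameters and
# redistribute the result as the new shared model.
# shared = federated_average([clinic_a.state_dict(), clinic_b.state_dict()])
# model.load_state_dict(shared)
```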

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Public Health (AREA)
  • Computational Linguistics (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Molecular Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Probability & Statistics with Applications (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a system for the automated harmonization of structured data from different acquisition devices, the system comprising the following components: an input for input data sets in different acquisition-device-specific data structures, namely in each case in a structure as supplied by the relevant acquisition device; a harmonization module which embodies a machine-generated harmonization model designed to transfer a relevant input data set from the relevant acquisition-device-specific structure into at least one harmonized data set in a globally unified harmonized data structure of the system; a pre-processing module which embodies a machine-generated pre-processing model designed to transfer data from a harmonized data set in the globally unified harmonized data structure into data in a model-specific data structure, in particular to carry out a feature reduction such that a data set with pre-processed data in the model-specific data structure represents fewer features than a corresponding data set in the globally unified structure; and an automated processing device which is designed to process pre-processed data in the model-specific data structure in an automated manner, in particular to classify said data, to generate a loss measure representing any processing inaccuracy (loss), and to output said loss measure either for the harmonization model or for the pre-processing model.
EP21769987.5A 2020-08-31 2021-08-31 Système pour l'harmonisation automatisée de données structurées à partir de différents dispositifs d'acquisition Pending EP4205041A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102020122749.3A DE102020122749A1 (de) 2020-08-31 2020-08-31 System zur automatisierten Harmonisierung strukturierter Daten aus verschiedenen Erfassungseinrichtungen
PCT/EP2021/074031 WO2022043585A1 (fr) 2020-08-31 2021-08-31 Système pour l'harmonisation automatisée de données structurées à partir de différents dispositifs d'acquisition

Publications (1)

Publication Number Publication Date
EP4205041A1 true EP4205041A1 (fr) 2023-07-05

Family

ID=77750287

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21769987.5A Pending EP4205041A1 (fr) 2020-08-31 2021-08-31 Système pour l'harmonisation automatisée de données structurées à partir de différents dispositifs d'acquisition

Country Status (4)

Country Link
US (1) US20240220815A1 (fr)
EP (1) EP4205041A1 (fr)
DE (1) DE102020122749A1 (fr)
WO (1) WO2022043585A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115730068B (zh) * 2022-11-16 2023-06-30 上海观察者信息技术有限公司 基于人工智能分类的检测标准检索系统和方法

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10095716B1 (en) 2017-04-02 2018-10-09 Sas Institute Inc. Methods, mediums, and systems for data harmonization and data mapping in specified domains
JP7065498B2 (ja) * 2018-02-03 2022-05-12 アレグロスマート株式会社 データオーケストレーションプラットフォーム管理

Also Published As

Publication number Publication date
US20240220815A1 (en) 2024-07-04
DE102020122749A1 (de) 2022-03-03
WO2022043585A1 (fr) 2022-03-03

Similar Documents

Publication Publication Date Title
DE102015212953A1 (de) Künstliche neuronale Netze zur Klassifizierung von medizinischen Bilddatensätzen
WO2018094438A1 (fr) Procédé et système de création d'une base de données image médicale au moyen d'un réseau de neurones à convolution
DE112018002822T5 (de) Klassifizieren neuronaler netze
DE102015217429A1 (de) Diagnosesystem und Diagnoseverfahren
DE202018006897U1 (de) Dynamisches, selbstlernendes System für medizinische Bilder
DE112019002206B4 (de) Knockout-autoencoder zum erkennen von anomalien in biomedizinischen bildern
DE112020004049T5 (de) Krankheitserkennung aus spärlich kommentierten volumetrischen medizinischen bildern unter verwendung eines faltenden langen kurzzeitgedächtnisses
DE112017005651T5 (de) Vorrichtung zur Klassifizierung von Daten
DE112018006488T5 (de) Automatisierte extraktion echokardiografischer messwerte aus medizinischen bildern
DE102018128531A1 (de) System und Verfahren zum Analysieren einer durch eine Punktwolke dargestellten dreidimensionalen Umgebung durch tiefes Lernen
WO2000063788A2 (fr) Reseau semantique d'ordre n, operant en fonction d'une situation
DE102021133631A1 (de) Gezielte objekterkennung in bildverarbeitungsanwendungen
DE112021000392T5 (de) Leistungsfähiges kommentieren der grundwahrheit
DE102016213515A1 (de) Verfahren zur Unterstützung eines Befunders bei der Auswertung eines Bilddatensatzes, Bildaufnahmeeinrichtung, Computerprogramm und elektronisch lesbarer Datenträger
EP3719811A1 (fr) Consistance des identifications de données dans le traitement des images médicales permettant la classification de cellules
DE102020210352A1 (de) Verfahren und Vorrichtung zum Transferlernen zwischen modifizierten Aufgaben
DE102018206108A1 (de) Generieren von Validierungsdaten mit generativen kontradiktorischen Netzwerken
EP4081950A1 (fr) Système et procédé d'assurance qualité de modèles basée sur des données
DE112021004926T5 (de) Bildkodiervorrichtung, bildkodierverfahren, bildkodierprogramm,bilddekodiervorrichtung, bilddekodierverfahren, bilddekodierprogramm,bildverarbeitungsvorrichtung, lernvorrichtung, lernverfahren, lernprogramm, suchvorrichtung für ähnliche bilder, suchverfahren für ähnliche bilder, und suchprogramm für ähnliche bilder
EP4016543A1 (fr) Procédé et dispositif de fourniture des informations médicales
WO2022043585A1 (fr) Système pour l'harmonisation automatisée de données structurées à partir de différents dispositifs d'acquisition
DE112021005678T5 (de) Normieren von OCT-Bilddaten
DE102021124256A1 (de) Mobile ki
DE102021207613A1 (de) Verfahren zur Qualitätssicherung eines Systems
DE102017208626A1 (de) Liquide Belegschafts-Plattform

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230331

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)