CN113986890A - Joint hospital data migration method and system based on few-sample model learning

Joint hospital data migration method and system based on few-sample model learning

Info

Publication number
CN113986890A
CN113986890A
Authority
CN
China
Prior art keywords
hospital
data
training
learning
meta
Prior art date
Legal status
Granted
Application number
CN202111637599.6A
Other languages
Chinese (zh)
Other versions
CN113986890B (en)
Inventor
王佳昊
齐秀秀
李文雄
陈大江
朱军
向平
包晓乐
Current Assignee
Sichuan Hwadee Information Technology Co ltd
Original Assignee
Sichuan Hwadee Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Sichuan Hwadee Information Technology Co ltd filed Critical Sichuan Hwadee Information Technology Co ltd
Priority to CN202111637599.6A priority Critical patent/CN113986890B/en
Publication of CN113986890A publication Critical patent/CN113986890A/en
Application granted granted Critical
Publication of CN113986890B publication Critical patent/CN113986890B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/21 Design, administration or maintenance of databases
    • G06F16/214 Database migration support
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/28 Databases characterised by their database models, e.g. relational or object models
    • G06F16/284 Relational databases
    • G06F16/285 Clustering or classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention belongs to the technical field of data processing and discloses a joint hospital data migration method and system based on few-sample model learning, comprising the following steps: constructing a task set from a data-rich source hospital set based on a task allocation algorithm; performing multiple rounds of case sampling on each sample in each task, and preprocessing the acquired case data to obtain a feature set; training on the feature set in time order and extracting the feature values of the complete time sequence; training with the feature values to obtain a case data model; and using the case data model to perform data identification on a data-sparse target hospital data set to obtain the data categories, and migrating the resource data of those categories to the target hospital database. The invention can accurately migrate abundant medical resource data for the target cases of resource-sparse target hospitals.

Description

Joint hospital data migration method and system based on few-sample model learning
Technical Field
The invention belongs to the technical field of data processing, and particularly relates to a joint hospital data migration method and system based on few-sample model learning.
Background
Spatially unbalanced data acquisition is the most frequently encountered problem in spatio-temporal prediction. For example, some medical institutions can continually provide rich medical resource data thanks to sufficient resources, while other institutions hold only a small amount of valid medical data due to insufficient medical equipment and similar constraints. Such resource-poor institutions cannot accurately and effectively acquire resource data related to rare cases, such as doctor data, department data, and data on comparable cases.
Early methods transferred knowledge from only a single source (hospital), resulting in unstable results and a risk of negative transfer. Existing methods struggle to learn effectively when data are limited due to missing values or the effects of special events (e.g., missing or insufficient case data).
Disclosure of Invention
In order to solve these problems, the invention provides a joint hospital data migration method and system based on few-sample model learning, which can accurately migrate abundant medical resource data for the target cases of resource-sparse target hospitals.
In order to achieve the purpose, the invention adopts the technical scheme that: a joint hospital data migration method based on few-sample model learning comprises the following steps:
s10, constructing a case data model by using the source hospital set with rich data sources, comprising the following steps:
s11, constructing a task set from a source hospital set with rich data sources based on a task allocation algorithm;
s12, performing multiple times of case sampling on each sample in each task, and preprocessing the acquired case data to obtain a feature set;
s13, performing time sequence training on the feature set according to the time sequence, and extracting a feature value of a complete time sequence;
s14, training a case data model by the characteristic value to obtain a case data model;
and S20, performing data identification on the data-sparse target hospital data set by using the case data model to obtain the data categories, and migrating the resource data of those categories to the target hospital database.
Further, in step S14, training with the feature values to obtain the case data model includes the steps of:
s141, dividing a source hospital set with rich data sources into a basic source hospital set and a new hospital set, and respectively performing basic training and meta-training;
s142, in the basic training stage, learning a basic learner and obtaining a backbone network parameter as basic prior knowledge; in the meta-training stage, network training parameters are obtained by training a new hospital set and combining an increment less sample learning algorithm;
and S143, combining the training network parameters of the basic source hospital set and the new hospital set, and updating the backbone network to obtain a case data model.
Further, the base source hospital set and the new hospital set are disjoint.
Further, the backbone network comprises a convolutional neural network and a long-short term memory network, processed data are input into the backbone network for training, and the category of the case data is output through the SOFTMAX layer.
Further, in the basic training stage, basic knowledge parameters obtained by training a basic hospital set are introduced into a memory regularizer, and the output of the memory regularizer is sent to the meta-learning stage.
Further, in the meta-training stage, a task sampling algorithm is performed on the new hospital set to construct a support set and a query set; the support set is used to train the backbone network model and obtain meta-parameters; the query set is used to evaluate the query-set task loss under the meta-parameters obtained from support-set training, and the learnable parameters of the backbone network are gradient-updated according to the query-set task loss.
Further, the query set is reconstructed by using the basic source hospital set, and the reconstructed query set comprises samples in the new hospital set and samples from the basic source hospital set.
Further, a memory regularizer is used to extract the basic knowledge and mix it with the new knowledge; in the meta-training phase, meta-parameters are learned to minimize the joint prediction loss on the query set; in the meta-learning stage, the meta-parameters of the meta-learner are optimized through iterative updating and encapsulated through an attention mechanism, thereby generating a regularizer for the weights in the few-sample learning objective.
Further, the memory regularizer loads a weighted combination of the mixed basic knowledge parameters and the meta-parameters, completing the extraction of the basic knowledge and its mixing with the new knowledge;
the basic knowledge parameters are encoded based on an attention mechanism; the attention vectors are used to compute a memory matrix, and the memory matrix stores the learned attractor vector of each base source; in the meta-learning phase, for each task, the meta-parameters are updated to minimize the expected loss on the query set.
In another aspect, the present invention further provides a joint hospital data migration system based on few-sample model learning, including:
a processor, a memory;
the memory is to store processor-executable instructions;
the processor is configured to perform any of the joint hospital data migration methods based on few-sample model learning described above.
The beneficial effects of the technical scheme are as follows:
according to the invention, through learning of medical institutions (hospitals, nursing homes and the like) with rich data sources, a training frame is constructed by using a neural network, and a learning algorithm with few increment samples is combined to perform knowledge migration on the learned case data so as to complete assistance on target hospitals with few data sources, accurately migrate rich medical resource data to the target hospitals with sparse resources, and solve the problem of unbalanced development level of institutions.
The invention provides a novel method for knowledge transfer across a multi-source hospital set based on an incremental few-sample learning algorithm. A memory regularizer is constructed to prevent the catastrophic forgetting problem. The method can accomplish knowledge transfer across multi-source medical institutions while preventing, to a certain extent, catastrophic forgetting during training on the source hospitals.
The invention provides a memory regularization structure for storing the prior knowledge of the source hospital set, which is extended to target hospital samples to improve the data migration accuracy of the target hospital.
Drawings
FIG. 1 is a schematic flow chart of the joint hospital data migration method based on few-sample model learning according to the present invention;
FIG. 2 is a flow chart of feature data processing in an embodiment of the present invention;
FIG. 3 is a flow chart of an improved training phase based on an incremental few-sample algorithm in an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described with reference to the accompanying drawings.
In this embodiment, referring to fig. 1, the present invention provides a joint hospital data migration method based on few-sample model learning, including the steps of:
s10, constructing a case data model by using a source hospital set with rich data sources;
and S20, performing data identification on the data-sparse target hospital data set by using the case data model to obtain the data categories, and migrating the resource data of those categories to the target hospital database.
In step S10, constructing a case data model using the source hospital set with rich data sources includes the steps of:
s11, constructing a task set from a source hospital set with rich data sources based on a task allocation algorithm;
s12, performing multiple times of case sampling on each sample in each task, and preprocessing the acquired case data to obtain a feature set;
s13, performing time sequence training on the feature set according to the time sequence, and extracting a feature value of a complete time sequence;
and S14, training the case data model by using the characteristic values to obtain the case data model.
As shown in fig. 2, a case data model is constructed using the data-rich source hospital set: a task set is built from one or more data-rich source hospital sets based on the task allocation algorithm; within each task, each sample undergoes multiple rounds of case sampling (first data, second data, third data), and the collected data are preprocessed to obtain a feature set whose first point is the initial measurement starting point. Training then proceeds in time order, the feature values of the complete time sequence are extracted, and the case data model is trained. Finally, the case data model learned from the data-rich sources performs category identification on the case data of the data-sparse target hospital, and the corresponding resource data are migrated. The resource data include, for example, doctor data, department data, and case data.
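To make the feature-set construction concrete, the following is a minimal Python sketch of the sampling-and-preprocessing step. The patent does not specify the preprocessing operations, so the mean imputation and z-scoring below, along with all names, are illustrative assumptions.

```python
import numpy as np

def build_feature_set(case_samples):
    """Assemble a time-ordered feature set from repeated case samplings.

    `case_samples` is assumed to be a list of (timestamp, feature_vector)
    pairs gathered over several sampling rounds; the structure is a
    hypothetical stand-in for the patent's first/second/third data.
    """
    # Sort by timestamp so the first point is the initial measurement start
    ordered = sorted(case_samples, key=lambda s: s[0])
    features = np.stack([np.asarray(v, dtype=float) for _, v in ordered])  # (T, F)
    # Illustrative preprocessing: impute missing values with column means,
    # then z-score each feature channel
    col_mean = np.nanmean(features, axis=0)
    features = np.where(np.isnan(features), col_mean, features)
    features = (features - features.mean(axis=0)) / (features.std(axis=0) + 1e-8)
    return features
```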
As an optimization scheme of the above embodiment, as shown in fig. 3, in step S14, training with the feature values to obtain the case data model includes the steps of:
s141, source hospital set with rich data sources
Figure 296434DEST_PATH_IMAGE001
Divided into basic source hospital collection
Figure 548424DEST_PATH_IMAGE002
And new hospital collection
Figure 713826DEST_PATH_IMAGE003
Are independently based onBasic training and meta-training, i.e.
Figure 607832DEST_PATH_IMAGE004
Wherein the new hospital set is disjoint from the base source hospital set;
s142, a basic training stage, learning a basic learner and obtaining the parameters of the backbone network
Figure 440659DEST_PATH_IMAGE005
As basic prior knowledge; in the meta-training stage, network training parameters are obtained by training a new hospital set and combining an increment less sample learning algorithm;
and S143, combining the training network parameters of the basic source hospital set and the new hospital set, and updating the backbone network to obtain a case data model.
As an optimization scheme of the above embodiment, as shown in fig. 3, the backbone network includes a convolutional neural network CNN and a long-short term memory network LSTM (CNN + LSTM network architecture), and the processed data is input into the backbone network for training, and the data type is output through the SOFTMAX layer.
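As a rough illustration of this CNN + LSTM arrangement, here is a minimal PyTorch sketch; the layer sizes, kernel width, and single-layer depth are assumptions, since the patent only fixes the CNN-to-LSTM-to-SOFTMAX ordering.

```python
import torch
import torch.nn as nn

class CaseBackbone(nn.Module):
    """CNN + LSTM backbone with a softmax classification head (a sketch)."""

    def __init__(self, in_features: int, num_classes: int, hidden: int = 64):
        super().__init__()
        # 1-D convolution over the time axis extracts local temporal patterns
        self.cnn = nn.Sequential(
            nn.Conv1d(in_features, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # The LSTM summarises the complete time sequence into one feature value
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):                       # x: (batch, T, F)
        z = self.cnn(x.transpose(1, 2))         # (batch, hidden, T)
        _, (h_n, _) = self.lstm(z.transpose(1, 2))
        logits = self.head(h_n[-1])             # last hidden state = sequence feature
        return torch.softmax(logits, dim=-1)    # case-data category probabilities
```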
In the basic training stage, basic knowledge parameters obtained by training a basic hospital set are introduced into a memory regularizer, and the output of the memory regularizer is sent to the meta-learning stage.
In the meta-training stage, a task sampling algorithm is performed on the new hospital set to construct a support set and a query set; the support set is used to train the backbone network model and obtain meta-parameters; the query set is used to evaluate the query-set task loss under the meta-parameters obtained from support-set training, and the learnable parameters of the backbone network are gradient-updated according to the query-set task loss.
Preferably, the query set is reconstructed using the base source hospital set, and the reconstructed query set includes samples from the new hospital set and samples from the base source hospital set.
A task set is constructed on the new hospital set using a task sampling strategy to learn prior knowledge of the new tasks. Meta-learning exploits the structure shared between different tasks drawn from the task distribution $p(\mathcal{T})$ as prior knowledge for new tasks. Each task $\mathcal{T}_i$ splits its sampled data points into a support set $\mathcal{S}_i$ for training the model and a query set $\mathcal{Q}_i$ for measuring training.
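A minimal sketch of such a task sampling strategy, including the query-set reconstruction described above, might look as follows; the N-way/K-shot parameters and the dict-of-lists data layout are assumptions, not details fixed by the patent.

```python
import random

def sample_task(novel_set, base_set, n_way=5, k_shot=5, q_per_class=5):
    """Sample one few-sample task: support set from the new hospital set,
    query set reconstructed with both novel- and base-class samples.

    `novel_set` / `base_set` map class labels to lists of examples.
    """
    classes = random.sample(list(novel_set), n_way)
    support, query = [], []
    for c in classes:
        shots = random.sample(novel_set[c], k_shot + q_per_class)
        support += [(x, c) for x in shots[:k_shot]]
        query += [(x, c) for x in shots[k_shot:]]
    # Reconstruct the query set: also include base-class samples so the
    # learner is evaluated on old and new knowledge jointly
    for c in random.sample(list(base_set), n_way):
        query += [(x, c) for x in random.sample(base_set[c], q_per_class)]
    return support, query
```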
As an optimization scheme of the above embodiment, the memory regularizer is used to extract the basic knowledge and mix it with the new knowledge; in the meta-training phase, meta-parameters are learned to minimize the joint prediction loss on the query set; in the meta-learning stage, the meta-parameters of the meta-learner are optimized through iterative updating and encapsulated through an attention mechanism, thereby generating a regularizer for the fast weights in the few-sample learning objective.
In the few-sample setting, each new hospital set yields a support set $\mathcal{S}_i$ and a new query set $\mathcal{Q}_i$, giving the subtask $\mathcal{T}_i = (\mathcal{S}_i, \mathcal{Q}_i)$. Within this subtask, the learnable parameters obtained by training on $\mathcal{S}_i$ are referred to as the fast weights $W_i$. To better verify the learning capability of the model, the query set is then reconstructed so that it includes not only samples from the new hospital set but also samples from the base source hospital set $\mathcal{Q}_i^{base}$, i.e. $\mathcal{Q}_i = \mathcal{Q}_i^{new} \cup \mathcal{Q}_i^{base}$. For each task, the loss of the learnable parameters $W_i$ on the training process is computed and used to perform the gradient update.
As an optimization scheme of the above embodiment, the memory regularizer loads a weighted combination of the mixed basic knowledge parameters and the meta-parameters, completing the extraction of the basic knowledge and its mixing with the new knowledge.
The mixture of the basic knowledge and the new knowledge is extracted using the memory regularizer. In the meta-training phase, meta-parameters are learned to minimize the joint prediction loss $\mathcal{L}(\mathcal{Q}_i; W_i)$. The memory regularizer $R(W; \theta)$ is used so that the fast weights are learned by minimizing the loss $\mathcal{L}_{RMSE}(\mathcal{S}_i; W) + R(W; \theta)$, where $\mathcal{L}_{RMSE}$ is the root mean square error and $\theta$ denotes the neural network parameters of the meta-training phase. During the meta-learning phase, the meta-parameters $\theta$ of the meta-learner are optimized by iterative updating; the meta-parameters are encapsulated through an attention mechanism, thereby generating a regularizer for the fast weights in the few-sample learning objective.
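The following sketch shows one way such an attention-based memory regularizer $R(W; \theta)$ could be realized in PyTorch. The memory matrix of per-source attractor vectors and the quadratic penalty follow the description here, but the exact parameterization is an assumption.

```python
import torch
import torch.nn as nn

class MemoryRegularizer(nn.Module):
    """R(W; theta): attention over per-source attractor vectors stored in
    a memory matrix, pulling the fast weights toward mixed base knowledge.
    Dimensions and the quadratic form are illustrative assumptions.
    """

    def __init__(self, num_sources: int, dim: int):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(num_sources, dim))  # attractors
        self.query_proj = nn.Linear(dim, dim)

    def forward(self, fast_weights: torch.Tensor) -> torch.Tensor:
        # Attention of the fast weights over the stored base-source attractors
        attn = torch.softmax(self.query_proj(fast_weights) @ self.memory.T, dim=-1)
        attractor = attn @ self.memory          # mixed base knowledge per class
        # Quadratic pull of the fast weights toward the mixed attractor
        return ((fast_weights - attractor) ** 2).sum()
```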
The basic knowledge parameters are encoded based on an attention mechanism; the attention vectors are used to compute a memory matrix, and the memory matrix stores the learned attractor vector of each base source; in the meta-learning phase, for each task, the meta-parameters are updated to minimize the expected loss on the query set.
The encoding of the basic information (knowledge) starts from an attention mechanism, where the basic knowledge is obtained by training the backbone network on the base parameters derived from the base source hospital set. The attention vectors are used to compute a memory matrix, which stores the learned attractor vector of each base source institution.
During meta-learning, for each task, the meta-parameters $\theta$ are updated to minimize the expected loss on the query set $\mathcal{Q}_i$, which contains both base classes and new classes, $\mathcal{Q}_i = \mathcal{Q}_i^{new} \cup \mathcal{Q}_i^{base}$:

$$\theta^* = \arg\min_{\theta} \; \mathbb{E}_{\mathcal{T}_i \sim p(\mathcal{T})}\big[\mathcal{L}_{RMSE}(\mathcal{Q}_i; W_i^*(\theta))\big], \qquad W_i^*(\theta) = \arg\min_{W} \; \mathcal{L}_{RMSE}(\mathcal{S}_i; W) + R(W; \theta)$$

wherein $W_i^*(\theta)$ is obtained by updating the parameters on the support set $\mathcal{S}_i$; $\mathcal{L}_{RMSE}(\mathcal{Q}_i; W_i)$, computed on the query-set data of a single task, serves as the overall task loss; $\mathcal{L}_{RMSE}$ is the root mean square error; $\mathcal{Q}_i$ is the query set of the single task; and $\theta$ denotes the neural network parameters of the meta-training phase. The meta-parameters of the meta-learner are optimized by inner iterative updates, and $W_i^*$, the optimal parameters minimizing the regularized objective, are exactly what encode the knowledge of the spatio-temporal associations.
In the few-sample setting, each new hospital set forms a support set $\mathcal{S}_i$ and a new query set $\mathcal{Q}_i$; finally, the category whose evaluation score ranks highest after the normalization layer is selected as the recognition result.
As an embodiment of the foregoing solution, in step S20, performing data identification on the data-sparse target hospital data set by using the case data model to obtain a data category, and migrating the resource data of that category to the target hospital database, includes the steps of:
constructing a target task set from the target hospital data set; performing feature extraction on the constructed target task set and inputting it into the trained case data model; judging the category of the target feature data on the basis of the training data, thereby obtaining the case data category; and migrating the corresponding resource data.
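As an illustration of step S20, the sketch below classifies target-hospital cases with the trained model and copies the matching resource records; `fetch_by_category` and `insert` are hypothetical database helpers, not an interface defined by the patent.

```python
import torch

def migrate_for_target(model, target_loader, source_db, target_db):
    """Classify each sparse target-hospital case and migrate the
    resource data of the recognised category (a sketch)."""
    model.eval()
    with torch.no_grad():
        for features, case_id in target_loader:
            scores = model(features)                 # softmax scores per class
            category = int(scores.argmax(dim=-1))    # highest-ranked category
            # Migrate this category's resource data (e.g. doctor, department,
            # and case records) into the target hospital database
            for record in source_db.fetch_by_category(category):
                target_db.insert(case_id, record)
```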
The similarity of the distributions across different hospital data sets verifies that the spatial function is globally shared. Transfer learning is an effective way to address data insufficiency by exploiting knowledge from hospitals where data are abundant. While previous work has made compelling breakthroughs on the spatio-temporal prediction problem, existing methods face at least three challenges: (1) early methods transfer knowledge from only a single source hospital, yielding unstable results and a risk of negative transfer; (2) due to missing values or the effects of special events (such as holidays), existing methods struggle to learn effectively when data are limited; (3) meta-learning methods such as MAML adapt quickly to new information but also quickly forget old knowledge.
To address these practical problems, an incremental few-sample learning algorithm is proposed so that the backbone network can transfer knowledge from multiple sources. Based on insights from incremental learning, the motivation of the invention is to pursue new knowledge from new source hospitals and fuse it with prior knowledge obtained from past experience so as to prevent catastrophic forgetting. Unlike previous research, the invention uses an incremental few-sample learner to build a generalized model that not only transfers learned knowledge from source hospitals to improve the accuracy of the target hospital under limited data, but also prevents catastrophic forgetting during training on the source hospitals. Specifically, starting from a spatio-temporal network structure with learnable parameters, a setting for incremental few-sample learning is defined. In addition, a memory regularization method is proposed to store the prior knowledge of the source hospital data sets and extend it to the target hospital so as to improve the migration accuracy of the target hospital. Unlike prior art that outputs predictions directly, within each task a randomly initialized classifier network is learned and solved until convergence. Since the model cannot see the base-class data in the support set of each task, learning a classifier that classifies base classes and new classes simultaneously is challenging. To this end, a learned regularizer predicted by an attention attractor meta-network is added; this network is learned by differentiating through the few-sample learning optimization iterations.
To cooperate with the realization of the method of the invention, and based on the same inventive concept, the invention further provides a joint hospital data migration system based on few-sample model learning, comprising:
a processor, a memory;
the memory is to store processor-executable instructions;
the processor is configured to perform any of the above-described joint hospital data migration methods based on few-sample model learning.
The foregoing shows and describes the general principles, main features, and advantages of the present invention. It will be understood by those skilled in the art that the present invention is not limited to the embodiments described above; the foregoing description and drawings merely illustrate the principles of the invention, and various changes and modifications may be made without departing from its spirit and scope, all of which fall within the scope of the claimed invention. The scope of the invention is defined by the appended claims and their equivalents.

Claims (10)

1. A joint hospital data migration method based on few-sample model learning is characterized by comprising the following steps:
s10, constructing a case data model by using the source hospital set with rich data sources, comprising the following steps:
s11, constructing a task set from a source hospital set with rich data sources based on a task allocation algorithm;
s12, performing multiple times of case sampling on each sample in each task, and preprocessing the acquired case data to obtain a feature set;
s13, performing time sequence training on the feature set according to the time sequence, and extracting a feature value of a complete time sequence;
s14, training a case data model by the characteristic value to obtain a case data model;
and S20, performing data identification on the data-sparse target hospital data set by using the case data model to obtain the data categories, and migrating the resource data of those categories to the target hospital database.
2. The joint hospital data migration method based on few-sample model learning as claimed in claim 1, wherein in step S14, the case data model is obtained by training with said feature values, comprising the steps of:
s141, dividing a source hospital set with rich data sources into a basic source hospital set and a new hospital set, and respectively performing basic training and meta-training;
s142, in the basic training stage, learning a basic learner and obtaining a backbone network parameter as basic prior knowledge; in the meta-training stage, network training parameters are obtained by training a new hospital set and combining an increment less sample learning algorithm;
and S143, combining the training network parameters of the basic source hospital set and the new hospital set, and updating the backbone network to obtain a case data model.
3. The joint hospital data migration method based on few-sample model learning as claimed in claim 2, wherein the base source hospital set and the new hospital set are disjoint.
4. The method as claimed in claim 2, wherein the backbone network comprises a convolutional neural network and a long-short term memory network, the processed data is input into the backbone network for training, and the category of the case data is output through a SOFTMAX layer.
5. The joint hospital data migration method based on the few-sample model learning as claimed in claim 2, characterized in that in the basic training phase, basic knowledge parameters obtained by training a basic hospital set are introduced into the memory regularizer, and the output of the memory regularizer is sent to the meta-learning phase.
6. The joint hospital data migration method based on few-sample model learning of claim 5, characterized in that in the meta-training phase, a task sampling algorithm is performed on the new hospital set to construct a support set and a query set; the support set is used to train the backbone network model and obtain meta-parameters; the query set is used to evaluate the query-set task loss under the meta-parameters obtained from support-set training, and the learnable parameters of the backbone network are gradient-updated according to the query-set task loss.
7. The method of claim 6, wherein the query set is reconstructed using the base source hospital set, and the reconstructed query set comprises samples from the new hospital set and samples from the base source hospital set.
8. The joint hospital data migration method based on few-sample model learning according to claim 6, characterized in that a memory regularizer is used to extract the basic knowledge and mix it with the new knowledge; in the meta-training phase, meta-parameters are learned to minimize the joint prediction loss on the query set; in the meta-learning stage, the meta-parameters of the meta-learner are optimized through iterative updating and encapsulated through an attention mechanism, thereby generating a regularizer for the weights in the few-sample learning objective.
9. The joint hospital data migration method based on few-sample model learning according to claim 8, characterized in that the memory regularizer loads a weighted combination of the mixed basic knowledge parameters and the meta-parameters, completing the extraction of the basic knowledge and its mixing with the new knowledge;
the basic knowledge parameters are encoded based on an attention mechanism; the attention vectors are used to compute a memory matrix, and the memory matrix stores the learned attractor vector of each base source; in the meta-learning phase, for each task, the meta-parameters are updated to minimize the expected loss on the query set.
10. A joint hospital data migration system based on few-sample model learning, comprising:
a processor, a memory;
the memory is to store processor-executable instructions;
the processor is configured to perform the joint hospital data migration method based on few-sample model learning of any one of claims 1-9.
CN202111637599.6A 2021-12-30 2021-12-30 Joint hospital data migration method and system based on few-sample model learning Active CN113986890B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111637599.6A CN113986890B (en) 2021-12-30 2021-12-30 Joint hospital data migration method and system based on few-sample model learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111637599.6A CN113986890B (en) 2021-12-30 2021-12-30 Joint hospital data migration method and system based on few-sample model learning

Publications (2)

Publication Number Publication Date
CN113986890A true CN113986890A (en) 2022-01-28
CN113986890B CN113986890B (en) 2022-03-11

Family

ID=79734907

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111637599.6A Active CN113986890B (en) 2021-12-30 2021-12-30 Joint hospital data migration method and system based on few-sample model learning

Country Status (1)

Country Link
CN (1) CN113986890B (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140322674A1 (en) * 2013-04-25 2014-10-30 Elbit Systems Ltd. Methods and systems for managing a training arena for training an operator of a host vehicle
CN108447533A (en) * 2018-03-29 2018-08-24 江苏远燕医疗设备有限公司 A kind of Multifunctional smart medical system
US20210104313A1 (en) * 2018-06-15 2021-04-08 Canon Kabushiki Kaisha Medical image processing apparatus, medical image processing method and computer-readable medium
CN110444263A (en) * 2019-08-21 2019-11-12 深圳前海微众银行股份有限公司 Disease data processing method, device, equipment and medium based on federation's study
US20210098080A1 (en) * 2019-09-30 2021-04-01 Siemens Healthcare Gmbh Intra-hospital genetic profile similar search
CN111180061A (en) * 2019-12-09 2020-05-19 广东工业大学 Intelligent auxiliary diagnosis system fusing block chain and federal learning shared medical data
CN111724083A (en) * 2020-07-21 2020-09-29 腾讯科技(深圳)有限公司 Training method and device for financial risk recognition model, computer equipment and medium
CN113113124A (en) * 2021-04-30 2021-07-13 南通市第一人民医院 Neurosurgical multi-patient integrated nursing method and system
CN113407522A (en) * 2021-06-18 2021-09-17 上海市第十人民医院 Data processing method and device, computer equipment and computer readable storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
TOBIAS METTLER et al.: "HCMM - a maturity model for measuring and assessing the quality of cooperation between and within hospitals", 2012 25TH IEEE INTERNATIONAL SYMPOSIUM ON COMPUTER-BASED MEDICAL SYSTEMS (CBMS) *
LIAO XIANGQING: "Research on Service Model Innovation of Public Hospitals Based on Cloud Platform", China Doctoral Dissertations Full-text Database, Medicine & Health Sciences *
GU DONGXIAO: "Research on Diagnosis and Treatment Decision Support Technology Based on Case Bases", China Doctoral Dissertations Full-text Database, Medicine & Health Sciences *

Also Published As

Publication number Publication date
CN113986890B (en) 2022-03-11

Similar Documents

Publication Publication Date Title
CN110674323B (en) Unsupervised cross-modal Hash retrieval method and system based on virtual label regression
CN107766555B (en) Image retrieval method based on soft-constraint unsupervised cross-modal hashing
CN111149117A (en) Gradient-based automatic adjustment of machine learning and deep learning models
CN108108762B (en) Nuclear extreme learning machine for coronary heart disease data and random forest classification method
CN111127364B (en) Image data enhancement strategy selection method and face recognition image data enhancement method
CN114067160A (en) Small sample remote sensing image scene classification method based on embedded smooth graph neural network
WO2023134062A1 (en) Artificial intelligence-based drug-target interaction relationship determination method and apparatus
He et al. Parallel sampling from big data with uncertainty distribution
CN110163262A (en) Model training method, method for processing business, device, terminal and storage medium
CN107341210A (en) C DBSCAN K clustering algorithms under Hadoop platform
Wu et al. AutoCTS+: Joint neural architecture and hyperparameter search for correlated time series forecasting
CN114463596A (en) Small sample image identification method, device and equipment of hypergraph neural network
CN113986890B (en) Joint hospital data migration method and system based on few-sample model learning
Kokilambal Intelligent content based image retrieval model using adadelta optimized residual network
CN116978450A (en) Protein data processing method, device, electronic equipment and storage medium
CN116226404A (en) Knowledge graph construction method and knowledge graph system for intestinal-brain axis
CN115617945A (en) Cross-modal data retrieval model establishing method and cross-modal data retrieval method
CN115168326A (en) Hadoop big data platform distributed energy data cleaning method and system
CN114821140A (en) Image clustering method based on Manhattan distance, terminal device and storage medium
Lee et al. Development of a simulation result management and prediction system using machine learning techniques
Han et al. Tensor based relations ranking for multi-relational collective classification
US20240111807A1 (en) Embedding and Analyzing Multivariate Information in Graph Structures
Tian Transfer learning based recognition algorithm for common tea disease
Rampone et al. A proposal for advanced services and data processing aiming at the territorial intelligence development
Wang et al. A rough-set based measurement for the membership degree of fuzzy C-means algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Zhu Jun

Inventor after: Xiang Ping

Inventor after: Bao Xiaole

Inventor before: Wang Jiahao

Inventor before: Qi Xiuxiu

Inventor before: Li Wenxiong

Inventor before: Chen Dajiang

Inventor before: Zhu Jun

Inventor before: Xiang Ping

Inventor before: Bao Xiaole
