CN112836485A - Similar medical record prediction method based on neural machine translation - Google Patents

Similar medical record prediction method based on neural machine translation

Info

Publication number
CN112836485A
CN112836485A
Authority
CN
China
Prior art keywords
output
layer
decoder
vector
attention
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110096212.4A
Other languages
Chinese (zh)
Other versions
CN112836485B (en)
Inventor
李宇栋
任江涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University filed Critical Sun Yat Sen University
Priority to CN202110096212.4A priority Critical patent/CN112836485B/en
Publication of CN112836485A publication Critical patent/CN112836485A/en
Application granted granted Critical
Publication of CN112836485B publication Critical patent/CN112836485B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/10 Text processing
    • G06F40/194 Calculation of difference between files
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 Handling natural language data
    • G06F40/40 Processing or translation of natural language
    • G06F40/58 Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Data Mining & Analysis (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Epidemiology (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • Medical Informatics (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The invention relates to a similar medical record prediction method based on neural machine translation, which comprises the following steps: S1: taking the relevant text information in the electronic medical record as the input of a coding network; S2: initializing the parameters of the coding network and vectorizing the relevant text information in the electronic medical record; S3: averaging the output vectors of each step of the coding network and using the mean as the initial cell state of the decoder; S4: feeding the output vector of each step of the coding network into the attention layer, then feeding the output of each step of the decoder into the attention layer, and taking the attention output as the hidden layer vector of the decoder; S5: finally, decoding the hidden layer vector of the decoder to obtain the number of the output similar medical record. With the coding network and its initialized parameters, a feature vector representation of the electronic medical record text can be learned; the encoder output is passed to the attention layer, the decoder is initialized with the averaged output, and similar medical records can be predicted accurately.

Description

Similar medical record prediction method based on neural machine translation
Technical Field
The invention relates to the field of deep learning, in particular to a similar medical record prediction method based on neural machine translation.
Background
The electronic medical record stores the relevant information and workflow generated during informatized patient diagnosis, and can serve as a reference for doctors and other medical and health practitioners when diagnosing related diseases.
Similar electronic medical records are a great aid to doctors' diagnosis. Because electronic medical record text is not standardized and patients' symptoms differ, the text recorded for the same disease may vary widely; such non-standard text is common in electronic medical records. Different diseases may also present similar or even identical symptoms, which further increases the difficulty of predicting and retrieving similar medical records. As a result, similar medical records obtained by traditional methods such as clustering and text retrieval show low similarity between records and poor accuracy. Moreover, traditional similar-record retrieval and prediction methods lack the means to effectively analyze and process large amounts of data. In summary, existing prediction and retrieval of similar medical records suffers from low accuracy.
In the prior art, Chinese invention patent CN103678285A, "Machine translation method and machine translation system", published on 26 March 2014, discloses a machine translation method comprising: translating the source-language text into the target language with a plurality of machine translation devices to obtain a plurality of candidate translations; computing a language model score for each candidate translation with a language model; obtaining the device scores given by the machine translation devices for the candidate translations; computing a length score for each candidate translation based on the lengths of the source text and the candidate translation; computing a total score for each candidate translation from at least one of the language model score, the device score, and the length score; and selecting the candidate translation with the highest total score as the machine translation result. That scheme aggregates scores over the language models built by the machine translation system; unlike it, the present application processes and outputs results through an encoder and a decoder.
Disclosure of Invention
The invention provides a similar medical record prediction method based on neural machine translation, aiming at solving the technical defect that the existing prediction and retrieval of similar medical records has low accuracy.
In order to achieve the above purpose, the technical scheme is as follows:
a similar medical record prediction method based on neural machine translation comprises the following steps:
s1: the relevant text information in the electronic medical record is used as the input of a coding network;
s2: initializing parameters in the coding network, and vectorizing relevant text information in the electronic medical record;
s3: averaging the output vectors of each step in the coding network as the initial cell state in the decoder;
s4: inputting the output vector of each step obtained by the coding network into the attention layer, and then inputting the output of each step of the decoder into the attention layer to obtain the attention output as the hidden layer vector of the decoder;
s5: and decoding the hidden layer vector of the decoder to obtain the number of the output similar medical record.
In this scheme, a feature vector representation of the electronic medical record text can be learned with the coding network and its initialized parameters; the encoder output is passed to the attention layer, the decoder is initialized after averaging, and similar medical records can be predicted accurately.
In step S1, the relevant text information is the context information and the position information of the corresponding input word.
In step S2, the coding network comprises a first embedding layer and 12 Transformer sublayers. The coding network maps the words of the input medical record text into vectors to be coded through the first embedding layer, then extracts features from the vectors to be coded through the 12 Transformer sublayers, obtaining the features of the input medical record text, which serve as the input and initialization vectors of the decoder.
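As an illustrative sketch (the function names and toy callables here are assumptions, not part of the patent), the encoding pipeline of step S2, an embedding lookup followed by stacked Transformer sublayers, can be written as:

```python
def encode(token_ids, embedding, sublayers):
    """Map word ids to vectors, then refine them with stacked sublayers."""
    # first embedding layer: map each word of the medical record text to a vector
    x = [embedding[t] for t in token_ids]
    # pass the sequence through the Transformer sublayers (12 in the patent)
    for layer in sublayers:
        x = layer(x)
    return x  # per-step output vectors, used to drive and initialize the decoder
```

With 12 identity sublayers this reduces to the embedding lookup; in the real network each sublayer applies multi-head attention and a fully connected transform.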
The coding network employs an encoder consisting of the embedding layer and the Transformer sublayers; the output vector of each Transformer sublayer is fed to the next Transformer sublayer, the output of the encoder is the output of the last Transformer sublayer, and the output vector of the last Transformer sublayer serves as the feature representation of the input medical record, used for the input and initialization of the LSTM layer in the decoder.
The Transformer sublayer comprises a multi-head attention layer, a fully connected layer and residual connections. The input vector is fed into the multi-head attention layer; after calculation, the attention output is added to the input vector through a residual connection to give the residual output; the residual output is passed to the fully connected layer to give the fully connected output; and the fully connected output is added to the previous residual output through a second residual connection, giving the output vector of the Transformer sublayer.
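The two residual connections just described can be sketched in plain Python, with the multi-head attention layer and the fully connected layer treated as opaque callables (hypothetical stand-ins, not the patent's implementation):

```python
def transformer_sublayer(seq, multi_head_attn, feed_forward):
    """One sublayer: attention plus residual, then feed-forward plus residual."""
    # residual connection around the multi-head attention layer
    y = [[a + b for a, b in zip(u, v)] for u, v in zip(seq, multi_head_attn(seq))]
    # residual connection around the fully connected layer
    return [[a + b for a, b in zip(u, v)] for u, v in zip(y, feed_forward(y))]
```

Stacking 12 such sublayers, each feeding the next, reproduces the encoder structure described above.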
The Transformer sublayer is a bidirectional Transformer sublayer.
In step S3, the initial hidden layer states are all 0.
In step S4, the decoder decodes using a long short-term memory layer based on the attention mechanism.
Each decoding step receives the hidden layer state and the cell state of the previous step and, combined with the input of the previous step, generates a new cell state and a new hidden layer state. At each decoding step, the long short-term memory layer computes an attention vector over the per-step outputs of the encoder; the final output of each step of the LSTM decoder, obtained from the attention vector and that step's output, serves as the new hidden layer state.
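The decoding loop just described can be sketched as follows; `lstm_step`, `attend`, and the greedy `readout` are hypothetical placeholders for the LSTM cell, the attention layer, and the record-number readout, which the patent does not spell out:

```python
def decode(enc_outputs, lstm_step, attend, readout, steps, start):
    """Run the attention-based LSTM decoder for a fixed number of steps."""
    dim = len(enc_outputs[0])
    h = [0.0] * dim  # initial hidden layer state: all zeros
    # initial cell state: mean of the encoder's per-step output vectors (step S3)
    c = [sum(col) / len(enc_outputs) for col in zip(*enc_outputs)]
    x, numbers = start, []
    for _ in range(steps):
        h, c = lstm_step(x, h, c)   # new hidden and cell state
        h = attend(h, enc_outputs)  # attention output becomes the new hidden state
        x = readout(h)              # decode a medical record number
        numbers.append(x)
    return numbers
```

The loop mirrors the data flow: previous hidden and cell state in, attention over encoder outputs, record number out.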
In step S5, the output of the decoder is computed as:

α_ts = exp(score(h_t, h̄_s)) / Σ_{s'} exp(score(h_t, h̄_{s'}))

c_t = Σ_s α_ts h̄_s

a_t = f(c_t, h_t) = tanh(W_c [c_t ; h_t])

where h_t and h̄_s denote, respectively, the hidden layer state of the decoder and the output vector of each step of the encoder; α_ts is the attention weight of the decoder hidden layer state on the output vector of encoder step s; score(h_t, h̄_s) is a function that scores the decoder hidden layer state against the output vector of each step of the encoder; c_t is the context alignment vector corresponding to the decoder hidden layer state; a_t is the computed attention vector, i.e. the output of the attention layer; t denotes the decoding step of the decoder, i.e. time t; and s and s' both index the time steps of the encoder.
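As a numeric check of these formulas, the attention weights α_ts, context vector c_t, and attention vector a_t can be computed in plain Python; the dot-product score is only one possible choice (the patent leaves score unspecified), and taking W_c as the identity is purely illustrative:

```python
import math

def luong_attention(h_t, enc_outputs):
    """Compute attention weights, context vector, and attention vector."""
    # score(h_t, h_s): dot product, an assumed choice of scoring function
    scores = [sum(a * b for a, b in zip(h_t, h_s)) for h_s in enc_outputs]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    z = sum(exps)
    alphas = [e / z for e in exps]            # attention weights over encoder steps
    # context alignment vector: weighted sum of encoder outputs
    c_t = [sum(a * h[i] for a, h in zip(alphas, enc_outputs))
           for i in range(len(enc_outputs[0]))]
    # a_t = tanh(W_c [c_t ; h_t]) with W_c taken as the identity here
    a_t = [math.tanh(v) for v in c_t + h_t]
    return alphas, c_t, a_t
```

The weights always sum to 1, and the encoder step whose output best matches the decoder state receives the largest weight.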
In this scheme, by adopting the Transformer encoder structure and pre-trained parameters, the feature vector representation of the electronic medical record text can be learned well; the encoder output is passed to the attention layer, the decoder is initialized after averaging, and similar medical records can be predicted accurately, achieving an accuracy of 76.28%, where the measurement standard is whether the predicted medical record shares the primary diagnosis of the input medical record.
The method outputs the numbers of the similar medical records in the medical record library. Once trained on the training data, it directly outputs similar medical records for a new input medical record without any additional steps, making it an end-to-end similar medical record prediction method.
Compared with the prior art, the invention has the beneficial effects that:
according to the similar medical record prediction method based on the neural machine translation, the feature vector representation of the electronic medical record text information can be learned by using the encoding network and the initialized parameters, the output of the encoder is transmitted to the attention layer, the decoder is initialized after the average value is calculated, and the similar medical record can be accurately predicted.
Drawings
FIG. 1 is a flow chart of a method of the present invention;
FIG. 2 is a data flow diagram of the coding network of the present invention;
FIG. 3 is a data flow diagram of the Transformer structure of the present invention;
FIG. 4 is a data flow diagram of the decoder of the present invention.
Detailed Description
The drawings are for illustrative purposes only and are not to be construed as limiting the patent;
the invention is further illustrated below with reference to the figures and examples.
Example 1
As shown in fig. 1, fig. 2, fig. 3 and fig. 4, a method for predicting similar medical records based on neural machine translation includes the following steps:
s1: the relevant text information in the electronic medical record is used as the input of a coding network;
s2: initializing parameters in the coding network, and vectorizing relevant text information in the electronic medical record;
s3: averaging the output vectors of each step in the coding network as the initial cell state in the decoder;
s4: inputting the output vector of each step obtained by the coding network into the attention layer, and then inputting the output of each step of the decoder into the attention layer to obtain the attention output as the hidden layer vector of the decoder;
s5: and decoding the hidden layer vector of the decoder to obtain the number of the output similar medical record.
In this scheme, a feature vector representation of the electronic medical record text can be learned with the coding network and its initialized parameters; the encoder output is passed to the attention layer, the decoder is initialized after averaging, and similar medical records can be predicted accurately.
In step S1, the relevant text information is the context information and the position information of the corresponding input word.
In step S2, the coding network comprises a first embedding layer and 12 Transformer sublayers. The coding network maps the words of the input medical record text into vectors to be coded through the first embedding layer, then extracts features from the vectors to be coded through the 12 Transformer sublayers, obtaining the features of the input medical record text, which serve as the input and initialization vectors of the decoder.
The coding network employs an encoder consisting of the embedding layer and the Transformer sublayers; the output vector of each Transformer sublayer is fed to the next Transformer sublayer, the output of the encoder is the output of the last Transformer sublayer, and the output vector of the last Transformer sublayer serves as the feature representation of the input medical record, used for the input and initialization of the LSTM layer in the decoder.
The Transformer sublayer comprises a multi-head attention layer, a fully connected layer and residual connections. The input vector is fed into the multi-head attention layer; after calculation, the attention output is added to the input vector through a residual connection to give the residual output; the residual output is passed to the fully connected layer to give the fully connected output; and the fully connected output is added to the previous residual output through a second residual connection, giving the output vector of the Transformer sublayer.
The Transformer sublayer is a bidirectional Transformer sublayer.
In step S3, the initial hidden layer states are all 0.
In step S4, the decoder decodes using a long short-term memory layer based on the attention mechanism.
Each decoding step receives the hidden layer state and the cell state of the previous step and, combined with the input of the previous step, generates a new cell state and a new hidden layer state. At each decoding step, the long short-term memory layer computes an attention vector over the per-step outputs of the encoder; the final output of each step of the LSTM decoder, obtained from the attention vector and that step's output, serves as the new hidden layer state.
In step S5, the output of the decoder is computed as:

α_ts = exp(score(h_t, h̄_s)) / Σ_{s'} exp(score(h_t, h̄_{s'}))

c_t = Σ_s α_ts h̄_s

a_t = f(c_t, h_t) = tanh(W_c [c_t ; h_t])

where h_t and h̄_s denote, respectively, the hidden layer state of the decoder and the output vector of each step of the encoder; α_ts is the attention weight of the decoder hidden layer state on the output vector of encoder step s; score(h_t, h̄_s) is a function that scores the decoder hidden layer state against the output vector of each step of the encoder; c_t is the context alignment vector corresponding to the decoder hidden layer state; a_t is the computed attention vector, i.e. the output of the attention layer; t denotes the decoding step of the decoder, i.e. time t; and s and s' both index the time steps of the encoder.
Example 2
By adopting the Transformer encoder structure and pre-trained parameters, the feature vector representation of the electronic medical record text can be learned well; the encoder output is passed to the attention layer, the decoder is initialized after averaging, and similar medical records can be predicted accurately, achieving an accuracy of 76.28%, where the measurement standard is whether the predicted medical record shares the primary diagnosis of the input medical record.
The method outputs the numbers of the similar medical records in the medical record library. Once trained on the training data, it directly outputs similar medical records for a new input medical record without any additional steps, making it an end-to-end similar medical record prediction method.
It should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the invention and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to enumerate all embodiments exhaustively. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (10)

1. A similar medical record prediction method based on neural machine translation is characterized by comprising the following steps:
s1: the relevant text information in the electronic medical record is used as the input of a coding network;
s2: initializing parameters in the coding network, and vectorizing relevant text information in the electronic medical record;
s3: averaging the output vectors of each step in the coding network as the initial cell state in the decoder;
s4: inputting the output vector of each step obtained by the coding network into the attention layer, and then inputting the output of each step of the decoder into the attention layer to obtain the attention output as the hidden layer vector of the decoder;
s5: and decoding the hidden layer vector of the decoder to obtain the number of the output similar medical record.
2. The method of claim 1, wherein in step S1, the relevant text information is the context information and the position information of the corresponding input word.
3. The method as claimed in claim 2, wherein in step S2, the coding network comprises a first embedding layer and 12 Transformer sublayers, and the coding network maps the words of the input medical record text into vectors to be coded through the first embedding layer, then extracts features from the vectors to be coded through the 12 Transformer sublayers, obtaining the features of the input medical record text as the input and initialization vectors of the decoder.
4. The method of claim 3, wherein the coding network employs an encoder consisting of the embedding layer and the Transformer sublayers; the output vector of each Transformer sublayer is fed to the next Transformer sublayer, the output of the encoder is the output of the last Transformer sublayer, and the output vector of the last Transformer sublayer serves as the feature representation of the input medical record, used for the input and initialization of the LSTM layer in the decoder.
5. The method of claim 4, wherein the Transformer sublayer comprises a multi-head attention layer, a fully connected layer and residual connections; the input vector is fed into the multi-head attention layer; after calculation, the attention output is added to the input vector through a residual connection to give the residual output; the residual output is passed to the fully connected layer to give the fully connected output; and the fully connected output is added to the previous residual output through a second residual connection, giving the output vector of the Transformer sublayer.
6. The method of claim 5, wherein the Transformer sublayer is a bidirectional Transformer sublayer.
7. The method of claim 6, wherein in step S3, the initial hidden layer states are all 0.
8. The method of claim 7, wherein in step S4, the decoder decodes using a long short-term memory layer based on the attention mechanism.
9. The method of claim 8, wherein each decoding step receives the hidden layer state and the cell state of the previous step and, combined with the input of the previous step, generates a new cell state and a new hidden layer state; at each decoding step, the long short-term memory layer computes an attention vector over the per-step outputs of the encoder, and the final output of each step of the long short-term memory decoder, obtained from the attention vector and that step's output, serves as the new hidden layer state.
10. The method of claim 9, wherein in step S5, the output of the decoder is computed as:

α_ts = exp(score(h_t, h̄_s)) / Σ_{s'} exp(score(h_t, h̄_{s'}))

c_t = Σ_s α_ts h̄_s

a_t = f(c_t, h_t) = tanh(W_c [c_t ; h_t])

where h_t and h̄_s denote, respectively, the hidden layer state of the decoder and the output vector of each step of the encoder; α_ts is the attention weight of the decoder hidden layer state on the output vector of encoder step s; score(h_t, h̄_s) is a function that scores the decoder hidden layer state against the output vector of each step of the encoder; c_t is the context alignment vector corresponding to the decoder hidden layer state; a_t is the computed attention vector, i.e. the output of the attention layer; t denotes the decoding step of the decoder, i.e. time t; and s and s' both index the time steps of the encoder.
CN202110096212.4A 2021-01-25 2021-01-25 Similar medical record prediction method based on neural machine translation Active CN112836485B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110096212.4A CN112836485B (en) 2021-01-25 2021-01-25 Similar medical record prediction method based on neural machine translation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110096212.4A CN112836485B (en) 2021-01-25 2021-01-25 Similar medical record prediction method based on neural machine translation

Publications (2)

Publication Number Publication Date
CN112836485A true CN112836485A (en) 2021-05-25
CN112836485B CN112836485B (en) 2023-09-19

Family

ID=75930805

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110096212.4A Active CN112836485B (en) 2021-01-25 2021-01-25 Similar medical record prediction method based on neural machine translation

Country Status (1)

Country Link
CN (1) CN112836485B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688822A (en) * 2021-09-07 2021-11-23 河南工业大学 Time sequence attention mechanism scene image identification method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201715516D0 (en) * 2016-09-26 2017-11-08 Google Inc Neural machine translation systems
US20180300317A1 (en) * 2017-04-14 2018-10-18 Salesforce.Com, Inc. Neural machine translation with latent tree attention
KR20200063281A (en) * 2018-11-16 2020-06-05 한국전자통신연구원 Apparatus for generating Neural Machine Translation model and method thereof
CN110795556A (en) * 2019-11-01 2020-02-14 中山大学 Abstract generation method based on fine-grained plug-in decoding
CN111178093A (en) * 2019-12-20 2020-05-19 沈阳雅译网络技术有限公司 Neural machine translation system training acceleration method based on stacking algorithm
CN111428519A (en) * 2020-03-06 2020-07-17 中国科学院计算技术研究所 Entropy-based neural machine translation dynamic decoding method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
曲昭伟; 王源; 王晓茹: "Hierarchical attention network sentiment analysis algorithm based on transfer learning" (基于迁移学习的分层注意力网络情感分析算法), 计算机应用 (Journal of Computer Applications), no. 11, pages 3053-3056 *

Also Published As

Publication number Publication date
CN112836485B (en) 2023-09-19

Similar Documents

Publication Publication Date Title
CN109471895B (en) Electronic medical record phenotype extraction and phenotype name normalization method and system
JP7195365B2 (en) A Method for Training Convolutional Neural Networks for Image Recognition Using Image Conditional Mask Language Modeling
CN109947912A (en) A kind of model method based on paragraph internal reasoning and combined problem answer matches
CN110781683A (en) Entity relation joint extraction method
CN108960063B (en) Multi-event natural language description method in video facing event relation coding
CN111597830A (en) Multi-modal machine learning-based translation method, device, equipment and storage medium
CN111737975A (en) Text connotation quality evaluation method, device, equipment and storage medium
CN111444367B (en) Image title generation method based on global and local attention mechanism
CN113902964A (en) Multi-mode attention video question-answering method and system based on keyword perception
CN111460824A (en) Unmarked named entity identification method based on anti-migration learning
CN113423004B (en) Video subtitle generating method and system based on decoupling decoding
CN113486667A (en) Medical entity relationship joint extraction method based on entity type information
CN112687328B (en) Method, apparatus and medium for determining phenotypic information of clinical descriptive information
CN112687388A (en) Interpretable intelligent medical auxiliary diagnosis system based on text retrieval
CN116610778A (en) Bidirectional image-text matching method based on cross-modal global and local attention mechanism
CN114417839A (en) Entity relation joint extraction method based on global pointer network
CN110084297A (en) A kind of image semanteme alignment structures towards small sample
CN111145914B (en) Method and device for determining text entity of lung cancer clinical disease seed bank
CN114564959A (en) Method and system for identifying fine-grained named entities of Chinese clinical phenotype
Sun et al. Study on medical image report generation based on improved encoding-decoding method
CN112836485B (en) Similar medical record prediction method based on neural machine translation
CN114359656A (en) Melanoma image identification method based on self-supervision contrast learning and storage device
CN113204978A (en) Machine translation enhancement training method and system
CN112668481A (en) Semantic extraction method for remote sensing image
CN115438220A (en) Cross-language and cross-modal retrieval method and device for noise robust learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant