CN110265098A - Case management method and apparatus, computer device, and readable storage medium - Google Patents

Case management method and apparatus, computer device, and readable storage medium Download PDF

Info

Publication number
CN110265098A
CN110265098A (application CN201910374245.3A)
Authority
CN
China
Prior art keywords
entity
retrieval
case
standard
text
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910374245.3A
Other languages
Chinese (zh)
Inventor
顾大中
曹灵宇
丁佳佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201910374245.3A priority Critical patent/CN110265098A/en
Publication of CN110265098A publication Critical patent/CN110265098A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 Ontology
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H10/00 ICT specially adapted for the handling or processing of patient-related medical or healthcare data
    • G16H10/60 ICT specially adapted for the handling or processing of patient-related medical or healthcare data for patient-specific data, e.g. for electronic patient records

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention, in the field of artificial intelligence, discloses a case management method and apparatus, a computer device, and a readable storage medium. The method comprises: a case archiving step, in which the archival named entities in the text of the unstructured part of a medical case are replaced with standard names, turning the case into a standardized case; and a case retrieval step, in which the query named entities in query text are identified, standard named entities are determined from the query named entities, and standardized cases are retrieved by the standard named entities. The invention eliminates the risk of key information being missing from a case, unifies the terminology of records written by doctors with different writing habits, avoids misreadings of case information caused by the differing description habits of different doctors, and ensures smooth communication between doctors at different hospitals; it guarantees the match between standard named entities and standard names, thereby ensuring the accuracy of standardized retrieval and improving retrieval efficiency; and it improves the management of historical cases.

Description

Case management method and apparatus, computer device, and readable storage medium
Technical field
The present invention relates to the field of artificial intelligence, and in particular to a case management method and apparatus, a computer device, and a readable storage medium.
Background
Strictly speaking, current record management systems are semi-structured products whose handling of pure natural language is limited. If a doctor enters record content in a structured way, that content can be structured; if the doctor enters natural language text, that part cannot be structured. Moreover, traditional products require records to be entered through the system from the very start, and are helpless with already existing electronic records;
In addition, in the natural language part of a semi-structured electronic record, different doctors have different writing styles and may use different names or abbreviations for the same disease or symptom, which hinders communication between doctors and hospitals.
Summary of the invention
The object of the present invention is to provide a case management method and apparatus, a computer device, and a readable storage medium that solve the problems of the prior art.
To achieve the above object, the present invention provides a case management method based on artificial intelligence, comprising:
a case archiving step: replacing the archival named entities in the text of the unstructured part of a case with standard names, producing a standardized case;
a case retrieval step: identifying the query named entities in query text, determining standard named entities from the query named entities, and retrieving standardized cases by the standard named entities.
In the above scheme, the case archiving method comprises:
U1: receiving the text of the unstructured part of the case;
U2: identifying the archival named entities in the text;
U3: identifying the standard name corresponding to each archival named entity;
U4: replacing the archival named entities in the text with their standard names to produce a standardized case.
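Taking the entity recognizer of step U2 and the standard-name lookup of step U3 as black boxes, the archiving pipeline U1-U4 can be sketched as follows; every function and example name here is illustrative, not from the patent.

```python
# Sketch of archiving steps U1-U4. The entity recognizer (U2) and the
# knowledge-graph standard-name lookup (U3) are assumed black-box
# functions passed in by the caller.
def archive_case(text, recognize_entities, lookup_standard_name):
    """U4: replace each archival named entity with its standard name."""
    for entity in recognize_entities(text):        # U2: find entities
        standard = lookup_standard_name(entity)    # U3: map to standard name
        text = text.replace(entity, standard)
    return text

# Toy usage: one recognized drug name mapped to a hypothetical standard name.
standardized = archive_case(
    "patient took baijiahei today",
    lambda t: ["baijiahei"],
    lambda e: "compound paracetamol",
)
```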
In the above scheme, step U2 comprises:
U21: obtaining, one by one, the real-valued text vector corresponding to each character or punctuation mark in the text;
U22: arranging the real-valued text vectors into a text vector sequence;
U23: obtaining a prediction result for the text vector sequence using a trained text network model;
U24: obtaining an annotation sequence from the prediction result;
U25: obtaining the archival named entities from the annotation sequence.
In the above scheme, step U3 comprises:
U31: inputting the archival named entity into a knowledge graph;
U32: obtaining from the knowledge graph the several approximate archival named entities most similar to the archival named entity;
U33: taking the approximate archival named entity that shares the longest common subsequence with the archival named entity as the standard name.
In the above scheme, the case retrieval method comprises:
S1: receiving query text;
S2: identifying the query named entities in the query text;
S3: inputting the query named entity into the knowledge graph, and obtaining from the knowledge graph the several approximate archival named entities most similar to the query named entity;
S4: taking the approximate archival named entity that shares the longest common subsequence with the query named entity as the standard named entity;
S5: retrieving, by the standard named entity, the standardized cases that contain the standard named entity.
In the above scheme, step S2 comprises:
S21: obtaining, one by one, the real-valued query vector corresponding to each character or punctuation mark in the query text;
S22: arranging the real-valued query vectors into a query vector sequence;
S23: obtaining a prediction result for the query vector sequence using a trained query network model;
S24: obtaining an annotation sequence from the prediction result;
S25: obtaining the query named entities from the annotation sequence.
In the above scheme, step S3 comprises:
S31: extracting at least one character from the query named entity as a query subsequence;
S32: using the query subsequence to retrieve approximate archival named entities from the knowledge graph;
S33: if the number of approximate archival named entities retrieved exceeds a limit, extracting a further character from the query named entity, appending it to the query subsequence, and repeating S32;
S34: if the number of approximate archival named entities retrieved does not exceed the limit, or the characters of the query subsequence are already identical to the characters of the query named entity, stopping the retrieval and taking the approximate archival named entities retrieved.
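The incremental narrowing loop of S31-S34 can be sketched as below; a plain substring filter over a list stands in for the knowledge-graph retrieval, which is an assumption in place of the patent's ElasticSearch backend.

```python
# Sketch of the narrowing loop S31-S34. The knowledge graph is modeled
# as a list of entity strings and retrieval as a substring filter
# (an assumption made for a self-contained example).
def narrow_candidates(query_entity, kg_entities, limit=10):
    sub = query_entity[:1]                            # S31: start with one character
    while True:
        hits = [e for e in kg_entities if sub in e]   # S32: retrieve by subsequence
        # S34: stop when few enough hits, or the subsequence is the whole entity
        if len(hits) <= limit or sub == query_entity:
            return hits
        sub = query_entity[:len(sub) + 1]             # S33: add one more character

kg = ["appendectomy", "appendicitis", "apple allergy", "anemia"]
hits = narrow_candidates("appendectomy", kg, limit=2)
```

Growing the subsequence one character at a time keeps the candidate set small without ever missing the exact entity, since the loop also terminates once the subsequence equals the query named entity.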
To achieve the above object, the present invention also provides a kind of case management devices, comprising:
Profiling module names entity to replace with for filing in the text information by the unstructured part by case Title, and build a case and standard case is made;
Retrieval module retrieves the retrieval in information for identification and names entity, and names entity set according to the retrieval Standard names entity, names entity search criteria case by the standard.
To achieve the above object, it the present invention also provides a kind of computer equipment, including memory, processor and is stored in On memory and the computer program that can run on a processor, the processor are realized above-mentioned when executing the computer program The step of method.
To achieve the above object, the present invention also provides computer readable storage mediums, are stored thereon with computer program, institute State the step of above method is realized when computer program is executed by processor.
In the case management method and apparatus, computer system, and readable storage medium provided by the invention, the archiving module replaces the archival named entities in the text of the unstructured part of a case with standard names, producing a standardized case. This not only eliminates the risk of key information being lost because some case information is difficult to write into the structured part of a record, but also standardizes the case itself: records written by doctors with different habits are unified in terminology, which improves the accuracy of case information, avoids misreadings of case information caused by the differing description habits of different doctors, greatly reduces surgical risk to patients, and ensures smooth communication between doctors at different hospitals;
Because the standardized cases are produced by the archiving module, the retrieval module can identify the query named entities in query text, determine standard named entities from them, and then retrieve standardized cases by the standard named entities; this guarantees the match between standard named entities and standard names, thereby ensuring the accuracy of standardized retrieval and improving retrieval efficiency;
The archiving module can also be used to convert historical cases into standardized cases for storage, improving the management of historical cases.
Brief description of the drawings
Fig. 1 is a flowchart of the case archiving method in Embodiment 1 of the case management method of the present invention;
Fig. 2 is a flowchart of the case retrieval method in Embodiment 1 of the case management method of the present invention;
Fig. 3 is a schematic diagram of the program modules of Embodiment 2 of the case management apparatus of the present invention;
Fig. 4 is a schematic diagram of the hardware structure of the computer device in Embodiment 3 of the computer system of the present invention.
Reference numerals:
1. case management apparatus; 2. computer device; 11. archiving module; 12. retrieval module;
111. text receiving unit; 112. entity recognition unit; 113. name recognition unit;
114. name replacement unit; 115. structured-input receiving unit; 121. information receiving unit;
122. retrieval unit; 123. query entity unit; 124. entity determination unit;
125. case retrieval unit; 21. memory; 22. processor.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the invention, not to limit it. All other embodiments obtained from the embodiments of the present invention by a person of ordinary skill in the art without creative work fall within the scope of protection of the present invention.
The case management method and apparatus, computer system, and readable storage medium provided by the invention are suitable for the field of natural language processing, and provide a case management method based on an archiving module and retrieval. In the invention, the archiving module replaces the archival named entities in the text of the unstructured part of a case with standard names, producing a standardized case. This not only eliminates the risk of key information being lost because some case information is difficult to write into the structured part of a record, but also standardizes the case itself: records written by doctors with different habits are unified in terminology, which improves the accuracy of case information, avoids misreadings of case information caused by the differing description habits of different doctors, greatly reduces surgical risk to patients, and ensures smooth communication between doctors at different hospitals;
Because the standardized cases are produced by the archiving module, the retrieval module can identify the query named entities in query text, determine standard named entities from them, and then retrieve standardized cases by the standard named entities; this guarantees the match between standard named entities and standard names, thereby ensuring the accuracy of standardized retrieval and improving retrieval efficiency;
The archiving module can also be used to convert historical cases into standardized cases for storage, improving the management of historical cases.
Embodiment 1
Referring to Figs. 1 and 2, the case management method of this embodiment is based on artificial intelligence and uses a case management apparatus, comprising:
a case archiving step: replacing the archival named entities in the text of the unstructured part of a case with standard names, producing a standardized case;
a case retrieval step: identifying the query named entities in query text, determining standard named entities from the query named entities, and retrieving standardized cases by the standard named entities.
Specifically, the case archiving method comprises:
U1: receiving the text of the unstructured part of the case;
U2: identifying the archival named entities in the text;
U3: identifying the standard name corresponding to each archival named entity;
U4: replacing the archival named entities in the text with their standard names to produce a standardized case.
Preferably, the case archiving method further comprises:
U0: receiving selection information for the structured part of the case, output by the case manager.
Further, the record manager's selection from a drop-down list in the structured part of the case is received.
Optionally, the case manager's selection from a check box in the structured part of the case is received.
Specifically, step U2 comprises:
U21: obtaining, one by one, the real-valued text vector corresponding to each character or punctuation mark in the text;
In this step, each single character or punctuation mark in the text is extracted as character feature information, and a word2vec model computes a real-valued text vector from each character feature, so that each real-valued text vector represents one character or punctuation mark;
U22: arranging the real-valued text vectors into a text vector sequence;
U23: obtaining a prediction result for the text vector sequence using a trained text network model;
In this step, the trained text network model consists of an LSTM neural network, a CRF neural network, and a feed-forward neural network.
U24: obtaining an annotation sequence from the prediction result;
In this step, the drug prediction result and the surgery prediction result are superimposed to obtain the annotation sequence;
U25: obtaining the archival named entities from the annotation sequence;
In this step, the characters and punctuation of the text are segmented according to the annotation sequence using the tag set, and the surgical archival named entities and drug archival named entities are obtained.
For example, suppose the text is 今天患者张某，上午服用了白加黑和退烧药，下午做了阑尾切除手术 ("Today patient Zhang took Baijiahei and an antipyretic in the morning, and had an appendectomy in the afternoon");
Each single character or punctuation mark in the text is extracted as character feature information, and the word2vec model computes a real-valued text vector from each character feature, each real-valued text vector representing one character or punctuation mark;
word2vec is an efficient algorithm model that represents words as real-valued text vectors. Drawing on ideas from deep learning, it reduces the processing of text content, through training, to vector operations in a K-dimensional vector space, where similarity in the vector space can be used to represent semantic similarity of the text;
The word2vec model operates on the character features of the text one by one, yielding a text vector sequence composed of at least one real-valued text vector; the text vector sequence is then fed into the trained text network model to obtain the prediction result.
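The per-character vectorization of U21-U22 might look as follows; a seeded random embedding table stands in for a trained word2vec model, an assumption made purely so the sketch is self-contained.

```python
import numpy as np

# Sketch of U21-U22: map each character (or punctuation mark) to a
# real-valued vector and stack the vectors into a sequence. The random
# table below is a stand-in for trained word2vec embeddings.
def vectorize(text, dim=8, seed=0):
    rng = np.random.RandomState(seed)
    table = {ch: rng.randn(dim) for ch in sorted(set(text))}  # one vector per character
    return np.stack([table[ch] for ch in text])               # U22: the vector sequence

seq = vectorize("abcab")   # 5 characters -> 5 vectors of dimension 8
```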
The prediction result corresponding to the text vector sequence is then "OOOOOOOOOOOOMMMOMMMOOOOOSSSSSS", i.e. 今(O)天(O)患(O)者(O)张(O)某(O)，(O)上(O)午(O)服(O)用(O)了(O)白(M)加(M)黑(M)和(O)退(M)烧(M)药(M)，(O)下(O)午(O)做(O)了(O)阑(S)尾(S)切(S)除(S)手(S)术(S);
The runs MMM, MMM, and SSSSSS in the prediction result are extracted as the annotation sequence; then, according to the positions and lengths of these runs in the text vector sequence, the corresponding drug archival named entities 白加黑 (Baijiahei) and 退烧药 (antipyretic) and the surgical archival named entity 阑尾切除手术 (appendectomy) are extracted from the text.
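Extracting entities from such a per-character tag string (steps U24-U25) can be sketched as below; an English stand-in text is used, since the mechanics are the same as for the patent's Chinese example.

```python
import re

# Sketch of U24-U25: pull entity spans out of a per-character tag
# string, where O = outside, M = drug, S = surgery.
def extract_entities(text, tags):
    assert len(text) == len(tags)
    out = {"M": [], "S": []}
    for m in re.finditer(r"M+|S+", tags):         # each maximal run of M's or S's
        out[m.group()[0]].append(text[m.start():m.end()])
    return out

ents = extract_entities("x aaa bb", "OOMMMOSS")   # drug "aaa", surgery "bb"
```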
Further, step U23 comprises:
U23-01: feeding the text vector sequence into the feed-forward neural network for dimensionality reduction, forming feed-forward text vectors;
U23-02: feeding the feed-forward text vectors into the LSTM neural network to obtain character text vectors;
U23-03: feeding the character text vectors into the CRF neural network to obtain the prediction result.
Preferably, the tag set is [O, S, M]: non-archival named entities are tagged O, surgical archival named entities are tagged S, and drug archival named entities are tagged M.
Preferably, in U23 the trained text network model includes a trained drug text network model and a trained surgery text network model, used respectively to obtain the drug prediction result and the surgery prediction result for the text vector sequence.
Preferably, in step U23, the trained text network model is obtained by training with the following steps:
U23-11: obtaining, one by one, the real-valued text vector corresponding to each character or punctuation mark in a training text, where the annotation sequence of the training text is a known standard annotation sequence and the training text contains archival named entities;
U23-12: arranging the real-valued text vectors in order into a training text vector sequence;
U23-13: obtaining a prediction result for the training text vector sequence using the initial text network model, which consists of an LSTM neural network, a CRF neural network, and a feed-forward neural network;
U23-14: obtaining a training annotation sequence from the prediction result of the training text vector sequence;
U23-15: comparing the training annotation sequence with the standard annotation sequence using a custom loss function, and converting the difference between the two into a real value; for example, if the training annotation sequence is "OOSSS" and the standard annotation sequence is "OOMSS", the real value between them is 1;
U23-16: adjusting, by gradient descent on the real value, the weights and biases of the hidden layers of the feed-forward neural network, LSTM neural network, and CRF neural network of the initial text network model, stopping once the real value no longer changes;
U23-17: repeating steps U23-11 to U23-16 with multiple different training texts, finally obtaining the trained text network model.
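The shape of the training loop U23-11 to U23-17 can be sketched as below. This is a toy model, not the patent's networks: the LSTM/CRF/feed-forward stack is replaced by a single linear weight and the custom loss by a squared error, purely to show the gradient-descent update and the U23-16 stopping rule ("until the real value no longer changes").

```python
# Toy sketch of training steps U23-11..U23-17 under the assumptions
# stated above.
def train(samples, lr=0.1, tol=1e-9):
    w = 0.0                                     # stand-in for all weights/biases
    prev = float("inf")
    while True:
        grad, loss = 0.0, 0.0
        for x, y in samples:                    # U23-17: several training samples
            err = w * x - y
            loss += err * err                   # U23-15: difference as a real value
            grad += 2 * err * x
        if abs(prev - loss) < tol:              # U23-16: loss stopped changing
            return w
        w -= lr * grad / len(samples)           # gradient-descent adjustment
        prev = loss

w = train([(1.0, 2.0), (2.0, 4.0)])             # converges toward w = 2
```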
Specifically, following the example above, in U23-11 each single character or punctuation mark in the text is extracted as character feature information, and the word2vec model computes training text vectors from the character features.
Further, in U23-12, the word2vec model operates on the characters of the text in order, yielding a training text vector sequence composed of at least one training text vector, expressed in the form of text vectors.
Further, in U23-13, the training text vector sequence is first fed, in order, into the feed-forward neural network of the initial text network model; the hidden layer of the feed-forward neural network operates on the training text vector sequence, and its output layer outputs the feed-forward training text vectors;
That is, through its computation on the training text vector sequence, the feed-forward neural network reduces the dimensionality of the sequence and obtains the feed-forward training text vectors.
Next, the feed-forward training text vectors are fed into the LSTM neural network of the initial text network model, whose hidden layer operates on them so that its output layer outputs the character training text vectors; a character training sequence is obtained from the character training text vectors.
Preferably, the LSTM neural network has the classification categories BM, IM, BS, IS, and O, with the following rules:
BM denotes the first character of a drug archival named entity;
IM denotes a middle or final character of a drug archival named entity;
BS denotes the first character of a surgical archival named entity;
IS denotes a middle or final character of a surgical archival named entity;
O denotes a non-archival named entity;
Through its computation on the feed-forward training text vectors, the LSTM neural network produces character training text vectors that express, for each character feature, the probability of belonging to each classification category;
A character training text vector has five columns, representing the five classification categories;
Its rows represent the individual character features;
For any element of the character training text vector, the row of the element corresponds to a character feature and the column corresponds to a classification category;
The value of the element is the probability that the character feature of its row belongs to the classification category of its column;
Thus, the character training text vector expresses the probability of each character feature under each classification category.
Finally, the character training text vectors are fed into the CRF neural network of the initial text network model; the hidden layer of the CRF neural network performs a rationalization operation on them to obtain the CRF hidden text vectors, which express the rationalized probability of each character feature under each classification category;
The output layer of the CRF neural network obtains the maximum probability of each character feature in the CRF hidden text vector and records the classification category corresponding to each maximum; the maxima and their corresponding categories are then gathered into the prediction result and output;
Here, the rationalization operation is the process of obtaining, from the character training text vectors, the most probable classification category of each character feature and making the categories of the character features mutually consistent;
For example, suppose the most probable categories of two adjacent character features are found to be O followed by IM, or O followed by IS;
Given the meaning of the categories, such a situation is unreasonable (an "inside" tag cannot directly follow O); the rationalization operation therefore changes the most probable categories of the two character features to O and BM, or O and BS.
Further, in U23-14, the classification category corresponding to each element value of the prediction result is extracted; if the category is O, a predicted value O is generated;
if the category is BM or IM, a predicted value M is generated;
if the category is BS or IS, a predicted value S is generated;
The category corresponding to each element of the prediction result is extracted in turn, giving the predicted values,
and the predicted values are gathered to obtain the training annotation sequence.
Further, in U23-15, the custom loss function operates on the training annotation sequence and the standard annotation sequence to obtain the real value.
Further, in U23-16, the hidden layers of the feed-forward neural network, LSTM neural network, and CRF neural network of the initial text network model are adjusted by gradient descent according to the real value, until the real value between the training annotation sequence produced by the initial text network model and the standard annotation sequence no longer changes.
For example, suppose the standard result is "OOOOOOOOOOOOMMMOMMMOOOOOSSSSSS", i.e. 今(O)天(O)患(O)者(O)张(O)某(O)，(O)上(O)午(O)服(O)用(O)了(O)白(M)加(M)黑(M)和(O)退(M)烧(M)药(M)，(O)下(O)午(O)做(O)了(O)阑(S)尾(S)切(S)除(S)手(S)术(S);
The standard annotation sequence is then MMM, MMM, SSSSSS, i.e. 白(M)加(M)黑(M), 退(M)烧(M)药(M), 阑(S)尾(S)切(S)除(S)手(S)术(S);
Suppose the prediction result for the training text vector sequence is "OOOOOOOOOOOOMMMOMMMOOOOOOOSSSS", i.e. the same sentence but tagged 阑(O)尾(O)切(S)除(S)手(S)术(S) at the end; the training annotation sequence of this prediction is then MMM, MMM, SSSS, i.e. 白(M)加(M)黑(M), 退(M)烧(M)药(M), 切(S)除(S)手(S)术(S).
The custom loss function compares the training annotation sequence with the standard annotation sequence and converts their difference into a real value; for the training and standard annotation sequences of this example, the real value obtained is 2.
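One loss consistent with both worked examples is the number of positions at which the two annotation sequences differ. The patent does not spell the function out, so the definition below is an assumption for illustration.

```python
# Assumed concrete form of the custom loss of U23-15: count the
# positions at which the training and standard annotation sequences
# disagree (padding the shorter one with O). It reproduces both of
# the patent's worked examples.
def annotation_loss(pred, gold):
    n = max(len(pred), len(gold))
    pred, gold = pred.ljust(n, "O"), gold.ljust(n, "O")
    return sum(p != g for p, g in zip(pred, gold))
```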
Further, the computation of the LSTM neural network is as follows:
i_t = σ(W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i)
c_t = (1 - i_t) ⊙ c_{t-1} + i_t ⊙ tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c)
o_t = σ(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} c_{t-1} + b_o)
h_t = o_t ⊙ tanh(c_t)
where σ applies the sigmoid function to each element, ⊙ denotes element-wise multiplication, x_t is the input, and h_t is the output. All W, h, c, and b in the formulas are randomly initialized; feeding the feed-forward training text vectors into these formulas yields at least one character training text vector.
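A direct transcription of these formulas might look like this; note the coupled gate (the forget gate is 1 - i_t, as in the text), while the hidden size and the 0.1 initialization scale are arbitrary illustrative choices.

```python
import numpy as np

# Transcription of the LSTM step above, with random initialization of
# all W and zero biases (the text only says "randomly initialized").
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, b):
    i = sigmoid(W["xi"] @ x + W["hi"] @ h + W["ci"] @ c + b["i"])
    c_new = (1 - i) * c + i * np.tanh(W["xc"] @ x + W["hc"] @ h + b["c"])
    o = sigmoid(W["xo"] @ x + W["ho"] @ h + W["co"] @ c + b["o"])
    return o * np.tanh(c_new), c_new              # h_t, c_t

rng = np.random.RandomState(0)
d = 4                                             # illustrative hidden size
W = {k: 0.1 * rng.randn(d, d) for k in ["xi", "hi", "ci", "xc", "hc", "xo", "ho", "co"]}
b = {k: np.zeros(d) for k in ["i", "c", "o"]}
h, c = lstm_step(rng.randn(d), np.zeros(d), np.zeros(d), W, b)
```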
Further, the CRF neural network has a predictor formula built on the transition matrix of a Markov chain; the character training text vector is input into the predictor formula:
M = Σ_i P_{i, y_i} + Σ_i A_{y_i, y_{i+1}}
and the maximum of the predictor formula is solved, thereby performing the rationalization operation on the character training text vector and obtaining the CRF hidden text vector;
Here y = (y_1, y_2, …, y_n) is a candidate label sequence for the character training text vector, A_{y_i, y_{i+1}} is the probability of transferring from category y_i to category y_{i+1}, and M is the predicted value (score) of the predictor formula;
For an input character training text vector X = (x_1, x_2, …, x_n), the CRF neural network predicts a candidate sequence y = (y_1, y_2, …, y_n) from the character training text vector; the score of this prediction is as defined above, where P_{i, y_i} is the softmax output probability of label y_i at position i;
For example, if the candidate sequence the CRF neural network predicts from the character training text vector is O, O, IM, IM, O, O, then by the classification rules above the computed A_{2,3} (the O → IM transition) will be very small or even negative, so the predicted value M of the predictor formula will not be large; the CRF neural network therefore adjusts the character training text vector so that the candidate sequence predicted after adjustment becomes O, O, BM, IM, O, O; the predicted value of the predictor formula is then maximal, and the adjusted character training text vector is the CRF hidden text vector.
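The scoring and maximization described here can be illustrated with toy numbers in which the O → IM transition is strongly penalized; brute-force enumeration stands in for the CRF's actual decoding, and all emission and transition values below are made up for the example.

```python
import itertools

# Toy illustration of the predictor formula
#   M = sum_i P[i][y_i] + sum_i A[y_i][y_{i+1}]
# over a reduced label set.
LABELS = ["O", "BM", "IM"]

def score(P, A, y):
    m = sum(P[i][y[i]] for i in range(len(y)))              # emission part
    m += sum(A[y[i]][y[i + 1]] for i in range(len(y) - 1))  # transition part
    return m

def best_sequence(P, A):
    seqs = itertools.product(range(len(LABELS)), repeat=len(P))
    return max(seqs, key=lambda y: score(P, A, y))

P = [[1.0, 0.1, 0.2],     # position 0 prefers O
     [0.2, 0.8, 0.9]]     # position 1 slightly prefers IM
A = [[0.5, 0.3, -5.0],    # from O: the O -> IM transition is forbidden
     [0.0, 0.1, 0.4],     # from BM
     [0.1, 0.0, 0.3]]     # from IM
y = best_sequence(P, A)   # picks BM at position 1, despite IM's higher emission
```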
Optionally, the customized loss function may be implemented as a Keras model. A Keras model is functional: it has an input and an output, and the loss is some error function of the predicted value and the true value, that is, the real number value in this technical solution. Keras itself ships with many loss functions, such as MSE and cross-entropy, which can be called directly; to define a customized loss, the most natural approach is to copy one of Keras's built-in losses and rewrite it.
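As a minimal sketch of this pattern, with plain NumPy standing in for Keras tensors and illustrative function names not taken from the original:

```python
import numpy as np

def mse(y_true, y_pred):
    # Keras-style built-in loss: mean squared error over the last axis.
    return np.mean(np.square(y_pred - y_true), axis=-1)

def label_mismatch_loss(y_true, y_pred):
    # Customized loss in the same calling shape: count positions whose
    # arg-max class differs, i.e. the "real number value" of this solution.
    return np.sum(np.argmax(y_true, axis=-1) != np.argmax(y_pred, axis=-1))
```

For one-hot annotated sequences, comparing a predicted "OOSSS" with a standard "OOMSS" yields a real number value of 1, matching the example later in the text.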
Specifically, the step U3 includes:
U31: the archival named entity is input into the knowledge mapping; wherein a knowledge mapping (knowledge graph), also known as a knowledge domain map, is called knowledge domain visualization or knowledge domain mapping in library and information science. It describes the development process of knowledge and its structural relations with a variety of different graphs, using visualization techniques to describe knowledge resources and their carriers, and to mine, analyze, construct, draw and display knowledge and the connections between knowledge items.
In this step, a search server serves as the input terminal of the knowledge mapping; the search server is ElasticSearch (a search server based on Lucene), and the search server retrieves the knowledge mapping in the database based on keyword matching technology.
U32: several approximate archival named entities most similar to the archival named entity are obtained from the knowledge mapping;
In this step, not more than 10 approximate archival named entities may be obtained from the knowledge mapping.
U33: among the approximate archival named entities, the one having the longest common subsequence with the archival named entity is set as the standard name;
In this step, the LCS algorithm (longest common subsequence algorithm) is applied in turn to the archival named entity and each approximate archival named entity, obtaining the common subsequence between the archival named entity and each approximate archival named entity; the common subsequences are ranked, the approximate archival named entity having the longest common subsequence with the archival named entity is obtained, and that approximate archival named entity is set as the standard name.
Wherein, LCS is the abbreviation of Longest Common Subsequence. If a sequence is a subsequence of two or more known sequences and is the longest among all such subsequences, it is the longest common subsequence.
For example, for char x[] = "aabcd", the contiguous sequence "aabc" is a subsequence, and the ordered but non-contiguous "abc" is also a subsequence; that is, any sequence whose elements all appear in the given string in order is a subsequence. Comparing with char y[] = "12abcabcd" yields the longest common subsequence "aabcd".
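The LCS computation described above can be sketched with the classic dynamic-programming algorithm (an illustrative implementation, not the patent's own code):

```python
def lcs(a: str, b: str) -> str:
    # Classic dynamic programming: dp[i][j] holds an LCS of a[:i] and b[:j].
    m, n = len(a), len(b)
    dp = [[""] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + a[i]
            else:
                dp[i + 1][j + 1] = max(dp[i + 1][j], dp[i][j + 1], key=len)
    return dp[m][n]
```

For the example above, `lcs("aabcd", "12abcabcd")` returns "aabcd".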
Specifically, the step U32 includes:
U32-1: at least one character is extracted from the archival named entity as the archival subsequence;
U32-2: approximate archival named entities are retrieved in the knowledge mapping using the archival subsequence;
U32-3: if the number of approximate archival named entities retrieved is greater than a limit value, a further character is extracted from the archival named entity and appended to the archival subsequence, and U32-2 is repeated;
U32-4: if the number of approximate archival named entities retrieved is not greater than the limit value, or the characters in the archival subsequence are completely consistent with the characters in the archival named entity, the retrieval stops and the retrieved approximate archival named entities are obtained.
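Steps U32-1 to U32-4 can be sketched as a narrowing loop; here a plain list of candidate strings stands in for the knowledge mapping / ElasticSearch index, and `is_subsequence` is an assumed matching rule (both are illustrative, not the patent's implementation):

```python
def is_subsequence(sub: str, s: str) -> bool:
    # True if all characters of sub appear in s in order (not necessarily adjacent).
    it = iter(s)
    return all(ch in it for ch in sub)

def narrow_search(entity: str, candidates: list[str], limit: int = 10) -> list[str]:
    # Grow the archival subsequence one character at a time (U32-1..U32-4)
    # until the hit count is within the limit or the subsequence equals the entity.
    hits = candidates
    for k in range(1, len(entity) + 1):
        subseq = entity[:k]
        hits = [c for c in candidates if is_subsequence(subseq, c)]
        if len(hits) <= limit or subseq == entity:
            return hits
    return hits
```

With `limit=1` and candidates ["appendectomy", "append operation", "apple"], searching for "appendectomy" keeps extending the subsequence until only the exact match remains.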
Specifically, the case search method includes:
S1: retrieval information is received;
S2: the retrieval named entity in the retrieval information is identified;
S3: the retrieval named entity is input into the knowledge mapping, and several approximate archival named entities most similar to the retrieval named entity are obtained from the knowledge mapping;
Wherein, a search server serves as the input terminal of the knowledge mapping; the search server is ElasticSearch (a search server based on Lucene), and the search server retrieves the knowledge mapping in the database based on keyword matching technology;
S4: among the approximate archival named entities, the one having the longest common subsequence with the retrieval named entity is set as the standard named entity;
S5: standard cases having the standard named entity are retrieved according to the standard named entity.
Specifically, the step S2 includes:
S21: the real number retrieval vector corresponding to each character or punctuation mark in the retrieval information is obtained in turn;
In this step, each single character or punctuation mark in the retrieval information is extracted as character feature information, and the character feature information is processed with a word2vec model to obtain a real number retrieval vector, wherein each real number retrieval vector represents one character or punctuation mark.
S22: the real number retrieval vectors are arranged to obtain the retrieval vector sequence;
S23: the prediction result of the retrieval vector sequence is obtained using a mature retrieval network model;
Wherein, the mature retrieval network model is composed of a trained LSTM neural network, a CRF neural network and a feedforward neural network.
Preferably, in S23 the mature retrieval network model includes a drug mature retrieval network model and an operation mature retrieval network model; the drug mature retrieval network model and the operation mature retrieval network model are used respectively to obtain the drug prediction result and the operation prediction result of the retrieval vector sequence.
S24: the annotated sequence is obtained according to the prediction result;
In this step, the drug prediction result and the operation prediction result are superimposed to obtain the annotated sequence.
S25: the retrieval named entity is obtained according to the annotated sequence;
In this step, the characters or punctuation marks in the retrieval information are segmented according to the annotated sequence using the mark set, and the operation archival named entity and the drug archival named entity are obtained.
Specifically, each single character or punctuation mark in the retrieval information is extracted as character feature information, and the word2vec model computes on the character feature information to obtain a real number retrieval vector, wherein each real number retrieval vector represents one character or punctuation mark;
The real number retrieval vectors are obtained by computing on the character feature information with the word2vec model. Word2vec is an efficient algorithmic model that characterizes words as real-valued retrieval vectors; drawing on ideas from deep learning, it reduces, through training, the processing of text content to retrieval vector operations in a K-dimensional retrieval vector space, where similarity in the retrieval vector space can represent semantic similarity of the text;
The word2vec model computes in turn on the character feature information in the retrieval information, obtaining a retrieval vector sequence composed of at least one real number retrieval vector; the retrieval vector sequence is input into the mature retrieval network model, and the prediction result is obtained.
For example: the retrieval information is "白加黑和阑尾切除手术" ("Baijiahei and appendectomy", where Baijiahei is a drug name); the annotated sequence corresponding to this retrieval information is "MMMOSSSSSS", i.e. 白(M) 加(M) 黑(M) 和(O) 阑(S) 尾(S) 切(S) 除(S) 手(S) 术(S). Based on the MMM and SSSSSS runs in the annotated sequence, the drug archival named entity "白加黑" and the operation archival named entity "阑尾切除手术" are extracted from the retrieval information.
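The segmentation by annotated sequence can be sketched as follows (illustrative only): runs of identical tags in the annotated sequence delimit the entities, with M runs yielding drug archival named entities and S runs yielding operation archival named entities.

```python
from itertools import groupby

def extract_entities(text: str, tags: str) -> dict:
    # Segment the retrieval text by its annotated sequence: runs of M are
    # drug archival named entities, runs of S are operation archival named
    # entities, and O characters are skipped.
    assert len(text) == len(tags)
    out = {"M": [], "S": []}
    i = 0
    for tag, run in groupby(tags):
        n = len(list(run))
        if tag in out:
            out[tag].append(text[i:i + n])
        i += n
    return out
```

Applied to the example above, it recovers the drug entity 白加黑 and the operation entity 阑尾切除手术.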
Further, step S23 includes:
S23-01: the retrieval vector sequence is input into the feedforward neural network for dimensionality reduction, forming feedforward retrieval vectors;
S23-02: the feedforward retrieval vectors are input into the LSTM neural network to obtain character retrieval vectors;
S23-03: the character retrieval vectors are input into the CRF neural network to obtain the prediction result.
Preferably, the mark set is [O, S, M]: non-archival named entities are marked O, operation archival named entities are marked S, and drug archival named entities are marked M.
Preferably, in step S23, the mature retrieval network model is obtained by training through the following steps:
S23-11: the real number retrieval vector corresponding to each character or punctuation mark in the training retrieval information is obtained in turn; wherein the annotated sequence of the training retrieval information is a known standard annotated sequence, and the training retrieval information contains archival named entities;
S23-12: the real number retrieval vectors are arranged in turn to obtain the training retrieval vector sequence;
S23-13: the prediction result of the training retrieval vector sequence is obtained using an initial retrieval network model; wherein the initial retrieval network model is composed of an LSTM neural network, a CRF neural network and a feedforward neural network;
S23-14: the training annotated sequence is obtained according to the prediction result of the training retrieval vector sequence;
S23-15: the training annotated sequence is compared with the standard annotated sequence using the customized loss function, and the difference between the training annotated sequence and the standard annotated sequence is converted into a real number value. For example, if the training annotated sequence is "OOSSS" and the standard annotated sequence is "OOMSS", the real number value between them is 1.
S23-16: according to the real number value, the weights and biases of the hidden layers of the feedforward neural network, the LSTM neural network and the CRF neural network in the initial retrieval network model are adjusted by gradient descent, until the real number value no longer changes;
S23-17: steps S23-11 to S23-16 are repeated with multiple different pieces of training retrieval information, finally obtaining the mature retrieval network model.
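The stopping rule of step S23-16, iterating gradient descent until the real number value no longer changes, can be sketched on a stand-in quadratic loss (the objective, learning rate and tolerance here are illustrative assumptions, not the network's):

```python
def gradient_descent(w0: float, lr: float = 0.1, tol: float = 1e-12) -> float:
    # Minimal sketch of step S23-16: repeatedly adjust a weight by gradient
    # descent until the loss value no longer changes. The quadratic loss
    # (w - 3)^2 is a stand-in for the customized loss of the network.
    w, prev_loss = w0, float("inf")
    for _ in range(100_000):
        loss = (w - 3.0) ** 2
        if abs(prev_loss - loss) < tol:  # real number value no longer changing
            break
        prev_loss = loss
        w -= lr * 2.0 * (w - 3.0)        # w <- w - lr * dL/dw
    return w
```

Starting from w = 0, the loop converges to the minimizer w = 3 and then terminates because the loss stops changing.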
Specifically, based on the above example, in S23-11 each single character or punctuation mark in the retrieval information is extracted as character feature information, and the word2vec model computes on the character feature information to obtain the training retrieval vectors.
Further, in S23-12 the word2vec model computes in turn on the characters in the retrieval information, obtaining a training retrieval vector sequence composed of at least one training retrieval vector; the training retrieval vector sequence is expressed in the form of retrieval vectors.
Further, in S23-13, first, the training retrieval vector sequence is input in turn into the feedforward neural network of the initial retrieval network model, and the hidden layer of the feedforward neural network computes on the training retrieval vector sequence, so that the output layer of the feedforward neural network outputs the feedforward training retrieval vectors;
Wherein, the computation of the feedforward neural network on the training retrieval vector sequence reduces the dimensionality of the training retrieval vector sequence and obtains the feedforward training retrieval vectors.
Secondly, the feedforward training retrieval vectors are input into the LSTM neural network of the initial retrieval network model, and the hidden layer of the LSTM neural network computes on the feedforward training retrieval vectors, so that the output layer of the LSTM neural network outputs the character training retrieval vectors; the character training sequence is obtained from the character training retrieval vectors;
Preferably, the LSTM neural network has class categories, the class categories including BM, IM, BS, IS and O; the classification rules of the class categories include:
BM indicates the first character of a drug archival named entity;
IM indicates a middle or final character of a drug archival named entity;
BS indicates the first character of an operation archival named entity;
IS indicates a middle or final character of an operation archival named entity;
O indicates a non-archival named entity.
Through its computation on the feedforward training retrieval vectors, the LSTM neural network obtains the character training retrieval vectors, which express the probability that each item of character feature information belongs to each class category;
The character training retrieval vectors have five columns, representing the five class categories;
The rows of the character training retrieval vectors represent the items of character feature information;
For an element value in the character training retrieval vectors, its row corresponds to an item of character feature information and its column corresponds to a class category;
An element value in the character training retrieval vectors is the probability that the character feature information of its row belongs to the class category of its column;
Therefore, the character training retrieval vectors express the probability value of each item of character feature information in each class category.
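The row/column structure described above can be sketched as a softmax over per-character class scores (illustrative NumPy; in the actual model the raw scores come from the LSTM output layer):

```python
import numpy as np

LABELS = ["BM", "IM", "BS", "IS", "O"]  # the five class categories

def emission_matrix(scores: np.ndarray) -> np.ndarray:
    # One row per item of character feature information, one column per class
    # category; a softmax over each row makes entry (i, j) the probability
    # that character i belongs to class LABELS[j].
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

Each row of the result sums to 1, so it can be read directly as a per-character probability distribution over the five categories.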
Finally, the character training retrieval vectors are input into the CRF neural network of the initial retrieval network model, and the hidden layer of the CRF neural network applies a rationalization operation to the character training retrieval vectors to obtain the CRF hidden retrieval vectors; the CRF hidden retrieval vectors express, after rationalization, the probability value of each item of character feature information in each class category;
The output layer of the CRF neural network obtains the maximum probability value of each item of character feature information in the CRF hidden retrieval vectors and records the class category corresponding to each maximum probability value; the maximum probability values and their corresponding class categories are then summarized to form the prediction result, which is output;
Wherein, the rationalization operation obtains the maximum-probability class category of each item of character feature information based on the character training retrieval vectors, and is the process of rationalizing each item of character feature information;
For example, suppose the maximum-probability class categories of two adjacent items of character feature information are detected to be O followed by IM, or O followed by IS;
In terms of the concept of the class categories, such a case is unreasonable; therefore, through the rationalization operation, the maximum-probability class categories of these two items of character feature information become O and BM, or O and BS.
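One way to realize such a rationalization operation is transition-aware decoding, sketched here as a small Viterbi pass in which forbidden transitions such as O→IM carry a large negative score. The score values and matrix layout are illustrative assumptions, not the patent's parameters:

```python
import numpy as np

LABELS = ["BM", "IM", "BS", "IS", "O"]  # indices 0..4

def viterbi(emissions: np.ndarray, transitions: np.ndarray) -> list:
    # Pick the label sequence maximizing the sum of emission scores
    # p[i, y_i] and transition scores A[y_i, y_{i+1}] (the predictor formula).
    n, k = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        cand = score[:, None] + transitions + emissions[i][None, :]
        back[i] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for i in range(n - 1, 0, -1):
        path.append(int(back[i][path[-1]]))
    return [LABELS[j] for j in reversed(path)]
```

With O→IM blocked by a large negative transition score, a sequence whose raw per-character maxima would read O, O, IM, IM, O, O decodes instead to the rationalized O, O, BM, IM, O, O.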
Further, in S23-14, the class category corresponding to each element value in the prediction result is extracted; if the class category is O, a predicted value of O is generated;
If the class category is BM or IM, a predicted value of M is generated;
If the class category is BS or IS, a predicted value of S is generated;
The class category corresponding to each element in the prediction result is extracted in turn, and the predicted values are obtained in turn;
The predicted values are summarized to obtain the training annotated sequence.
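The collapse from class categories to the annotated sequence described in S23-14 can be sketched directly:

```python
def to_annotation(categories: list) -> str:
    # Collapse class categories into the annotated sequence:
    # BM/IM -> M (drug), BS/IS -> S (operation), O -> O.
    collapse = {"BM": "M", "IM": "M", "BS": "S", "IS": "S", "O": "O"}
    return "".join(collapse[c] for c in categories)
```

For example, the category sequence O, O, BM, IM, O collapses to the annotated sequence "OOMMO".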
Further, in S23-15, the customized loss function computes on the training annotated sequence and the standard annotated sequence to obtain the real number value;
Further, in S23-16, the hidden layers of the feedforward neural network, the LSTM neural network and the CRF neural network of the initial retrieval network model are adjusted by gradient descent according to the real number value, until the real number value between the training annotated sequence generated by the initial retrieval network model and the standard annotated sequence no longer changes;
For example: the standard annotated sequence is "MMMOSSSSSS", i.e. 白(M) 加(M) 黑(M) 和(O) 阑(S) 尾(S) 切(S) 除(S) 手(S) 术(S); and the prediction result of the training retrieval vector sequence is "MMMOSSSSSM", i.e. 白(M) 加(M) 黑(M) 和(O) 阑(S) 尾(S) 切(S) 除(S) 手(S) 术(M); the training annotated sequence of this prediction result therefore differs from the standard annotated sequence in its final position;
Therefore, the customized loss function compares the training annotated sequence with the standard annotated sequence and converts the difference between them into a real number value; for the training annotated sequence and standard annotated sequence of the above example, the real number value obtained is 1.
Further, the calculation formulas of the LSTM neural network are as follows:
i_t = σ(W_xi·X_t + W_hi·h_{t-1} + W_ci·c_{t-1} + b_i)
c_t = (1 - i_t) ⊙ c_{t-1} + i_t ⊙ tanh(W_xc·X_t + W_hc·h_{t-1} + b_c)
o_t = σ(W_xo·X_t + W_ho·h_{t-1} + W_co·c_{t-1} + b_o)
h_t = o_t ⊙ tanh(c_t)
Wherein, σ applies the sigmoid function to each element, ⊙ denotes element-wise multiplication, X_t is the input and h_t is the output. All W, h, c and b in the formulas are randomly initialized; inputting the feedforward training retrieval vectors into the formulas yields at least one character training retrieval vector.
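The four formulas above can be sketched as a single NumPy step function. The weights are randomly initialized by the caller, as the text states; the parameter names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, p):
    # One step of the LSTM variant given in the formulas: the forget gate is
    # coupled to the input gate as (1 - i_t). p holds the W matrices and b
    # biases, all randomly initialized.
    i = sigmoid(p["Wxi"] @ x + p["Whi"] @ h_prev + p["Wci"] @ c_prev + p["bi"])
    c = (1.0 - i) * c_prev + i * np.tanh(p["Wxc"] @ x + p["Whc"] @ h_prev + p["bc"])
    o = sigmoid(p["Wxo"] @ x + p["Who"] @ h_prev + p["Wco"] @ c_prev + p["bo"])
    h = o * np.tanh(c)
    return h, c
```

Because o_t lies in (0, 1) and tanh(c_t) in (-1, 1), every component of the output h_t is bounded in magnitude below 1.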
Further, the CRF neural network has a predictor formula based on the transition matrix of a Markov chain; the character training retrieval vectors are input into the predictor formula:
The maximum value of the predictor formula is solved, so as to apply a rationalization operation to the character training retrieval vectors and obtain the CRF hidden retrieval vectors;
Wherein, the predictor formula is M = Σ_i p_{i,y_i} + Σ_i A_{y_i,y_{i+1}}, where y is the label sequence of the character training retrieval vectors, y = (y_1, y_2, …, y_n), A_{y_i,y_{i+1}} denotes the probability of transferring from class category y_i to class category y_{i+1}, and M is the predicted value (score) of the predictor formula;
For an input character training retrieval vector sequence X = (x_1, x_2, …, x_n), the CRF neural network predicts a candidate label sequence y = (y_1, y_2, …, y_n) and scores the prediction accordingly, where p_{i,y_i} is the softmax output probability that position i takes label y_i, and A_{y_i,y_{i+1}} is the transition probability from y_i to y_{i+1};
For example, if the candidate sequence predicted by the CRF neural network from the character training retrieval vectors is O, O, IM, IM, O, O, then under the above classification rules the transition term A_{2,3} computed in the predictor formula (O followed directly by IM) will be very small or even negative, so the predicted value M of the predictor formula will not be large. The CRF neural network therefore adjusts the character training retrieval vectors so that the adjusted vectors predict the candidate sequence O, O, BM, IM, O, O; at this point the predicted value of the predictor formula is maximal, and the adjusted character training retrieval vectors are the CRF hidden retrieval vectors.
Optionally, the customized loss function may be implemented as a Keras model. A Keras model is functional: it has an input and an output, and the loss is some error function of the predicted value and the true value, that is, the real number value in this technical solution. Keras itself ships with many loss functions, such as MSE and cross-entropy, which can be called directly; to define a customized loss, the most natural approach is to copy one of Keras's built-in losses and rewrite it.
Specifically, step S3 includes:
S31: at least one character is extracted from the retrieval named entity as the retrieval subsequence;
S32: approximate archival named entities are retrieved in the knowledge mapping using the retrieval subsequence;
S33: if the number of approximate archival named entities retrieved is greater than a limit value, a further character is extracted from the retrieval named entity and appended to the retrieval subsequence, and S32 is repeated;
S34: if the number of approximate archival named entities retrieved is not greater than the limit value, or the characters in the retrieval subsequence are completely consistent with the characters in the retrieval named entity, the retrieval stops and the retrieved approximate archival named entities are obtained.
Embodiment two
Referring to Fig. 3, the case management device 1 of this embodiment comprises:
A profiling module 11, configured to replace the archival named entities in the text information of the unstructured part of a case with standard names, making the case into a standard case;
A retrieval module 12, configured to identify the retrieval named entity in the retrieval information, establish the standard named entity according to the retrieval named entity, and retrieve standard cases by the standard named entity.
Specifically, profiling module includes:
A text receiving unit 111, configured to receive the text information of the unstructured part of a case;
An entity recognition unit 112, configured to identify the archival named entities in the text information;
A name recognition unit 113, configured to identify the standard name corresponding to each archival named entity; wherein the name recognition unit has the knowledge mapping and the search server.
A name replacement unit 114, configured to replace the archival named entities in the text information with the corresponding standard names, making the standard case.
A structuring receiving unit 115, configured to receive the selection information of the structured part of the case output by the case management side.
Specifically, retrieval module 12 includes:
An information receiving unit 121, configured to receive the retrieval information;
A retrieval unit 122, configured to identify the retrieval named entity in the retrieval information;
A retrieval entity unit 123, configured to input the retrieval named entity into the knowledge mapping and obtain from the knowledge mapping several approximate archival named entities most similar to the retrieval named entity; wherein the retrieval entity unit has the knowledge mapping and the search server.
An entity setting unit 124, configured to set the approximate archival named entity having the longest common subsequence with the retrieval named entity as the standard named entity;
A case retrieval unit 125, configured to retrieve standard cases having the standard named entity according to the standard named entity.
This technical solution is based on artificial intelligence: the profiling module 11 performs semantic parsing on the text information to realize natural language processing of the text information and obtain the archival named entities; the standard names matching the archival named entities are then obtained through the knowledge mapping; finally, the archival named entities in the text information are replaced with the standard names;
Likewise, the retrieval module 12 performs semantic parsing on the retrieval information to realize natural language processing of the retrieval information and obtain the retrieval named entity; the standard named entity matching the retrieval named entity is then obtained through the knowledge mapping; finally, the standard cases having the standard named entity are retrieved according to the standard named entity;
Since the standard named entities and standard names are both obtained through the knowledge mapping, the matching degree and consistency of the standard named entities and standard names are guaranteed.
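The two modules of Embodiment two can be sketched as minimal classes; the class names, dictionaries and method signatures are illustrative assumptions, since the real units 111-125 wrap NLP models and the search server.

```python
class ProfilingModule:
    # Sketch of module 11: replaces archival named entities in the
    # unstructured text of a case with standard names.
    def __init__(self, standard_names: dict):
        self.standard_names = standard_names  # entity -> standard name

    def to_standard_case(self, text: str) -> str:
        for entity, name in self.standard_names.items():
            text = text.replace(entity, name)
        return text


class RetrievalModule:
    # Sketch of module 12: looks up standard cases indexed under a
    # standard named entity.
    def __init__(self, index: dict):
        self.index = index  # standard named entity -> standard cases

    def search(self, standard_name: str) -> list:
        return self.index.get(standard_name, [])
```

Keeping both modules keyed on the same standard names is what gives the matching-degree and consistency guarantee described above.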
Embodiment three:
This embodiment also provides a computer equipment 2, such as a smart phone, tablet computer, notebook computer, desktop computer, rack-mount server, blade server, tower server or cabinet server (including an independent server, or a server cluster composed of multiple servers) capable of executing programs. The computer equipment 2 of this embodiment includes at least, but is not limited to: a memory 21 and a processor 22 communicatively connected through a system bus, as shown in Fig. 3. It should be pointed out that Fig. 3 only shows the computer equipment 2 with these components, but it should be understood that not all the components shown are required, and more or fewer components may be implemented instead.
In this embodiment, the memory 21 (i.e. the readable storage medium) includes a flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, etc. In some embodiments, the memory 21 may be an internal storage unit of the computer equipment 2, such as the hard disk or memory of the computer equipment 2. In other embodiments, the memory 21 may also be an external storage device of the computer equipment 2, such as a plug-in hard disk, smart media card (Smart Media Card, SMC), secure digital (Secure Digital, SD) card or flash card (Flash Card) equipped on the computer equipment 2. Of course, the memory 21 may also include both the internal storage unit of the computer equipment 2 and its external storage device. In this embodiment, the memory 21 is generally used to store the operating system and various application software installed on the computer equipment 2, such as the program code of the case management device of Embodiment one. In addition, the memory 21 may also be used to temporarily store various data that has been output or is to be output.
The processor 22 may, in some embodiments, be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor or other data processing chip. The processor 22 is generally used to control the overall operation of the computer equipment 2. In this embodiment, the processor 22 is used to run the program code or process the data stored in the memory 21, for example to run the case management device, so as to realize the case management method of Embodiment one.
Embodiment four:
This embodiment also provides a computer readable storage medium, such as a flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, server, App store, etc., on which a computer program is stored; the corresponding function is realized when the program is executed by the processor 22. The computer readable storage medium of this embodiment is used to store the case management device, and when executed by the processor 22 it realizes the case management method of Embodiment one.
The serial numbers of the above embodiments of the invention are for description only and do not represent the advantages or disadvantages of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the above embodiment methods can be realized by means of software plus the necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
The above is only a preferred embodiment of the present invention and does not limit the scope of the invention; any equivalent structure or equivalent process transformation made using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the protection scope of the present invention.

Claims (10)

1. A case management method based on artificial intelligence, characterized by comprising:
a case profiling step: replacing the archival named entities in the text information of the unstructured part of a case with standard names, making the case into a standard case;
a case searching step: identifying the retrieval named entity in the retrieval information, setting a standard named entity according to the retrieval named entity, and retrieving standard cases by the standard named entity.
2. The case management method according to claim 1, characterized in that the case profiling step comprises:
U1: receiving the text information of the unstructured part of the case;
U2: identifying the archival named entities in the text information;
U3: identifying the standard name corresponding to each archival named entity;
U4: replacing the archival named entities in the text information with the corresponding standard names, making the standard case.
3. The case management method according to claim 2, characterized in that the step U2 comprises:
U21: obtaining in turn the real number text vector corresponding to each character or punctuation mark in the text information;
U22: arranging the real number text vectors to obtain the text vector sequence;
U23: obtaining the prediction result of the text vector sequence using a mature text network model;
U24: obtaining the annotated sequence according to the prediction result;
U25: obtaining the archival named entities according to the annotated sequence.
4. The case management method according to claim 2, characterized in that the step U3 comprises:
U31: inputting the archival named entity into the knowledge mapping;
U32: obtaining from the knowledge mapping several approximate archival named entities most similar to the archival named entity;
U33: setting the approximate archival named entity having the longest common subsequence with the archival named entity as the standard name.
5. The case management method according to claim 1, characterized in that the case searching step comprises:
S1: receiving the retrieval information;
S2: identifying the retrieval named entity in the retrieval information;
S3: inputting the retrieval named entity into the knowledge mapping, and obtaining from the knowledge mapping several approximate archival named entities most similar to the retrieval named entity;
S4: setting the approximate archival named entity having the longest common subsequence with the retrieval named entity as the standard named entity;
S5: retrieving standard cases having the standard named entity according to the standard named entity.
6. The case management method according to claim 5, characterized in that the step S2 comprises:
S21: obtaining in turn the real number retrieval vector corresponding to each character or punctuation mark in the retrieval information;
S22: arranging the real number retrieval vectors to obtain the retrieval vector sequence;
S23: obtaining the prediction result of the retrieval vector sequence using a mature retrieval network model;
S24: obtaining the annotated sequence according to the prediction result;
S25: obtaining the retrieval named entity according to the annotated sequence.
7. The case management method according to claim 5, wherein step S3 comprises:
S31: extracting at least one character from the retrieval named entity as a retrieval subsequence;
S32: retrieving approximate filing named entities in the knowledge graph using the retrieval subsequence;
S33: if the quantity of approximate filing named entities retrieved is greater than a limit value, extracting a further character from the retrieval named entity, loading it into the retrieval subsequence, and repeating S32;
S34: if the quantity of approximate filing named entities retrieved is not greater than the limit value, or the characters of the retrieval subsequence are completely identical to the characters of a filing named entity, stopping the retrieval and obtaining the approximate filing named entities retrieved.
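Steps S31 to S34 grow the retrieval subsequence one character at a time until the candidate set fits under the limit value. A rough sketch against an in-memory entity list; a real system would query the knowledge graph, and the limit value, the substring match, and the stop-on-full-query condition are assumptions for illustration:

```python
LIMIT = 3  # hypothetical limit value from S33/S34

def approximate_retrieve(entity, graph_entities):
    sub = entity[:1]                   # S31: start with one character
    while True:
        # S32: retrieve filing named entities matching the subsequence.
        hits = [e for e in graph_entities if sub in e]
        # S34: stop when the hit count is under the limit, or the
        # subsequence already covers the whole query entity.
        if len(hits) <= LIMIT or sub == entity:
            return hits
        # S33: too many hits - load one more character and retry.
        sub = entity[:len(sub) + 1]
```

With `graph_entities = ["cardiac arrest", "cardiomyopathy", "carditis", "cold", "cough"]`, querying `"cardio"` first tries `"c"` (five hits, over the limit), then `"ca"` (three hits), and stops with the three cardiac entities.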
8. A case management device, comprising:
a profiling module, configured to replace the filing named entities in the text information of the unstructured part of a case with standard names, and to make the case into a standard case; and
a retrieval module, configured to identify a retrieval named entity in retrieval information, to set a standard named entity according to the retrieval named entity, and to retrieve standard cases by the standard named entity.
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor, when executing the computer program, implements the steps of the case management method according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the case management method according to any one of claims 1 to 7.
CN201910374245.3A 2019-05-07 2019-05-07 Case management method and apparatus, computer device, and readable storage medium Pending CN110265098A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910374245.3A CN110265098A (en) 2019-05-07 2019-05-07 Case management method and apparatus, computer device, and readable storage medium


Publications (1)

Publication Number Publication Date
CN110265098A true CN110265098A (en) 2019-09-20

Family

ID=67914252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910374245.3A Pending CN110265098A (en) 2019-05-07 2019-05-07 Case management method and apparatus, computer device, and readable storage medium

Country Status (1)

Country Link
CN (1) CN110265098A (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111414393A (en) * 2020-03-26 2020-07-14 湖南科创信息技术股份有限公司 Semantic similar case retrieval method and equipment based on medical knowledge graph
CN111581976A (en) * 2020-03-27 2020-08-25 平安医疗健康管理股份有限公司 Method and apparatus for standardizing medical terms, computer device and storage medium
CN112309567A (en) * 2020-11-06 2021-02-02 浙江大学 Intelligent management system and method for clinical pharmacist workstation
CN112309519A (en) * 2020-10-26 2021-02-02 浙江大学 Electronic medical record medication structured processing system based on multiple models
CN112466472A (en) * 2021-02-03 2021-03-09 北京伯仲叔季科技有限公司 Case text information retrieval system
CN113393945A (en) * 2021-08-05 2021-09-14 中国医学科学院阜外医院 Clinical drug allergy management method, auxiliary device and system
CN113642562A (en) * 2021-08-30 2021-11-12 平安医疗健康管理股份有限公司 Data interpretation method, device and equipment based on image recognition and storage medium
CN114140810A (en) * 2022-01-30 2022-03-04 北京欧应信息技术有限公司 Method, apparatus and medium for structured recognition of documents
CN114550863A (en) * 2022-02-25 2022-05-27 首都医科大学附属北京安贞医院 Medical record generation method, device, system, equipment and storage medium
CN116821712A (en) * 2023-08-25 2023-09-29 中电科大数据研究院有限公司 Semantic matching method and device for unstructured text and knowledge graph

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018076243A1 (en) * 2016-10-27 2018-05-03 华为技术有限公司 Search method and device
CN109471895A (en) * 2018-10-29 2019-03-15 清华大学 The extraction of electronic health record phenotype, phenotype name authority method and system
CN109657062A (en) * 2018-12-24 2019-04-19 万达信息股份有限公司 A kind of electronic health record text resolution closed-loop policy based on big data technology


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Shanghai Eastsoft Carrier Microelectronics Co., Ltd.: "Example: Longest Common Subsequence", in "Eastsoft Carrier MCU Application C Programming" *
Qingdao Yinggu Education Technology Co., Ltd. et al.: "ElasticSearch", in "Big Data Development and Applications" *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111414393A (en) * 2020-03-26 2020-07-14 湖南科创信息技术股份有限公司 Semantic similar case retrieval method and equipment based on medical knowledge graph
CN111581976B (en) * 2020-03-27 2023-07-21 深圳平安医疗健康科技服务有限公司 Medical term standardization method, device, computer equipment and storage medium
CN111581976A (en) * 2020-03-27 2020-08-25 平安医疗健康管理股份有限公司 Method and apparatus for standardizing medical terms, computer device and storage medium
CN112309519A (en) * 2020-10-26 2021-02-02 浙江大学 Electronic medical record medication structured processing system based on multiple models
CN112309567A (en) * 2020-11-06 2021-02-02 浙江大学 Intelligent management system and method for clinical pharmacist workstation
CN112309567B (en) * 2020-11-06 2021-06-01 浙江大学 Intelligent management system and method for clinical pharmacist workstation
CN112466472A (en) * 2021-02-03 2021-03-09 北京伯仲叔季科技有限公司 Case text information retrieval system
CN113393945A (en) * 2021-08-05 2021-09-14 中国医学科学院阜外医院 Clinical drug allergy management method, auxiliary device and system
CN113642562A (en) * 2021-08-30 2021-11-12 平安医疗健康管理股份有限公司 Data interpretation method, device and equipment based on image recognition and storage medium
CN114140810A (en) * 2022-01-30 2022-03-04 北京欧应信息技术有限公司 Method, apparatus and medium for structured recognition of documents
CN114140810B (en) * 2022-01-30 2022-04-22 北京欧应信息技术有限公司 Method, apparatus and medium for structured recognition of documents
CN114550863A (en) * 2022-02-25 2022-05-27 首都医科大学附属北京安贞医院 Medical record generation method, device, system, equipment and storage medium
CN116821712A (en) * 2023-08-25 2023-09-29 中电科大数据研究院有限公司 Semantic matching method and device for unstructured text and knowledge graph
CN116821712B (en) * 2023-08-25 2023-12-19 中电科大数据研究院有限公司 Semantic matching method and device for unstructured text and knowledge graph

Similar Documents

Publication Publication Date Title
CN110265098A (en) Case management method and apparatus, computer device, and readable storage medium
CN111914054B (en) System and method for large-scale semantic indexing
CN109902145B (en) Attention mechanism-based entity relationship joint extraction method and system
Logeswaran et al. Sentence ordering and coherence modeling using recurrent neural networks
CN109325112B (en) Cross-language sentiment analysis method and apparatus based on emoji
Suissa et al. Text analysis using deep neural networks in digital humanities and information science
CN110502621A (en) Answering method, question and answer system, computer equipment and storage medium
CN113704429A (en) Semi-supervised learning-based intention identification method, device, equipment and medium
CN110321426B (en) Digest extraction method and device and computer equipment
CN108959566A (en) Medical text de-identification method and system based on Stacking ensemble learning
CN111222330B (en) Chinese event detection method and system
CN112287069A (en) Information retrieval method and device based on voice semantics and computer equipment
Qiu et al. Chinese Microblog Sentiment Detection Based on CNN‐BiGRU and Multihead Attention Mechanism
CN112668633B (en) Adaptive graph migration learning method based on fine granularity field
US20230394236A1 (en) Extracting content from freeform text samples into custom fields in a software application
Lei et al. An input information enhanced model for relation extraction
CN112699684A (en) Named entity recognition method and device, computer readable storage medium and processor
Perdana et al. Instance-based deep transfer learning on cross-domain image captioning
Li et al. [Retracted] Deep Unsupervised Hashing for Large‐Scale Cross‐Modal Retrieval Using Knowledge Distillation Model
CN116737947A (en) Entity relationship diagram construction method, device, equipment and storage medium
CN116796840A (en) Medical entity information extraction method, device, computer equipment and storage medium
Li et al. Named entity recognition in chinese electronic medical records based on the model of bidirectional long short-term memory with a conditional random field layer
CN114547313A (en) Resource type identification method and device
CN114513578A (en) Outbound method, device, computer equipment and storage medium
CN114328902A (en) Text labeling model construction method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination