WO2021179708A1 - Named entity recognition method, apparatus, computer device, and readable storage medium - Google Patents

Named entity recognition method, apparatus, computer device, and readable storage medium

Info

Publication number
WO2021179708A1
Authority
WO
WIPO (PCT)
Prior art keywords
entity
target
candidate
entities
supplementary
Prior art date
Application number
PCT/CN2020/134882
Other languages
English (en)
French (fr)
Inventor
顾大中
张圣
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2021179708A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/20 Natural language analysis
    • G06F 40/279 Recognition of textual entities
    • G06F 40/289 Phrasal analysis, e.g. finite state techniques or chunking
    • G06F 40/295 Named entity recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F 16/33 Querying
    • G06F 16/3331 Query processing
    • G06F 16/334 Query execution
    • G06F 16/3344 Query execution using natural language analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Definitions

  • This application relates to the field of natural language processing technology, and in particular to a named entity recognition method, device, computer equipment and readable storage medium.
  • Microbial information is very important in medical literature on viral and bacterial infections.
  • The type of microorganism is closely related to the treatment of the associated disease. Take the most common example, pneumonia: the diagnosis and treatment of bacterial pneumonia and viral pneumonia differ greatly, and pneumonia caused by different types of viruses also differs greatly. Accurately extracting microbiological information from the medical literature therefore has high business value.
  • The purpose of this application is to provide a named entity recognition method, apparatus, computer device, and readable storage medium, which solve the technical problem that existing dictionary-matching-based extraction of microbial entities cannot account for abbreviated entities or entities carrying specific information, resulting in low accuracy.
  • The present application provides a named entity recognition method, which includes: obtaining a medical text and preprocessing it to obtain a text to be processed; extracting microbial entities from the text to be processed based on a preset dictionary to obtain a target entity; generating multiple candidate abbreviated entities according to a first preset rule and the target entity, and using a first model to screen the candidate abbreviated entities to obtain the candidate abbreviated entity corresponding to the target entity as the target abbreviated entity; generating multiple candidate supplementary entities according to a second preset rule and the target entity, and using a second model to screen the candidate supplementary entities to obtain a target supplementary entity; and generating target data based on the target entity, the target abbreviated entity, and the target supplementary entity.
  • This application also provides a named entity recognition apparatus, including: an acquisition module for obtaining a medical text and preprocessing it to obtain a text to be processed; an extraction module for extracting microbial entities from the text to be processed based on a preset dictionary to obtain a target entity; a first processing module configured to generate multiple candidate abbreviated entities according to a first preset rule and the target entity, and to use a first model to screen the candidate abbreviated entities so that the candidate abbreviated entity corresponding to the target entity is obtained as the target abbreviated entity; a second processing module for generating multiple candidate supplementary entities according to a second preset rule and the target entity, and using a second model to screen the candidate supplementary entities to obtain a target supplementary entity; and a generating module for generating target data based on the target entity, the target abbreviated entity, and the target supplementary entity.
  • The present application also provides a computer device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, where the processor implements the following method when executing the computer program: obtaining a medical text and preprocessing it to obtain a text to be processed; extracting microbial entities from the text to be processed based on a preset dictionary to obtain a target entity; generating multiple candidate abbreviated entities according to a first preset rule and the target entity, and using a first model to screen the candidate abbreviated entities to obtain the candidate abbreviated entity corresponding to the target entity as the target abbreviated entity; generating multiple candidate supplementary entities according to a second preset rule and the target entity, and using a second model to screen the candidate supplementary entities to obtain a target supplementary entity; and generating target data based on the target entity, the target abbreviated entity, and the target supplementary entity.
  • The present application also provides a computer-readable storage medium, which includes multiple storage media, each storing a computer program, where the computer programs stored in the multiple storage media, when executed by a processor, jointly implement the following method: obtaining a medical text and preprocessing it to obtain a text to be processed; extracting microbial entities from the text to be processed based on a preset dictionary to obtain a target entity; generating multiple candidate abbreviated entities according to a first preset rule and the target entity, and using a first model to screen the candidate abbreviated entities to obtain the candidate abbreviated entity corresponding to the target entity as the target abbreviated entity; generating multiple candidate supplementary entities according to a second preset rule and the target entity, and using a second model to screen the candidate supplementary entities to obtain a target supplementary entity; and generating target data based on the target entity, the target abbreviated entity, and the target supplementary entity.
  • This application first obtains the target entity (namely, the full-name entity) through dictionary matching. It then generates candidate abbreviated entities and uses the first model to determine the target abbreviated entity, realizing the extraction of abbreviated entities when extracting entities from medical text. It then generates candidate supplementary entities and uses the second model to judge each one, realizing the extraction of entities containing specific information (strain number, strain type, etc.). Finally, all entities are collected. This solves the technical problem that dictionary-matching-based extraction of microbial entities cannot account for abbreviations or entities with specific information, resulting in low accuracy.
  • FIG. 1 is a flowchart of Embodiment 1 of the named entity identification method according to this application.
  • FIG. 2 is a flowchart of generating multiple candidate abbreviated entities according to the first preset rule and the target entity in the first embodiment of the named entity recognition method according to this application.
  • FIG. 3 is a flowchart of using the first model to screen the candidate abbreviated entities to obtain the candidate abbreviated entity corresponding to the target entity as the target abbreviated entity, in Embodiment 1 of the named entity recognition method of this application.
  • FIG. 4 is a flowchart of training the first model, performed before the first model is used to screen the candidate abbreviated entities to obtain the candidate abbreviated entity corresponding to the target entity as the target abbreviated entity, in Embodiment 1 of the named entity recognition method of this application.
  • FIG. 5 is a flowchart of generating multiple candidate supplementary entities according to a second preset rule and the target entity in the first embodiment of the named entity recognition method according to this application.
  • FIG. 6 is a flow chart of using the second model to screen the candidate supplementary entities to obtain the target supplementary entity in the first embodiment of the named entity recognition method according to this application.
  • FIG. 7 is a flowchart of training the second model before the candidate supplementary entity is screened by using the second model to obtain the target supplementary entity in the first embodiment of the named entity recognition method of this application.
  • FIG. 8 is a schematic diagram of the program modules of the second embodiment of the named entity recognition apparatus according to this application.
  • FIG. 9 is a schematic diagram of the hardware structure of the computer equipment in the third embodiment of the computer equipment of this application.
  • the technical solution of this application can be applied to the fields of artificial intelligence, smart city, digital medical care, blockchain and/or big data technology.
  • the data involved in this application such as medical text, entity, and/or target data, can be stored in a database, or can be stored in a blockchain, such as distributed storage through a blockchain, which is not limited in this application.
  • The named entity recognition method, apparatus, computer device, and readable storage medium provided in this application are applicable to this field, and provide a named entity recognition method based on the acquisition module, the extraction module, the first processing module, the second processing module, and the generating module.
  • This application obtains medical text through the acquisition module and performs preprocessing (specifically, normalization and lemmatization, as well as the elimination of singular/plural forms, tenses, etc.), and uses the extraction module to extract entities from the preprocessed medical text based on a preset dictionary, obtaining target entities consistent with the preset dictionary. Then, differing from the prior art, the first processing module generates multiple candidate abbreviated entities and the first model screens out those consistent with the target entity as target abbreviated entities.
  • The second processing module generates multiple candidate supplementary entities and uses the second model to screen them, obtaining target supplementary entities. Finally, the target entity, target abbreviated entity, and target supplementary entity are collected through the generating module to obtain the target data.
  • This autonomous extraction solves the technical problem that existing dictionary-matching-based extraction of microbial entities cannot account for abbreviations or entities with specific information, resulting in low accuracy.
  • The named entity recognition method of this embodiment is applied on the server side.
  • This application can be applied in smart medical scenarios to promote the construction of smart cities, and includes the following steps.
  • S100 Obtain a medical text, preprocess the medical text, and obtain a text to be processed.
  • The preprocessing of the medical text in this solution includes, but is not limited to, normalization and lemmatization, as well as the elimination of singular/plural forms, tenses, etc.
  • Normalization can map data to the range 0 to 1 and also facilitates comparing and weighting indicators with different units or scales; lemmatization removes a word's affixes to extract its stem; singular/plural forms, tenses, etc. are eliminated by removing meaningless words such as "a" and "the".
  • Other natural language preprocessing techniques can also be used here, so as to reduce interference with the dictionary-based extraction in the subsequent step S200.
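  • The preprocessing above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the stopword list and the tiny lemma table are hypothetical stand-ins for a real lemmatizer, and tokens containing digits (strain codes) are deliberately left untouched.

```python
import re

# Hypothetical minimal preprocessing: stopword removal plus a tiny lemma
# table standing in for real lemmatization (a production system would use
# an NLP library). Tokens with digits (strain codes) are kept unchanged.
STOPWORDS = {"a", "an", "the"}
LEMMA = {"caused": "cause", "viruses": "virus", "bacteria": "bacterium"}

def preprocess(text: str) -> str:
    tokens = re.findall(r"[A-Za-z0-9]+", text)
    kept = []
    for tok in tokens:
        low = tok.lower()
        if low in STOPWORDS:
            continue  # drop meaningless function words
        if any(c.isdigit() for c in tok):
            kept.append(tok)  # keep strain codes like "AU513B" intact
        else:
            kept.append(LEMMA.get(low, tok))
    return " ".join(kept)
```

The cleaned string is what step S200 would then scan against the preset dictionary.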
  • S200 Perform microbial entity extraction on the to-be-processed text based on a preset dictionary to obtain a target entity.
  • Extraction based on the preset dictionary means directly filtering out the entity data in the text that matches the preset dictionary.
  • As an example, take the sentence "Lactobacillus AU513B" can cause pneumonia: if the dictionary contains only the word "Lactobacillus", then only "Lactobacillus" is extracted in this step, not "Lactobacillus AU513B". It should be noted that because this step relies directly on the dictionary, only the full name of an entity can be obtained; its abbreviation cannot.
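  • The dictionary filtering of S200 can be sketched as a whole-word scan, longer terms first so a multi-word term beats its own prefix. The word-boundary regex and the example dictionary are illustrative assumptions, not the patent's implementation.

```python
import re

def extract_entities(text: str, dictionary: set) -> list:
    """Return dictionary terms occurring in the text as whole words,
    longest terms first so a multi-word term beats its own prefix."""
    found = []
    for term in sorted(dictionary, key=len, reverse=True):
        if re.search(r"\b" + re.escape(term) + r"\b", text):
            found.append(term)
    return found
```

On the patent's example, a dictionary holding only the full name yields only the full name, never "Lactobacillus AU513B"; that gap is exactly what steps S300-S400 fill.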
  • S300 Generate multiple candidate abbreviated entities according to the first preset rule and the target entity, and use the first model to screen the candidate abbreviated entities to obtain the candidate abbreviated entity corresponding to the target entity as the target abbreviated entity.
  • The above first model includes two CharCNN networks processed in parallel: one receives the target entity data, and the other receives the candidate abbreviated entities one by one. The outputs of the two CharCNN networks are fed into a fully connected layer, which determines whether the input candidate abbreviated entity is the abbreviation of the target entity.
  • the multiple candidate abbreviated entities are generated according to the first preset rule and the target entity, referring to FIG. 2, including the following.
  • S311 Obtain a target entity, and extract a string of a preset length according to the target entity.
  • The string of preset length is one, two, or three letters, that is, one, two, or three characters.
  • The abbreviation of a microbial entity is generally formed from 1-3 letters of the full name taken in order, but which letters are taken is unpredictable. Therefore, this solution enumerates all possibilities to generate every entity abbreviation that may correspond to the target entity.
  • Microbial abbreviations are generally composed of 1-3 letters of the full name in order, with a "." added at the end. Therefore, in the above embodiment, the preset character is ".": the preset character is appended to the end of each serialized string to obtain all candidate abbreviated entities.
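  • The enumeration of S311-S312 can be sketched with itertools: every in-order subsequence of one to three letters of the full name, "."-terminated. Note the candidate count grows combinatorially with name length, so a real system would likely prune (such pruning is an assumption on my part, not stated in the patent).

```python
from itertools import combinations

def candidate_abbreviations(full_name: str, max_len: int = 3) -> set:
    """All 1..max_len letter subsequences of the full name, taken in
    order, with the preset character "." appended to each candidate."""
    letters = [c for c in full_name if c.isalpha()]
    cands = set()
    for n in range(1, max_len + 1):
        for combo in combinations(letters, n):  # preserves letter order
            cands.add("".join(combo) + ".")
    return cands
```

For a 3-letter name this yields 7 candidates; "Lactobacillus" yields several hundred, which the first model then screens one by one.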
  • The first model is used to screen the candidate abbreviated entities to obtain the candidate abbreviated entity corresponding to the target entity as the target abbreviated entity. Refer to FIG. 3, which includes the following.
  • S321 Obtain any candidate abbreviated entity, input the candidate abbreviated entity and the target entity into the CharCNN network at the same time, and obtain a first vector and a second vector respectively corresponding to the candidate abbreviated entity and the target entity.
  • The first model has two inputs: one is any candidate abbreviated entity, the other is the target entity. The output is "yes" or "no", indicating whether the input candidate abbreviated entity matches the target entity.
  • To process the candidate abbreviated entity and the target entity simultaneously, two CharCNN networks with the same structure are set up.
  • A CharCNN network is a character-level convolutional neural network, used here to extract the character-shape structure of the candidate abbreviated entity and of the target entity separately.
  • The above-mentioned fully connected layer implements the binary classification, outputting "yes" or "no".
  • If the judgment result is no, the candidate abbreviated entity does not match the target entity, and another candidate abbreviated entity must be taken and steps S321-S322 repeated.
  • If the judgment result is yes, the candidate abbreviated entity matches the target entity.
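  • The patent's first model is a trained twin CharCNN followed by a fully connected classifier, and its trained weights are of course not given here. As an untrained stand-in, the character-level encoding front end shared by both branches can be sketched with NumPy; the alphabet, filter count, and maximum length below are my assumptions.

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789."
CHAR_IDX = {c: i for i, c in enumerate(ALPHABET)}

def one_hot(text: str, max_len: int = 32) -> np.ndarray:
    """Character-level one-hot matrix, padded/truncated to max_len rows."""
    mat = np.zeros((max_len, len(ALPHABET)))
    for i, c in enumerate(text.lower()[:max_len]):
        if c in CHAR_IDX:
            mat[i, CHAR_IDX[c]] = 1.0
    return mat

def char_cnn_encode(text: str, filters: np.ndarray, width: int = 3) -> np.ndarray:
    """One convolution + global-max-pool pass: each filter of shape
    (width, |alphabet|) slides over the one-hot matrix, and its maximum
    response becomes one component of the fixed-size character vector."""
    x = one_hot(text)
    feats = []
    for f in filters:
        responses = [float(np.sum(x[i:i + width] * f))
                     for i in range(x.shape[0] - width + 1)]
        feats.append(max(responses))
    return np.array(feats)
```

Running both the target entity and a candidate abbreviation through this encoder yields the first and second vectors of S321; the real model splices them and classifies with a trained fully connected layer.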
  • Before the first model is used to obtain the target abbreviated entity, it is trained. Refer to FIG. 4, which includes the following.
  • S331 Obtain training samples, where the training samples include multiple sample full name-abbreviation pairs, and each sample full name-abbreviation pair corresponds to a sample label.
  • The training samples can be generated manually, including both reasonable and unreasonable full name-abbreviation pairs, with each pair labeled; they can also be obtained directly from a database, or generated autonomously by the model.
  • S332 Obtain a sample full name-abbreviation pair, input the sample full name and the sample abbreviation into the CharCNN networks at the same time, and obtain a first processing vector and a second processing vector corresponding to the sample full name and the sample abbreviation respectively.
  • S333 Splice the first processing vector and the second processing vector, and use the fully connected layer for classification to obtain a sample judgment result.
  • S334 Compare the sample judgment result with the sample label, adjust the first model until training is completed, and obtain the trained first model.
  • Steps S332-S333 of the above training process are the same as steps S321-S322 of the processing process, and will not be repeated here.
  • The training samples enable the first model to learn autonomously.
  • The processing procedure of S321-S323 overcomes the prior-art limitation that dictionary matching cannot account for abbreviations.
  • The candidate abbreviated entities are generated through steps S311-S312 and screened with the first model in steps S321-S323 to obtain those matching the target entity.
  • The extraction of abbreviated entities can thus be completed autonomously, which further improves the accuracy of the entity extraction results.
  • S400 Generate multiple candidate supplementary entities according to a second preset rule and the target entity, and use a second model to screen the candidate supplementary entities to obtain a target supplementary entity.
  • The candidate supplementary entity is obtained by expanding the boundary of the target entity. Since some microbial entities contain specific information (such as strain information, for example "Lactobacillus AU513B"), entities that may contain specific information are obtained in this way as candidate supplementary entities, which are then judged.
  • a plurality of candidate supplementary entities are generated according to the second preset rule and the target entity, referring to FIG. 5, including the following.
  • S411 Obtain a target entity, and determine whether the position of the target entity is at the end of the sentence.
  • The candidate supplementary entity is an extension of the target entity.
  • A microbial entity with specific information is arranged in sequence, so the extension proceeds backwards from the position of the target entity. If the target entity is already at the end of the sentence, it cannot be extended backwards and there is no candidate supplementary entity. If the target entity is in the middle or at the head of the sentence, combining it with the adjacent following word may yield an entity with specific information.
  • As an example, take the sentence "Lactobacillus AU513B can cause pneumonia".
  • "Lactobacillus" is extracted in the previous steps, and it is judged whether "Lactobacillus" is at the end of the sentence. If it is, we assume there is no further strain information and do not expand "Lactobacillus" at all. If "Lactobacillus" is not at the end of the sentence, the word after it is included in the candidate supplementary entity, yielding "Lactobacillus AU513B". The following steps then judge whether "Lactobacillus AU513B" is a target supplementary entity (that is, whether it is a reasonable microbial entity).
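  • The end-of-sentence check and one-word expansion of S411 can be sketched for the single-token case (multi-word target entities would need span matching, which this sketch leaves out).

```python
def candidate_supplements(sentence: str, target: str) -> list:
    """If the target entity is not the last word of the sentence, splice
    it with the adjacent next word as a candidate supplementary entity."""
    words = sentence.rstrip(".").split()
    cands = []
    for i, w in enumerate(words):
        if w == target and i + 1 < len(words):
            cands.append(target + " " + words[i + 1])
    return cands
```

A sentence-final target entity produces no candidates, matching the rule that it cannot be extended backwards.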
  • The candidate supplementary entities are obtained according to the position of the target entity in the sentence, but a candidate supplementary entity may not actually be an entity with specific information consistent with the target entity. Therefore, the candidate supplementary entities need to be judged one by one.
  • the second model is used to screen the candidate supplementary entities to obtain the target supplementary entity. Referring to FIG. 6, the following steps are included.
  • S421 Obtain any candidate supplementary entity, and use the CharCNN layer to process the candidate supplementary entity to obtain a feature vector corresponding to the candidate supplementary entity.
  • The CharCNN layer captures the glyph features of the string and converts them into a "glyph vector". For example, many strain names are characterized by a combination of uppercase letters and numbers.
  • The CharCNN layer includes a character encoding layer and a convolution-pooling layer. Because the model's input is the one-hot representation vector of the characters, character encoding and convolution are required first.
  • The convolution-pooling layer is a 9-layer neural network with 6 convolutional layers and 3 fully connected layers; two dropout layers are inserted between the three fully connected layers for model regularization.
  • The CharCNN layer can thus be used to identify the characteristics of the candidate supplementary entity.
  • S422 Synchronously use the position coding layer to process the candidate supplementary entity to obtain a position vector corresponding to the candidate supplementary entity.
  • The second model has two inputs: one is the candidate supplementary entity (from which the feature vector is obtained), and the other is the range of the extended string (from which the position vector is obtained). As an example, in "Lactobacillus AU513B", the string at positions 0-12 is the original dictionary-extraction result (i.e., Lactobacillus), and the string at positions 14-19 is our expansion (i.e., AU513B), so we take the two numbers 12 and 14 as the second input.
  • The position coding layer obtains the position information of the candidate supplementary entity and, specifically, converts it into a vector according to a preset rule. More specifically, step S422, synchronously using the position coding layer to process the candidate supplementary entity to obtain the position vector corresponding to the candidate supplementary entity, includes the following.
  • S422-1 Obtain the candidate supplementary entity, and calculate length data of the candidate supplementary entity.
  • S422-2 Establish a target vector according to the length data and a third preset rule, as a position vector corresponding to the candidate supplementary entity.
  • The third preset rule is: in the target vector, the value at positions corresponding to the target entity string is 1, the value at positions corresponding to the extended string (that is, the string of the next word adjacent to the target entity) is 0, and the value of the blank part in the middle is 2.
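  • The third preset rule maps each character position of the candidate to 1 (target entity), 0 (extension), or 2 (the separating blank). A direct sketch, assuming the target entity is the leading target_len characters of the candidate:

```python
def position_vector(candidate: str, target_len: int) -> list:
    """Third preset rule: 1 for characters of the target entity string,
    0 for characters of the extended string, 2 for the blank between."""
    vec = []
    for i, ch in enumerate(candidate):
        if i < target_len:
            vec.append(1)   # dictionary-extracted target entity
        elif ch == " ":
            vec.append(2)   # blank between target and extension
        else:
            vec.append(0)   # extended (next-word) string
    return vec
```

On the running example, "Lactobacillus AU513B" with target length 13 encodes as thirteen 1s, one 2, and six 0s.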
  • S423 Combine the feature vector and the position vector and input the fully connected layer for processing to obtain a classification result.
  • the classification result includes "Yes” or "No".
  • the judgment result is yes, it means that the extended string matches the target entity, that is, the candidate supplementary entity is a target entity with specific information. If the judgment result is no, it means that the candidate supplementary entity does not match the target entity, and there is no extended entity here.
  • Before the second model is used, it is trained; referring to FIG. 7, the training includes the following.
  • S431 Obtain training samples, where the samples include multiple sample entities, the sample entities correspond to multiple associated entities, and each associated entity includes a sample label.
  • As an example, the sample entity is "Lactobacillus".
  • The associated entities are "Lactobacillus AU513B" (sample label Yes, i.e., a reasonable sample supplementary entity corresponding to the sample entity) and "Lactobacillus can" (sample label No, i.e., an unreasonable sample supplementary entity corresponding to the sample entity).
  • S432 Obtain any associated entity based on the training sample, and use the CharCNN layer to process the associated entity to obtain a first vector.
  • S433 Synchronously use the position coding layer to process the associated entity to obtain a second vector.
  • S434 Combine the first vector and the second vector and input the fully connected layer for processing to obtain a sample classification result.
  • Steps S432-S434 of the above training process are the same as steps S421-S423 of the processing process, and will not be repeated here.
  • S435 Compare the sample classification result with the sample label corresponding to the associated entity, and adjust the parameters of the second model until training is completed, obtaining the trained second model.
  • Through the training samples, the second model learns the feature vectors and position vectors of reasonable microbial references, so that it learns to judge any input, improving the accuracy of the obtained target supplementary entities.
  • S500 Generate target data based on the target entity, the target abbreviated entity, and the target supplementary entity.
  • the target entity, target abbreviated entity, and target supplementary entity are combined to obtain the final target data.
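  • Collecting the three kinds of entities into the final target data can be as simple as the following; the dict layout is my assumption, since the patent does not fix an output format.

```python
def build_target_data(target: str, abbreviations, supplements) -> dict:
    """Collect the full-name entity with its screened abbreviated and
    supplementary entities into one target-data record."""
    return {
        "full_name": target,
        "abbreviations": sorted(abbreviations),
        "supplements": sorted(supplements),
    }
```

Such records could then be stored in a database or, as the patent notes, on a blockchain.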
  • The abbreviated entities and supplementary entities extend the dictionary-matched entities, which solves the technical problem that entity extraction based on dictionary matching of microorganisms cannot account for abbreviations or entities with specific information and therefore has low accuracy.
  • In this embodiment, the full name of the microbial entity is obtained from the text to be processed through dictionary matching; candidate abbreviated entities are then generated according to the first preset rule, and the first model judges the degree of matching between each candidate abbreviated entity and the target entity to obtain the target abbreviated entity, realizing the extraction of the abbreviation data corresponding to microbial entities in the process of extracting them from medical text; candidate supplementary entities are then obtained according to the second preset rule, and the second model judges each candidate supplementary entity to obtain the target supplementary entity.
  • the above-mentioned target entity, target abbreviated entity, and target supplementary entity can be uploaded to the blockchain for subsequent use as reference samples or training samples. Uploading to the blockchain can ensure its security and fairness and transparency to users.
  • the blockchain referred to in this application is a new application mode of computer technology such as distributed data storage, point-to-point transmission, consensus mechanism, and encryption algorithm.
  • The blockchain is essentially a decentralized database: a chain of data blocks linked by cryptographic methods. Each data block contains a batch of network transaction information, used to verify the validity of the information (anti-counterfeiting) and to generate the next block.
  • the blockchain can include the underlying platform of the blockchain, the platform product service layer, and the application service layer.
  • a named entity recognition device 6 of this embodiment includes: an acquisition module 61, an extraction module 62, a first processing module 63, a second processing module 64, and a generating module 65.
  • the obtaining module 61 is configured to obtain medical text, preprocess the medical text, and obtain the text to be processed.
  • the extraction module 62 is configured to extract microbial entities from the text to be processed based on a preset dictionary to obtain a target entity.
  • the first processing module 63 is configured to generate multiple candidate abbreviation entities according to the first preset rule and the target entity, and use the first model to screen the candidate abbreviation entities to obtain candidate abbreviations corresponding to the target entity Entity, as the target abbreviated entity.
  • the first processing module 63 includes the following.
  • the first processing unit 631 is configured to obtain a target entity, extract a string of a preset length according to the entity; serialize the string and add a preset character to obtain a candidate abbreviated entity corresponding to the target entity .
  • The second processing unit 632 is configured to: obtain any candidate abbreviated entity, input the candidate abbreviated entity and the target entity into the CharCNN networks at the same time, and obtain a first vector and a second vector corresponding to the candidate abbreviated entity and the target entity respectively; after the first vector and the second vector are spliced, use the fully connected layer for classification to obtain a judgment result; when the judgment result is no, obtain another candidate abbreviated entity; when the judgment result is yes, take the candidate abbreviated entity as the target abbreviated entity.
  • the second processing module 64 is configured to generate multiple candidate supplementary entities according to the second preset rule and the target entity, and use the second model to screen the candidate supplementary entities to obtain the target supplementary entity.
  • the second processing module 64 includes: a third processing unit 641, configured to obtain a target entity and determine whether the position of the target entity is at the end of the sentence; when the position of the target entity is not at the end of the sentence , Acquiring the next word adjacent to the target entity, and splicing the target entity with the next word adjacent to it as a candidate supplementary entity.
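The boundary-extension rule handled by unit 641 can be sketched as follows (a sketch assuming whitespace tokenization and an exact token match; `candidate_supplement` is a hypothetical helper name):

```python
def candidate_supplement(sentence, entity):
    """Second preset rule: if the target entity is not at the end of its
    sentence, splice it with the adjacent next word as a candidate
    supplementary entity; otherwise there is no candidate."""
    tokens = sentence.rstrip(".").split()
    for i, tok in enumerate(tokens):
        if tok == entity:
            if i == len(tokens) - 1:
                return None  # entity at sentence end: no extension possible
            return entity + " " + tokens[i + 1]
    return None
```

For "Lactobacillus AU513B can cause pneumonia" with target entity "Lactobacillus", this produces the candidate "Lactobacillus AU513B", which the second model then judges.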
  • the fourth processing unit 642 is configured to obtain any candidate supplementary entity and process it with the CharCNN layer to obtain the feature vector corresponding to the candidate supplementary entity; the position coding layer synchronously processes the candidate supplementary entity to obtain the corresponding position vector; the feature vector and the position vector are spliced and input to a fully connected layer to obtain a classification result; when the classification result is no, another candidate supplementary entity is obtained; when the classification result is yes, the candidate supplementary entity is acquired as a target supplementary entity.
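The position coding layer's input can be built per the third preset rule given in the Description: positions of the dictionary-matched entity get value 1, the separating blank gets 2, and the extension string gets 0. A minimal sketch, assuming a single-space separator:

```python
def position_vector(entity, extension):
    """Third preset rule: 1 for characters of the original dictionary-matched
    entity, 2 for the separating blank, 0 for characters of the extension."""
    return [1] * len(entity) + [2] + [0] * len(extension)
```

For "Lactobacillus" extended by "AU513B" this gives a length-20 vector of thirteen 1s, one 2, and six 0s, matching the character ranges 0-12 (entity) and 14-19 (extension) used as the model's second input.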
  • the generating module 65 is configured to generate target data based on the target entity, the target abbreviated entity, and the target supplementary entity.
  • the medical text is obtained through the acquisition module and preprocessed to reduce the impact on the subsequent entity extraction process.
  • the extraction module is used to perform entity extraction based on the preprocessed medical text and the preset dictionary, obtaining target entities consistent with the preset dictionary; the first processing module then generates multiple candidate abbreviated entities and uses the first model to screen out those matching the target entity as target abbreviated entities.
  • the second processing module then generates multiple candidate supplementary entities, and the second model is used to filter them to obtain the target supplementary entities.
  • the generating module combines the target entity, the target abbreviated entity, and the target supplementary entity to generate the target data. Unlike the prior art, which performs entity extraction by dictionary matching alone, this solves the technical problem that existing dictionary-matching-based microbial entity extraction cannot account for abbreviations or entities carrying specific information, resulting in low accuracy.
  • during processing by the first processing module, the first processing unit generates candidate abbreviated entities based on the preset rule and the target entity and then screens each candidate abbreviated entity, which is easy to implement and yields highly accurate results.
  • the third processing unit generates candidate supplementary entities based on the preset rule and the target entity and then judges each candidate supplementary entity, further improving the accuracy of the extraction results and further reducing the omission of abbreviated entities and entities carrying specific information during extraction, ensuring that all entities are extracted in full from the medical text and improving the completeness and comprehensiveness of entity extraction.
  • the present application also provides a computer device 7, which may include multiple computer devices.
  • the components of the named entity recognition apparatus 6 in the second embodiment can be dispersed in different computer devices 7.
  • the computer device 7 may be a smartphone, tablet, laptop, desktop computer, rack server, blade server, tower server, or cabinet server (including an independent server, or a server cluster composed of multiple servers) that executes the program, and so on.
  • the computer device of this embodiment at least includes, but is not limited to: a memory, a processor, and a computer program stored in the memory and capable of running on the processor.
  • the processor implements some or all of the steps of the above method when executing the computer program.
  • the computer equipment may also include a network interface and/or a named entity recognition device.
  • FIG. 9 only shows a computer device with components, but it should be understood that it is not required to implement all the components shown, and more or fewer components may be implemented instead.
  • the memory 71 includes at least one type of computer-readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), random access memory ( RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disks, optical disks, etc.
  • the memory 71 may be an internal storage unit of a computer device, such as a hard disk or memory of the computer device.
  • the memory 71 may also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash memory card (Flash Card) equipped on the computer device.
  • the memory 71 may also include both an internal storage unit of the computer device and an external storage device thereof.
  • the memory 71 is generally used to store an operating system and various application software installed in a computer device, such as the program code of the named entity recognition apparatus 6 in the first embodiment, and so on.
  • the memory 71 can also be used to temporarily store various types of data that have been output or will be output.
  • the processor 72 may be a central processing unit (Central Processing Unit, CPU), a controller, a microcontroller, a microprocessor, or other data processing chips in some embodiments.
  • the processor 72 is generally used to control the overall operation of the computer equipment.
  • the processor 72 is configured to run the program code or process data stored in the memory 71, for example, to run a named entity recognition device, so as to implement the named entity recognition method of the first embodiment.
  • the network interface 73 may include a wireless network interface or a wired network interface, and the network interface 73 is usually used to establish a communication connection between the computer device 7 and other computer devices 7.
  • the network interface 73 is used to connect the computer device 7 with an external terminal through a network, and establish a data transmission channel and a communication connection between the computer device 7 and the external terminal.
  • the network may be an intranet (Intranet), the Internet (Internet), the Global System for Mobile communication (GSM), Wideband Code Division Multiple Access (WCDMA), a 4G network, a 5G network, Bluetooth, Wi-Fi, or other wireless or wired networks.
  • FIG. 9 only shows the computer device 7 with components 71-73, but it should be understood that it is not required to implement all the components shown, and more or fewer components may be implemented instead.
  • the named entity recognition device 6 stored in the memory 71 may also be divided into one or more program modules.
  • the one or more program modules are stored in the memory 71 and executed by one or more processors (the processor 72 in this embodiment) to complete the present application.
  • this application also provides a computer-readable storage system (computer-readable storage medium), which includes multiple storage media, such as flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.) ), random access memory (RAM), static random access memory (SRAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), magnetic memory, magnetic disk , CD-ROM, server, App application mall, etc., on which computer programs are stored, and when the programs are executed by the processor 72, corresponding functions are realized.
  • the computer-readable storage medium of this embodiment is used to store a named entity recognition device, and when executed by the processor 72, the named entity recognition method of the first embodiment is implemented.
  • the storage medium involved in this application may be non-volatile or volatile.

Abstract

A named entity recognition method and apparatus, a computer device, and a readable storage medium. The method comprises: acquiring a medical text and preprocessing the medical text to obtain a text to be processed (S100); performing microbial entity extraction on the text to be processed based on a preset dictionary to obtain a target entity (S200); generating multiple candidate abbreviated entities according to a first preset rule and the target entity, and screening the candidate abbreviated entities with a first model to obtain the candidate abbreviated entity corresponding to the target entity as the target abbreviated entity (S300); generating multiple candidate supplementary entities according to a second preset rule and the target entity, and screening the candidate supplementary entities with a second model to obtain a target supplementary entity (S400); and generating target data based on the target entity, the target abbreviated entity, and the target supplementary entity (S500). This solves the technical problem that entity extraction methods based on dictionary matching cannot account for abbreviations or entities carrying specific information and therefore have low accuracy.

Description

命名实体识别方法、装置、计算机设备及可读存储介质
本申请要求于2020年10月20日提交中国专利局、申请号为202011123404.1,发明名称为“命名实体识别方法、装置、计算机设备及可读存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及自然语言处理技术领域,尤其涉及一种命名实体识别方法、装置、计算机设备及可读存储介质。
背景技术
随着电子信息技术的发展,在医学领域中,通过对医学知识的归纳整理形成了医学知识图谱,微生物信息在一些病毒感染、细菌感染的医学文献中非常重要,微生物的种类和相关疾病的治疗方式息息相关,例如最普通的肺炎,细菌性肺炎、病毒性肺炎的治疗、诊断方法就有很大的差别,不同种类病毒导致的肺炎差别也很大,因此将微生物信息从医学文献中准确的抽取出来有很高的业务价值。
发明人发现,现有的微生物实体抽取任务中,大多采用基于字典匹配的方式进行抽取,但是现有的抽取过程中微生物在文献中经常会以缩写的形式出现,同时微生物在文献中还会出现特定的菌株信息,而字典中通常都只能识别具有全称的微生物实体,因此导致识别过程中遗漏较多,识别结果准确率较低。
技术问题
本申请的目的是提供一种命名实体识别方法、装置、计算机设备及可读存储介质,用于解决现有基于字典匹配微生物的实体抽取无法考虑缩写或带有特定信息的实体,从而准确率较低的技术问题。
技术解决方案
为实现上述目的,本申请提供一种命名实体识别方法,包括:获取医学文本,对所述医学文本进行预处理,获得待处理文本;基于预设词典对所述待处理文本进行微生物实体抽取,获得目标实体;根据第一预设规则和所述目标实体生成多个候选缩写实体,并采用第一模型从所述候选缩写实体中筛选,获得与所述实体对应的候选缩写实体,作为目标缩写实体;根据第二预设规则和所述目标实体生成多个候选补充实体,并采用第二模型对所述候选补充实体进行筛选,获得目标补充实体;基于所述目标实体、所述目标缩写实体以及目标补充实体生成目标数据。
为实现上述目的,本申请还提供一种命名实体识别装置,包括:获取模块,用于获取医学文本,对所述医学文本进行预处理,获得待处理文本;抽取模块,用于基于预设词典对所述待处理文本进行微生物实体抽取,获得目标实体;第一处理模块,用于根据第一预设规则和所述实体生成多个候选缩写实体,并采用第一模型从所述候选缩写实体中筛选,获得与所述目标实体对应的候选缩写实体,作为目标缩写实体;第二处理模块,用于根据第二预设规则和所述实体生成多个候选补充实体,并采用第二模型对所述候选补充实体进行筛选,获得目标补充实体;生成模块,用于基于所述目标实体、所述目标缩写实体以及目标补充实体生成目标数据。
为实现上述目的,本申请还提供一种计算机设备,所述计算机设备包括存储器、处理器以及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现以下方法:获取医学文本,对所述医学文本进行预处理,获得待处理文本;基于预设词典对所述待处理文本进行微生物实体抽取,获得目标实体;根据第一预设规则和所述目标实体生成多个候选缩写实体,并采用第一模型从所述候选缩写实体中筛选,获得与所述实体对应的候选缩写实体,作为目标缩写实体;根据第二预设规则和所述目标实体生成多个候选补充实体,并采用第二模型对所述候选补充实体进行筛选,获得目标补充实体;基于所述目标实体、所述目标缩写实体以及目标补充实体生成目标数据。
为实现上述目的,本申请还提供一种计算机可读存储介质,其包括多个存储介质,各存储介质上存储有计算机程序,所述多个存储介质存储的所述计算机程序被处理器执行时共同实现以下方法:获取医学文本,对所述医学文本进行预处理,获得待处理文本;基于预设词典对所述待处理文本进行微生物实体抽取,获得目标实体;根据第一预设规则和所述目标实体生成多个候选缩写实体,并采用第一模型从所述候选缩写实体中筛选,获得与所述实体对应的候选缩写实体,作为目标缩写实体;根据第二预设规则和所述目标实体生成多个候选补充实体,并采用第二模型对所述候选补充实体进行筛选,获得目标补充实体;基于所述目标实体、所述目标缩写实体以及目标补充实体生成目标数据。
有益效果
本申请先通过词典匹配的方式获取目标实体(即全称实体),然后生成候选缩写实体并采用第一模型判断获得目标缩写实体数据,实现基于医学文本抽取实体过程中对于缩写实体的提取,而后又根据生成候选补充实体并采用第二模型对各个候选补充实体进行判断,实现基于医学文本抽取实体过程中对部分包含特定信息(编号、菌株种类等)实体的提取,最后将所有实体集合,解决现有基于字典匹配微生物的实体抽取无法考虑缩写或带有特定信息的实体,从而准确率较低的技术问题。
附图说明
图1为本申请所述命名实体识别方法实施例一的流程图。
图2为本申请所述命名实体识别方法实施例一中所述根据第一预设规则和所述目标实体生成多个候选缩写实体的流程图。
图3为本申请所述命名实体识别方法实施例一中采用第一模型从所述候选缩写实体中筛选,获得与所述目标实体对应的候选缩写实体,作为目标缩写实体的流程图。
图4为本申请所述命名实体识别方法实施例一中在采用第一模型从所述候选缩写实体中筛选,获得与所述目标实体对应的候选缩写实体,作为目标缩写实体前,对所述第一模型进行训练的流程图。
图5为本申请所述命名实体识别方法实施例一中根据第二预设规则和所述目标实体生成多个候选补充实体的流程图。
图6为本申请所述命名实体识别方法实施例一中采用第二模型对所述候选补充实体进行筛选,获得目标补充实体的流程图。
图7为本申请所述命名实体识别方法实施例一中在采用第二模型对所述候选补充实体进行筛选,获得目标补充实体前,对所述第二模型进行训练的流程图。
图8为本申请所述命名实体识别装置实施例二的程序模块示意图。
图9为本申请计算机设备实施例三中计算机设备的硬件结构示意图。
附图标记:6、命名实体识别装置    61、获取模块62、抽取模块63、第一处理模块631、第一处理单元632、第二处理单元641、第三处理单元642、第四处理单元7、计算机设备 71、存储器 72、处理器73、网络接口。
本发明的实施方式
为了使本申请的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本申请进行进一步详细说明。应当理解,此处所描述的具体实施例仅用以解释本申请,并不用于限定本申请。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
本申请的技术方案可应用于人工智能、智慧城市、数字医疗、区块链和/或大数据技术领域。可选的,本申请涉及的数据如医学文本、实体和/或目标数据等可存储于数据库中,或者可以存储于区块链中,比如通过区块链分布式存储,本申请不做限定。
需要说明的是,在不冲突的情况下,本申请中的实施例及实施例中的特征可以相互组合。
本申请提供的命名实体识别方法、装置、计算机设备及可读存储介质,适用于领域,为提供一种基于获取模块、抽取模块、第一处理模块、第二处理模块和生成模块的命名实体识别方法。本申请通过获取模块获取医学文本并进行预处理(具体如归一化和词形还原操作以及消除单复数、时态等),采用抽取模块基于预处理后的医学文本和预设词典进行实体抽取,获得与预设词典中一致的目标实体,而后区别于现有技术的,采用第一处理模块生成多个候选缩写实体并采用第一模型筛选出与目标实体一致的作为目标缩写实体,同时采用第二处理模块生成多个候选补充实体并采用第二模型筛选获得目标补充实体,最终通过生成模块将目标实体、目标缩写实体以及目标补充实体集合获得目标数据,通过目标缩写实体以及目标补充实体的自主抽取,解决现有基于字典匹配微生物的实体抽取无法考虑缩写或带有特定信息的实体,从而准确率较低的技术问题。
实施例一。
请参阅图1,本实施例的一种命名实体识别方法,应用于服务器端,本申请可应用于智慧医疗场景中,从而推动智慧城市的建设,包括以下步骤。
S100:获取医学文本,对所述医学文本进行预处理,获得待处理文本。
具体的,本方案中对所述医学文本进行预处理包括但不限于归一化和词形还原操作以及消除单复数、时态等,具体的,归一化可以把数据映射到0~1范围之内处理,还可将便于不同单位或量级的指标能够进行比较和加权;词形还原就是去掉单词的词缀,提取单词的主干部分;消除单复数、时态等可以通过移除无意义词,例如to, a,the等,其他用于自然语言预处理的技术手段也可用于此,以减少对后续步骤S200中基于预设词典抽取的干扰。
S200:基于预设词典对所述待处理文本进行微生物实体抽取,获得目标实体。
具体的,基于预设词典对文本进行抽取,即直接筛选出文本中与预设词典中对应的实体数据,作为举例的,“Lactobacillus AU513B can cause pneumonia。In this study we learn the effect of aspirin on Lb. AU513B”,如果词典中只有“Lactobacillus”这个词,那么本步骤中只抽取“Lactobacillus”,需要说明的是,该处由词典直接获取,因此可直接获取实体全称,而无法获得实体缩写。
S300:根据第一预设规则和所述目标实体生成多个候选缩写实体,并采用第一模型从所述候选缩写实体中筛选,获得与所述实体对应的候选缩写实体,作为目标缩写实体。
上述第一模型包括两个并行处理的Char CNN网络,一个用于接收实体数据,一个用于逐个输入候选缩写实体数据,在两个Char CNN网络后连接全连接层用于判断输入的候选缩写实体是否为目标实体的缩写。
具体的,所述根据第一预设规则和所述目标实体生成多个候选缩写实体,参阅图2,包括以下。
S311:获取目标实体,根据所述目标实体提取预设长度的字符串。
在本方案中,预设长度的字符串为预设一个字母、两个字母或三个字母,即预设一个字符、两个字符或三个字符,微生物实体缩写一般由全称中的1-3个字母按顺序构成,但是其构成的字母具有随机性,因此本方案中对于所有可能的情况进行枚举,生成所有可能与所述目标实体对应的实体缩写。
S312:对所述字符串进行序列化处理后添加预设字符,获得与所述目标实体对应的候选缩写实体。
微生物缩写指称一般由全称中的1-3个字母按顺序构成,还在最后在加一个“.”,因此在上述实施方式中,所述预设字符为“.”,在序列化后的字符串尾部添加预设字符,即可获得所有候选缩写实体。
通过上述S311和S312实现对可能与所述目标实体对应的候选缩写实体的列举,需要对上述所有候选缩写实体进行筛选,获得与所述目标实体对应的缩写实体,具体的,采用第一模型从所述候选缩写实体中筛选,获得与所述目标实体对应的候选缩写实体,作为目标缩写实体,参阅图3,包括以下。
S321:获取任一候选缩写实体,将所述候选缩写实体与所述目标实体同时输入CharCNN网络,获得分别与候选缩写实体与所述目标实体对应的第一向量和第二向量。
所述第一模型包括两个输入,一个为任一候选缩写实体,另一个为目标实体,输出为“是”或“否”,用于表示输入的候选缩写实体是否是与所述目标实体匹配一致,采用CharCNN网络分别对所述候选缩写实体与所述目标实体进行处理过程同步进行,设置两个结构一致的CharCNN网络,CharCNN网络为字符级卷积神经网络,用于分别提取所述候选缩写实体与所述目标实体的字形结构。
S322:将所述第一向量与所述第二向量拼接后采用全连接层进行分类处理,获取判断结果。
上述全连接层用于实现输出为“是”或“否”的二分类。
S323:当所述判断结果为否,则获取另一候选缩写实体。
当判断结果为否,则该候选缩写实体与所述目标实体不匹配,则需要更换另一候选缩写实体重复上述S321-S322再次进行判断。
S324:当所述判断结果为是,则获取所述候选缩写实体作为目标缩写实体。
当判断结果为是,则该候选缩写实体与所述目标实体匹配。
具体的,在采用第一模型从所述候选缩写实体中筛选,获得与所述目标实体对应的候选缩写实体,作为目标缩写实体前,对所述第一模型进行训练,参阅图4,包括以下。
S331:获取训练样本,所述训练样本包括多个样本全称-缩写对,每一所述样本全称-缩写对均对应一样本标签。
在上述步骤中,所述训练样本可以是人工生成,包括合理与不合理的全称-缩写对,并对每一标注全称-缩写对标签,也可以从数据库中直接获取,还可以利用模型自主生成。
S332:获取一样本全称-缩写对,将所述样本全称与所述样本缩写同时输入CharCNN网络,获得分别与候选缩写实体与所述目标实体对应的第一处理向量和第二处理向量。
S333:将所述第一处理向量与所述第二处理向量拼接后采用全连接层进行分类处理,获取样本判断结果。
S334:将所述样本判断结果与所述样本标签对比,调整第一模型,直至完成训练,获得训练后的第一模型。
上述训练过程的步骤S332与上述步骤S333与处理过程中步骤S321-S322一致,采用训练样本使第一模型进行自主学习如何提取字形特征,以及如何根据特征进行分类,在完成训练后用于上述步骤S321-S333的处理过程,克服了现有技术中基于词典匹配而无法考虑缩写的情况,通过步骤S311-S312生成候选缩写实体,以及步骤S321-S323中采用第一模型筛选获得与目标实体匹配的目标缩写实体,自主完成缩写实体的提取,进一步提高了实体提取结果的准确性。
S400:根据第二预设规则和所述目标实体生成多个候选补充实体,并采用第二模型对所述候选补充实体进行筛选,获得目标补充实体。
在本实施方式中,所述候选补充实体为通过对目标实体进行扩展边界获得,由于部分微生物包含特定信息(如菌种信息,作为举例的,“actobacillus AU513B”),因此通过上述候补补充实体的方式获取可能包含特定信息的实体作为候选补充实体,而后再对候选补充实体进行判断。
具体的,根据第二预设规则和所述目标实体生成多个候选补充实体,参阅图5,包括以下。
S411:获取目标实体,判断所述目标实体的位置是否处于所在语句句尾。
如前所述,所述候选补充实体是对所述目标实体的延伸,一般带有特定信息的微生物实体,实体与特定信息依次排列,因此需要根据目标实体的位置向后侧延伸,若目标实体已经位于语句句尾,则说明无法再向后延伸,则不存在候选补充实体,若目标实体位于语句中间或语句头部,则有可能有相邻后一侧的词组合为带有特定信息的实体。
S412:当所述目标实体位置未处于所在语句句尾,获取与所述目标实体相邻的后一个词,将所述目标实体与其相邻的后一个词拼接作为候选补充实体。
作为举例的,以“Lactobacillus AU513B can cause pneumonia”为例,假设根据前述步骤抽取出了Lactobacillus,判断Lactobacillus是不是位于该句子的末尾,如果Lactobacillus已经在句子的末尾,我们就认为后面不会再有菌株信息了。因此无需对Lactobacillus做任何扩展,如果Lactobacillus不是句子的末尾,将Lactobacillus后面的一个单词纳入到候选补充实体中,即得到“Lactobacillus AU513B”。而后根据后述步骤判断“Lactobacillus AU513B”是不是一个目标补充实体(即是否为合理的微生物实体)。
根据前述步骤S411-S412根据所述目标实体在所在语句的位置获得候选补充实体,但是候选补充实体也可能不是与所述目标实体一致的带有特定信息的实体,因此需要逐个对所述候选补充实体进行判断,具体的,采用第二模型对所述候选补充实体进行筛选,获得目标补充实体,参阅图6,包括以下步骤。
S421:获取任一候选补充实体,采用CharCNN层对所述候选补充实体进行处理,获得与所述候选补充实体对应的特征向量。
上述步骤中CharCNN层抓取字符串的字形特征,并将其转化为“字形向量”。比如菌株文本很多是大写字母加数字组合这样的特征,CharCNN层包括字符编码层和卷积-池化层,因为模型的输入是字符的one-hot表示向量,所以先得有字符编码,卷积-池化层由6个卷积层和3个全连接层共9层神经网络组成,在三个全连接层之间加入两个dropout层以实现模型正则化,通过CharCNN层可用于识别所述候选补充实体的特征。
S422:同步采用位置编码层对所述候选补充实体进行处理,获得与所述候选补充实体对应的位置向量。
基于上述步骤S421-S422,所述第二模型包括两个输入,一个是候选补充实体(即获得特征向量),另一个是扩展字符串的范围(即获得位置向量),作为举例的,“Lactobacillus AU513B”中字符串0-12的位置是原始字典抽取的结果(即Lactobacillus),字符串14-19是我们扩展的结果(即AU513B),因此我们把12、14两个数字作为第二个输入。
所述位置编码层获取所述候选补充实体的位置信息,具体的,按照预设的规则将位置信息转化为向量。更具体的,上述步骤S422所述同步采用位置编码层对所述候选补充实体进行处理,获得与所述候选补充实体对应的位置向量,包括以下。
S422-1:获取所述候选补充实体,计算所述候选补充实体的长度数据。
在上述步骤中,为了实现目标向量的建立,需要与候选补充实体长度保持一致,这样根据步骤S422-2中第三预设规则对不同位置的字符串进行不同的标记,即可区分目标实体与扩展字符串。
S422-2:根据所述长度数据和第三预设规则建立目标向量,作为与所述候选补充实体对应的位置向量。
所述第三预设规则为在目标向量中，目标实体字符串对应的位置值为1，扩展字符串（即所述目标实体相邻后一个词对应的字符串）对应的位置值为0，中间空白部分对应的值为2。
作为举例而非限定的，对于“Lactobacillus AU513B”，会生成一个长度为20的向量，在向量中，原始字符串对应的位置值为1，扩展字符串对应的位置值为0，中间空白部分对应的值为2。对于“Lactobacillus AU513B”生成的向量为“11111111111112000000”。
S423:将所述特征向量和所述位置向量拼接后输入全连接层处理,获得分类结果。
具体的,所述分类结果包括“是”或“否”,当判断结果是,则说明上说扩展字符串与所述目标实体匹配,即该候选补充实体为带有特定信息的目标实体,当判断结果为否,则说明该候选补充实体与目标实体不匹配,此处不存在扩展的实体。
S424:当所述分类结果为是,则获取所述候选补充实体作为目标候选补充实体。
S425:当所述分类结果为否,则获取另一候选补充实体。
在采用第二模型对所述候选补充实体进行筛选,获得目标补充实体前,对所述第二模型进行训练,参阅图7,包括以下。
S431:获取训练样本,所述样本包括多个样本实体,所述样本实体对应多个关联实体,每一关联实体包含样本标签。
作为举例的,样本实体为Lactobacillus,关联实体为 “Lactobacillus AU513B”(其对应样本标签为是,即为与样本实体对应的合理的样本补充实体)、“Lactobacillus can” (其对应样本标签为否,即为与样本实体对应的不合理的样本补充实体)。
S432:基于所述训练样本获取任一关联实体,采用CharCNN层对所述关联实体进行处理,获得第一向量。
S433:同步采用位置编码层对所述关联实体进行处理,获得第二向量。
S434:将所述第一向量和所述第二向量拼接后输入全连接层处理,获得样本分类结果。
上述训练过程中步骤S432-S434与处理过程中一致,在此不作赘述。
S435:将所述样本分类结果与所述关联实体对应的样本标签进行对比,调整所述第二模型的参数,直至完成训练,获得训练后的第二模型。
通过训练样本第二模型学习合理的微生物指称的特征向量和位置向量,从而学会对任意输入进行判断,提高获得的目标补充实体的准确性。
S500:基于所述目标实体、所述目标缩写实体以及目标补充实体生成目标数据。
在上述实施方式中,将目标实体、目标缩写实体以及目标补充实体合并获得最终的目标数据,相较于现有的根据词典匹配的方式增加了缩写实体和补充实体(扩展实体),解决现有基于字典匹配微生物的实体抽取无法考虑缩写或带有特定信息的实体,从而准确率较低的技术问题。
本方案中先通过词典匹配的方式从待处理文本中获取微生物实体全称,然后根据第一预设规则生成候选缩写实体,采用第一模型对各个候选缩写实体与所述目标实体的匹配程度进行判断,并获得目标缩写实体数据,实现基于医学文本抽取微生物实体过程中对于与微生物实体对应的缩写数据的提取,而后又根据第二预设规则获取候选补充实体,同时采用第二模型对各个候选补充实体进行判断,实现基于医学文本抽取微生物实体过程中对部分包含特定信息(编号、菌株种类等)微生物实体数据的提取,进一步完善对医学文本中实体抽取的完整性和全面性。
上述目标实体、目标缩写实体以及目标补充实体可上传至区块链以便于后续作为参考样本或训练样本,上传至区块链可保证其安全性和对用户的公正透明性,用户设备可以从区块链中下载得该摘要信息,以便查证优先级列表是否被篡改,后续也可以从区块链中下载获得对应金额数据的语音文件用于语音播报,无需生成过程,有效提高语音处理效率。
本申请所指区块链是分布式数据存储、点对点传输、共识机制、加密算法等计算机技术的新型应用模式。区块链(Blockchain),本质上是一个去中心化的数据库,是一串使用密码学方法相关联产生的数据块,每一个数据块中包含了一批次网络交易的信息,用于验证其信息的有效性(防伪)和生成下一个区块。区块链可以包括区块链底层平台、平台产品服务层以及应用服务层等。
实施例二。
请参阅图8,本实施例的一种命名实体识别装置6,包括:获取模块61、抽取模块62、第一处理模块63、第二处理模块64以及生成模块65。
获取模块61,用于获取医学文本,对所述医学文本进行预处理,获得待处理文本。
抽取模块62,用于基于预设词典对所述待处理文本进行微生物实体抽取,获得目标实体。
第一处理模块63,用于根据第一预设规则和所述目标实体生成多个候选缩写实体,并采用第一模型从所述候选缩写实体中筛选,获得与所述目标实体对应的候选缩写实体,作为目标缩写实体。
优选的,所述第一处理模块63包括以下。
第一处理单元631,用于获取目标实体,根据所述实体提取预设长度的字符串;对所述字符串进行序列化处理后添加预设字符,获得与所述目标实体对应的候选缩写实体。
第二处理单元632,用于获取任一候选缩写实体,将所述候选缩写实体与所述目标实体同时输入CharCNN网络,获得分别与候选缩写实体与所述目标实体对应的第一向量和第二向量;将所述第一向量与所述第二向量拼接后采用全连接层进行分类处理,获取判断结果;当所述判断结果为否,则获取另一候选缩写实体;当所述判断结果为是,则获取所述候选缩写实体作为目标缩写实体。
第二处理模块64,用于根据第二预设规则和所述目标实体生成多个候选补充实体,并采用第二模型对所述候选补充实体进行筛选,获得目标补充实体。
优选的,所述第二处理模块64包括:第三处理单元641,用于获取目标实体,判断所述目标实体的位置是否处于所在语句句尾;当所述目标实体位置未处于所在语句句尾,获取与所述目标实体相邻的后一个词,将所述目标实体与其相邻的后一个词拼接作为候选补充实体。第四处理单元642,用于获取任一候选补充实体,采用CharCNN层对所述候选补充实体进行处理,获得与所述候选补充实体对应的特征向量;同步采用位置编码层对所述候选补充实体进行处理,获得与所述候选补充实体对应的位置向量;将所述特征向量和所述位置向量拼接后输入全连接层处理,获得分类结果;当所述分类结果为否,则获取另一候选补充实体;当所述分类结果为是,则获取所述候选补充实体作为目标候选补充实体。
生成模块65,用于基于所述目标实体、所述目标缩写实体以及目标补充实体生成目标数据。
本技术方案基于语音语义中语义解析的自然语言处理,通过获取模块获取医学文本并进行预处理,减少对后续实体抽取过程的影响,采用抽取模块基于预处理后的医学文本和预设词典进行实体抽取,获得与预设词典中一致的目标实体,而后采用第一处理模块生成多个候选缩写实体并采用第一模型筛选出与目标实体一致的作为目标缩写实体,再采用第二处理模块生成多个候选补充实体并采用第二模型筛选获得目标补充实体,最终通过生成目标将目标实体、目标缩写实体以及目标补充实体集合生成目标数据,区别于现有技术中仅采用词典匹配的方式进行实体抽取,解决现有基于字典匹配微生物的实体抽取无法考虑缩写或带有特定信息的实体,从而准确率较低的技术问题。
本方案中在第一处理模块处理过程中,通过第一处理单元基于预设规则和目标实体生成候选缩写实体,再对各个候选缩写实体进行甄别,容易实现且结果准确率较高,在第二处理模块过程中,则通过第三处理单元基于预设规则和目标实体生成候选补充实体,再对各个候选补充实体进行判断,进一步提高抽取结果的准确性,进一步减少抽取过程中缩写实体以及带有特定信息的实体的遗漏,保证从医学文本中完整抽取所有实体,完善对医学文本中实体抽取的完整性和全面性。
实施例三。
为实现上述目的,本申请还提供一种计算机设备7,该计算机设备可包括多个计算机设备,实施例二的命名实体识别装置6的组成部分可分散于不同的计算机设备7中,计算机设备7可以是执行程序的智能手机、平板电脑、笔记本电脑、台式计算机、机架式服务器、刀片式服务器、塔式服务器或机柜式服务器(包括独立的服务器,或者多个服务器所组成的服务器集群)等。本实施例的计算机设备至少包括但不限于:存储器、处理器以及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现上述方法中的部分或全部步骤。可选的,该计算机设备还可包括网络接口和/或命名实体识别装置。例如,可通过系统总线相互通信连接的存储器71、处理器72、网络接口73以及命名实体识别装置6,如图9所示。需要指出的是,图9仅示出了具有组件-的计算机设备,但是应理解的是,并不要求实施所有示出的组件,可以替代的实施更多或者更少的组件。
本实施例中,存储器71至少包括一种类型的计算机可读存储介质,所述可读存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等。在一些实施例中,存储器71可以是计算机设备的内部存储单元,例如该计算机设备的硬盘或内存。在另一些实施例中,存储器71也可以是计算机设备的外部存储设备,例如该计算机设备上配备的插接式硬盘,智能存储卡(Smart Media Card, SMC),安全数字(Secure Digital, SD)卡,闪存卡(Flash Card)等。当然,存储器71还可以既包括计算机设备的内部存储单元也包括其外部存储设备。本实施例中,存储器71通常用于存储安装于计算机设备的操作系统和各类应用软件,例如实施例一的命名实体识别装置6的程序代码等。此外,存储器71还可以用于暂时地存储已经输出或者将要输出的各类数据。
处理器72在一些实施例中可以是中央处理器(Central Processing Unit,CPU)、控制器、微控制器、微处理器、或其他数据处理芯片。该处理器72通常用于控制计算机设备的总体操作。本实施例中,处理器72用于运行存储器71中存储的程序代码或者处理数据,例如运行命名实体识别装置,以实现实施例一的命名实体识别方法。
所述网络接口73可包括无线网络接口或有线网络接口,该网络接口73通常用于在所述计算机设备7与其他计算机设备7之间建立通信连接。例如,所述网络接口73用于通过网络将所述计算机设备7与外部终端相连,在所述计算机设备7与外部终端之间的建立数据传输通道和通信连接等。所述网络可以是企业内部网(Intranet)、互联网(Internet)、全球移动通讯系统(Global System of Mobile communication,GSM)、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)、4G网络、5G网络、蓝牙(Bluetooth)、Wi-Fi等无线或有线网络。
需要指出的是,图9仅示出了具有部件71-73的计算机设备7,但是应理解的是,并不要求实施所有示出的部件,可以替代的实施更多或者更少的部件。
在本实施例中,存储于存储器71中的所述命名实体识别装置6还可以被分割为一个或者多个程序模块,所述一个或者多个程序模块被存储于存储器71中,并由一个或多个处理器(本实施例为处理器72)所执行,以完成本申请。
实施例四。
为实现上述目的,本申请还提供一种计算机可读存储系统(计算机可读存储介质),其包括多个存储介质,如闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘、服务器、App应用商城等等,其上存储有计算机程序,程序被处理器72执行时实现相应功能。本实施例的计算机可读存储介质用于存储命名实体识别装置,被处理器72执行时实现实施例一的命名实体识别方法。
可选的,本申请涉及的存储介质可以是非易失性的,也可以是易失性的。
上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。

Claims (20)

  1. 一种命名实体识别方法,其中,包括:
    获取医学文本,对所述医学文本进行预处理,获得待处理文本;
    基于预设词典对所述待处理文本进行微生物实体抽取,获得目标实体;
    根据第一预设规则和所述目标实体生成多个候选缩写实体,并采用第一模型从所述候选缩写实体中筛选,获得与所述实体对应的候选缩写实体,作为目标缩写实体;
    根据第二预设规则和所述目标实体生成多个候选补充实体,并采用第二模型对所述候选补充实体进行筛选,获得目标补充实体;
    基于所述目标实体、所述目标缩写实体以及目标补充实体生成目标数据。
  2. 根据权利要求1所述的命名实体识别方法,其中,所述根据第一预设规则和所述目标实体生成多个候选缩写实体,包括以下:
    获取目标实体,根据所述实体提取预设长度的字符串;
    对所述字符串进行序列化处理后添加预设字符,获得与所述目标实体对应的候选缩写实体。
  3. 根据权利要求1所述的命名实体识别方法,其中,采用第一模型从所述候选缩写实体中筛选,获得与所述目标实体对应的候选缩写实体,作为目标缩写实体,包括以下:
    获取任一候选缩写实体,将所述候选缩写实体与所述目标实体同时输入CharCNN网络,获得分别与候选缩写实体与所述目标实体对应的第一向量和第二向量;
    将所述第一向量与所述第二向量拼接后采用全连接层进行分类处理,获取判断结果;
    当所述判断结果为否,则获取另一候选缩写实体;
    当所述判断结果为是,则获取所述候选缩写实体作为目标缩写实体。
  4. 根据权利要求1所述的命名实体识别方法,其中,根据第二预设规则和所述目标实体生成多个候选补充实体,包括以下:
    获取目标实体,判断所述目标实体的位置是否处于所在语句句尾;
    当所述目标实体位置未处于所在语句句尾,获取与所述目标实体相邻的后一个词,将所述目标实体与其相邻的后一个词拼接作为候选补充实体。
  5. 根据权利要求1所述的命名实体识别方法,其中,采用第二模型对所述候选补充实体进行筛选,获得目标补充实体,包括以下:
    获取任一候选补充实体,采用CharCNN层对所述候选补充实体进行处理,获得与所述候选补充实体对应的特征向量;
    同步采用位置编码层对所述候选补充实体进行处理,获得与所述候选补充实体对应的位置向量;
    将所述特征向量和所述位置向量拼接后输入全连接层处理,获得分类结果;
    当所述分类结果为否,则获取另一候选补充实体;
    当所述分类结果为是,则获取所述候选补充实体作为目标候选补充实体。
  6. 根据权利要求5所述的命名实体识别方法,其中,所述同步采用位置编码层对所述候选补充实体进行处理,获得与所述候选补充实体对应的位置向量,包括以下:
    获取所述候选补充实体,计算所述候选补充实体的长度数据;
    根据所述长度数据和第三预设规则建立目标向量,作为与所述候选补充实体对应的位置向量。
  7. 根据权利要求1所述的命名实体识别方法,其中,在采用第二模型对所述候选补充实体进行筛选,获得目标补充实体前,对所述第二模型进行训练,包括以下:
    获取训练样本,所述样本包括多个样本实体,所述样本实体对应多个关联实体,每一关联实体包含样本标签;
    获取任一关联实体,采用CharCNN层对所述关联实体进行处理,获得第一样本向量;
    同步采用位置编码层对所述关联实体进行处理,获得第二样本向量;
    将所述第一样本向量和所述第二样本向量拼接后输入全连接层处理,获得样本分类结果;
    将所述样本分类结果与所述关联实体对应的样本标签进行对比,调整所述第二模型的参数,直至完成训练,获得训练后的第二模型。
  8. 一种命名实体识别装置,其中,包括:
    获取模块,用于获取医学文本,对所述医学文本进行预处理,获得待处理文本;
    抽取模块,用于基于预设词典对所述待处理文本进行微生物实体抽取,获得目标实体;
    第一处理模块,用于根据第一预设规则和所述目标实体生成多个候选缩写实体,并采用第一模型从所述候选缩写实体中筛选,获得与所述目标实体对应的候选缩写实体,作为目标缩写实体;
    第二处理模块,用于根据第二预设规则和所述目标实体生成多个候选补充实体,并采用第二模型对所述候选补充实体进行筛选,获得目标补充实体;
    生成模块,用于基于所述目标实体、所述目标缩写实体以及目标补充实体生成目标数据。
  9. 一种计算机设备,其中,所述计算机设备包括存储器、处理器以及存储在存储器上并可在处理器上运行的计算机程序,所述处理器执行所述计算机程序时实现以下方法:
    获取医学文本,对所述医学文本进行预处理,获得待处理文本;
    基于预设词典对所述待处理文本进行微生物实体抽取,获得目标实体;
    根据第一预设规则和所述目标实体生成多个候选缩写实体,并采用第一模型从所述候选缩写实体中筛选,获得与所述实体对应的候选缩写实体,作为目标缩写实体;
    根据第二预设规则和所述目标实体生成多个候选补充实体,并采用第二模型对所述候选补充实体进行筛选,获得目标补充实体;
    基于所述目标实体、所述目标缩写实体以及目标补充实体生成目标数据。
  10. 根据权利要求9所述的计算机设备,其中,所述根据第一预设规则和所述目标实体生成多个候选缩写实体时,具体实现:
    获取目标实体,根据所述实体提取预设长度的字符串;
    对所述字符串进行序列化处理后添加预设字符,获得与所述目标实体对应的候选缩写实体。
  11. 根据权利要求9所述的计算机设备,其中,采用第一模型从所述候选缩写实体中筛选,获得与所述目标实体对应的候选缩写实体,作为目标缩写实体时,具体实现:
    获取任一候选缩写实体,将所述候选缩写实体与所述目标实体同时输入CharCNN网络,获得分别与候选缩写实体与所述目标实体对应的第一向量和第二向量;
    将所述第一向量与所述第二向量拼接后采用全连接层进行分类处理,获取判断结果;
    当所述判断结果为否,则获取另一候选缩写实体;
    当所述判断结果为是,则获取所述候选缩写实体作为目标缩写实体。
  12. 根据权利要求9所述的计算机设备,其中,根据第二预设规则和所述目标实体生成多个候选补充实体时,具体实现:
    获取目标实体,判断所述目标实体的位置是否处于所在语句句尾;
    当所述目标实体位置未处于所在语句句尾,获取与所述目标实体相邻的后一个词,将所述目标实体与其相邻的后一个词拼接作为候选补充实体。
  13. 根据权利要求9所述的计算机设备,其中,采用第二模型对所述候选补充实体进行筛选,获得目标补充实体时,具体实现:
    获取任一候选补充实体,采用CharCNN层对所述候选补充实体进行处理,获得与所述候选补充实体对应的特征向量;
    同步采用位置编码层对所述候选补充实体进行处理,获得与所述候选补充实体对应的位置向量;
    将所述特征向量和所述位置向量拼接后输入全连接层处理,获得分类结果;
    当所述分类结果为否,则获取另一候选补充实体;
    当所述分类结果为是,则获取所述候选补充实体作为目标候选补充实体。
  14. 根据权利要求9所述的计算机设备,其中,在采用第二模型对所述候选补充实体进行筛选,获得目标补充实体前,对所述第二模型进行训练时,具体实现:
    获取训练样本,所述样本包括多个样本实体,所述样本实体对应多个关联实体,每一关联实体包含样本标签;
    获取任一关联实体,采用CharCNN层对所述关联实体进行处理,获得第一样本向量;
    同步采用位置编码层对所述关联实体进行处理,获得第二样本向量;
    将所述第一样本向量和所述第二样本向量拼接后输入全连接层处理,获得样本分类结果;
    将所述样本分类结果与所述关联实体对应的样本标签进行对比,调整所述第二模型的参数,直至完成训练,获得训练后的第二模型。
  15. 一种计算机可读存储介质，其包括多个存储介质，各存储介质上存储有计算机程序，其中，所述多个存储介质存储的所述计算机程序被处理器执行时共同实现以下方法：
    获取医学文本,对所述医学文本进行预处理,获得待处理文本;
    基于预设词典对所述待处理文本进行微生物实体抽取,获得目标实体;
    根据第一预设规则和所述目标实体生成多个候选缩写实体,并采用第一模型从所述候选缩写实体中筛选,获得与所述实体对应的候选缩写实体,作为目标缩写实体;
    根据第二预设规则和所述目标实体生成多个候选补充实体,并采用第二模型对所述候选补充实体进行筛选,获得目标补充实体;
    基于所述目标实体、所述目标缩写实体以及目标补充实体生成目标数据。
  16. 根据权利要求15所述的计算机可读存储介质,其中,所述根据第一预设规则和所述目标实体生成多个候选缩写实体时,具体实现:
    获取目标实体,根据所述实体提取预设长度的字符串;
    对所述字符串进行序列化处理后添加预设字符,获得与所述目标实体对应的候选缩写实体。
  17. 根据权利要求15所述的计算机可读存储介质,其中,采用第一模型从所述候选缩写实体中筛选,获得与所述目标实体对应的候选缩写实体,作为目标缩写实体时,具体实现:
    获取任一候选缩写实体,将所述候选缩写实体与所述目标实体同时输入CharCNN网络,获得分别与候选缩写实体与所述目标实体对应的第一向量和第二向量;
    将所述第一向量与所述第二向量拼接后采用全连接层进行分类处理,获取判断结果;
    当所述判断结果为否,则获取另一候选缩写实体;
    当所述判断结果为是,则获取所述候选缩写实体作为目标缩写实体。
  18. 根据权利要求15所述的计算机可读存储介质,其中,根据第二预设规则和所述目标实体生成多个候选补充实体时,具体实现:
    获取目标实体,判断所述目标实体的位置是否处于所在语句句尾;
    当所述目标实体位置未处于所在语句句尾,获取与所述目标实体相邻的后一个词,将所述目标实体与其相邻的后一个词拼接作为候选补充实体。
  19. 根据权利要求15所述的计算机可读存储介质,其中,采用第二模型对所述候选补充实体进行筛选,获得目标补充实体时,具体实现:
    获取任一候选补充实体,采用CharCNN层对所述候选补充实体进行处理,获得与所述候选补充实体对应的特征向量;
    同步采用位置编码层对所述候选补充实体进行处理,获得与所述候选补充实体对应的位置向量;
    将所述特征向量和所述位置向量拼接后输入全连接层处理,获得分类结果;
    当所述分类结果为否,则获取另一候选补充实体;
    当所述分类结果为是,则获取所述候选补充实体作为目标候选补充实体。
  20. 根据权利要求15所述的计算机可读存储介质,其中,在采用第二模型对所述候选补充实体进行筛选,获得目标补充实体前,对所述第二模型进行训练时,具体实现:
    获取训练样本,所述样本包括多个样本实体,所述样本实体对应多个关联实体,每一关联实体包含样本标签;
    获取任一关联实体,采用CharCNN层对所述关联实体进行处理,获得第一样本向量;
    同步采用位置编码层对所述关联实体进行处理,获得第二样本向量;
    将所述第一样本向量和所述第二样本向量拼接后输入全连接层处理,获得样本分类结果;
    将所述样本分类结果与所述关联实体对应的样本标签进行对比,调整所述第二模型的参数,直至完成训练,获得训练后的第二模型。
PCT/CN2020/134882 2020-10-20 2020-12-09 命名实体识别方法、装置、计算机设备及可读存储介质 WO2021179708A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011123404.1 2020-10-20
CN202011123404.1A CN112257446A (zh) 2020-10-20 2020-10-20 命名实体识别方法、装置、计算机设备及可读存储介质

Publications (1)

Publication Number Publication Date
WO2021179708A1 true WO2021179708A1 (zh) 2021-09-16

Family

ID=74243779

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/134882 WO2021179708A1 (zh) 2020-10-20 2020-12-09 命名实体识别方法、装置、计算机设备及可读存储介质

Country Status (2)

Country Link
CN (1) CN112257446A (zh)
WO (1) WO2021179708A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113889281A (zh) * 2021-11-17 2022-01-04 重庆邮电大学 一种中文医疗智能实体识别方法、装置及计算机设备
CN114579709A (zh) * 2022-03-15 2022-06-03 西南交通大学 一种基于知识图谱的智能问答意图识别方法
CN115841113A (zh) * 2023-02-24 2023-03-24 山东云天安全技术有限公司 一种域名标号检测方法、存储介质及电子设备
CN116127960A (zh) * 2023-04-17 2023-05-16 广东粤港澳大湾区国家纳米科技创新研究院 信息抽取方法、装置、存储介质及计算机设备
CN116226114A (zh) * 2023-05-09 2023-06-06 荣耀终端有限公司 一种数据处理方法、系统及存储介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113177412A (zh) * 2021-04-05 2021-07-27 北京智慧星光信息技术有限公司 基于bert的命名实体识别方法、系统、电子设备及存储介质
CN114741508B (zh) * 2022-03-29 2023-05-30 北京三快在线科技有限公司 概念挖掘方法及装置、电子设备及可读存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7403891B2 (en) * 2003-10-23 2008-07-22 Electronics And Telecommunications Research Institute Apparatus and method for recognizing biological named entity from biological literature based on UMLS
CN109614493A (zh) * 2018-12-29 2019-04-12 重庆邂智科技有限公司 一种基于监督词向量的文本缩写识别方法及系统
CN110134965A (zh) * 2019-05-21 2019-08-16 北京百度网讯科技有限公司 用于信息处理的方法、装置、设备和计算机可读存储介质
CN110348015A (zh) * 2019-07-12 2019-10-18 北京百奥知信息科技有限公司 一种自动标注医学文本中实体的方法
CN111126040A (zh) * 2019-12-26 2020-05-08 贵州大学 一种基于深度边界组合的生物医学命名实体识别方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10289957B2 (en) * 2014-12-30 2019-05-14 Excalibur Ip, Llc Method and system for entity linking
CN110717331B (zh) * 2019-10-21 2023-10-24 北京爱医博通信息技术有限公司 一种基于神经网络的中文命名实体识别方法、装置、设备以及存储介质
CN111160012B (zh) * 2019-12-26 2024-02-06 上海金仕达卫宁软件科技有限公司 医学术语识别方法、装置和电子设备

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7403891B2 (en) * 2003-10-23 2008-07-22 Electronics And Telecommunications Research Institute Apparatus and method for recognizing biological named entity from biological literature based on UMLS
CN109614493A (zh) * 2018-12-29 2019-04-12 重庆邂智科技有限公司 一种基于监督词向量的文本缩写识别方法及系统
CN110134965A (zh) * 2019-05-21 2019-08-16 北京百度网讯科技有限公司 用于信息处理的方法、装置、设备和计算机可读存储介质
CN110348015A (zh) * 2019-07-12 2019-10-18 北京百奥知信息科技有限公司 一种自动标注医学文本中实体的方法
CN111126040A (zh) * 2019-12-26 2020-05-08 贵州大学 一种基于深度边界组合的生物医学命名实体识别方法

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113889281A (zh) * 2021-11-17 2022-01-04 重庆邮电大学 一种中文医疗智能实体识别方法、装置及计算机设备
CN113889281B (zh) * 2021-11-17 2024-05-03 华美浩联医疗科技(北京)有限公司 一种中文医疗智能实体识别方法、装置及计算机设备
CN114579709A (zh) * 2022-03-15 2022-06-03 西南交通大学 一种基于知识图谱的智能问答意图识别方法
CN114579709B (zh) * 2022-03-15 2023-04-07 西南交通大学 一种基于知识图谱的智能问答意图识别方法
CN115841113A (zh) * 2023-02-24 2023-03-24 山东云天安全技术有限公司 一种域名标号检测方法、存储介质及电子设备
CN116127960A (zh) * 2023-04-17 2023-05-16 广东粤港澳大湾区国家纳米科技创新研究院 信息抽取方法、装置、存储介质及计算机设备
CN116127960B (zh) * 2023-04-17 2023-06-23 广东粤港澳大湾区国家纳米科技创新研究院 信息抽取方法、装置、存储介质及计算机设备
CN116226114A (zh) * 2023-05-09 2023-06-06 荣耀终端有限公司 一种数据处理方法、系统及存储介质
CN116226114B (zh) * 2023-05-09 2023-10-20 荣耀终端有限公司 一种数据处理方法、系统及存储介质

Also Published As

Publication number Publication date
CN112257446A (zh) 2021-01-22

Similar Documents

Publication Publication Date Title
WO2021179708A1 (zh) 命名实体识别方法、装置、计算机设备及可读存储介质
CN109670179B (zh) 基于迭代膨胀卷积神经网络的病历文本命名实体识别方法
WO2018207723A1 (ja) 要約生成装置、要約生成方法及びコンピュータプログラム
WO2021151353A1 (zh) 医学实体关系抽取方法、装置、计算机设备及可读存储介质
CN106844351B (zh) 一种面向多数据源的医疗机构组织类实体识别方法及装置
CN111310470B (zh) 一种融合字词特征的中文命名实体识别方法
WO2021151270A1 (zh) 图像结构化数据提取方法、装置、设备及存储介质
WO2022048363A1 (zh) 网站分类方法、装置、计算机设备及存储介质
CN110569343B (zh) 一种基于问答的临床文本结构化方法
CN112131881A (zh) 信息抽取方法及装置、电子设备、存储介质
CN109815478A (zh) 基于卷积神经网络的药化实体识别方法及系统
CN106933802B (zh) 一种面向多数据源的社保类实体识别方法及装置
CN113010679A (zh) 问答对生成方法、装置、设备及计算机可读存储介质
WO2020170906A1 (ja) 生成装置、学習装置、生成方法及びプログラム
CN114048729A (zh) 医学文献评价方法、电子设备、存储介质和程序产品
TW202123026A (zh) 資料歸檔方法、裝置、電腦裝置及存儲介質
CN116662488A (zh) 业务文档检索方法、装置、设备及存储介质
CN113627186B (zh) 基于人工智能的实体关系检测方法及相关设备
WO2022127124A1 (zh) 基于元学习的实体类别识别方法、装置、设备和存储介质
WO2022073341A1 (zh) 基于语音语义的疾病实体匹配方法、装置及计算机设备
CN117501283A (zh) 文本到问答模型系统
WO2022141855A1 (zh) 文本正则方法、装置、电子设备及存储介质
CN113505213A (zh) 关键句提取方法、系统、计算机可读存储介质
CN112163082A (zh) 一种意图识别方法、装置、电子设备及存储介质
Sinha et al. IAI@ SocialDisNER: Catch me if you can! Capturing complex disease mentions in tweets

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20924673

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20924673

Country of ref document: EP

Kind code of ref document: A1