WO2021068329A1 - 中文命名实体识别方法、装置及计算机可读存储介质 - Google Patents

中文命名实体识别方法、装置及计算机可读存储介质 Download PDF

Info

Publication number
WO2021068329A1
WO2021068329A1 PCT/CN2019/117339 CN2019117339W WO2021068329A1 WO 2021068329 A1 WO2021068329 A1 WO 2021068329A1 CN 2019117339 W CN2019117339 W CN 2019117339W WO 2021068329 A1 WO2021068329 A1 WO 2021068329A1
Authority
WO
WIPO (PCT)
Prior art keywords
entity recognition
named entity
standard
text set
word vector
Prior art date
Application number
PCT/CN2019/117339
Other languages
English (en)
French (fr)
Inventor
邓悦
金戈
徐亮
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司 filed Critical 平安科技(深圳)有限公司
Publication of WO2021068329A1 publication Critical patent/WO2021068329A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35Clustering; Classification
    • G06F16/355Class or cluster creation or modification

Definitions

  • This application relates to the field of artificial intelligence technology, and in particular to a method, device and computer-readable storage medium based on Chinese named entity recognition.
  • Named entity recognition refers to the recognition of named entities such as person names, place names, and organization names in a discourse.
  • Chinese named entities are named entities based on Chinese. They are widely and successfully used in information extraction, information retrieval, information recommendation, and machines. Translation and other tasks.
  • the existing technical solutions include word-based and word-based methods.
  • Both of the word-based methods need to segment Chinese sentences first, and then perform named entity recognition on the result of word segmentation, which makes The result of named entities depends on the accuracy of word segmentation; in addition, the disadvantage of the word-based method is the lack of semantic information of Chinese vocabulary, because different words have different meanings in different vocabulary, such as "today” and “weather” ",” “Go on horse” and "Immediately", the lack of vocabulary information will inevitably greatly reduce the accuracy of the model.
  • This application provides a Chinese named entity recognition method, device and computer readable storage medium, the main purpose of which is to provide a highly accurate Chinese named entity recognition solution.
  • a Chinese named entity recognition method provided by this application includes:
  • Receive a text set input by a user calculate the text set to obtain a word vector set, and input the word vector set into the trained named entity recognition model to obtain a named recognition result.
  • this application also provides a Chinese named entity recognition device, which includes a memory and a processor, and the memory stores a Chinese named entity recognition program that can run on the processor.
  • the Chinese named entity recognition program is executed by the processor, the following steps are implemented:
  • Receive a text set input by a user calculate the text set to obtain a word vector set, and input the word vector set into the trained named entity recognition model to obtain a named recognition result.
  • the present application also provides a computer-readable storage medium having a Chinese named entity recognition program stored on the computer-readable storage medium, and the Chinese named entity recognition program can be used by one or more processors. Execute to realize the steps of the Chinese named entity recognition method as described above.
  • This application can ensure the purity of the data by denoising, removing stop words, and labeling the original text set containing Chinese named entities.
  • the category of the original text set is preliminarily determined according to the clustering operation, and after construction After the word vector is optimized by the probability model, it is input into the named entity recognition model for training.
  • the named entity can be accurately identified through the preliminary data processing, preliminary category judgment, word vector optimization and model recognition. Therefore, the Chinese named entity recognition method, device, and computer-readable storage medium proposed in this application can implement precise named entity functions.
  • FIG. 1 is a schematic flowchart of a method for identifying a Chinese named entity provided by an embodiment of this application;
  • FIG. 2 is a schematic diagram of the internal structure of a Chinese named entity recognition device provided by an embodiment of the application;
  • FIG. 3 is a schematic diagram of modules of a Chinese named entity recognition program in a Chinese named entity recognition device provided by an embodiment of the application.
  • This application provides a Chinese named entity recognition method.
  • FIG. 1 it is a schematic flowchart of a Chinese named entity recognition method provided by an embodiment of this application.
  • the method can be executed by a device, and the device can be implemented by software and/or hardware.
  • the Chinese named entity recognition method includes:
  • S1 Receive an original text set containing Chinese named entities, and obtain a standard text set after denoising, removing stop words, and labeling the original text set.
  • the named entity is a person's name, an organization name, a place name, and all other entities identified by a name
  • the Chinese named entity is a named entity based on the Chinese language.
  • the Chinese named entities of the text data A include: “Shanghai, Nanjing Road, Asia , Zhejiang native”.
  • the received original text set includes text data from the Internet
  • the original text set contains a lot of noise, such as hyperlinks, webpage tags, etc., and the noise will affect the recognition of Chinese named entities. It is necessary to perform denoising processing on the original text set.
  • the denoising can be filtered using regular expressions based on programming languages, which can remove noise such as numbers, emoticons, and special symbols such as URL, "@", and "#".
  • stop words refer to words that have no practical meaning and have little effect on the recognition of Chinese named entities in the Chinese text. Because stop words appear frequently, including commonly used pronouns, prepositions, etc., if Retaining the stop words will create a computational burden for the entire Chinese named entity recognition, and even affect the recognition accuracy.
  • the stop word removal may adopt a stop word list filtering method, and a pre-built stop word list is matched with the words in the original text set one by one. If the match is successful, then this word is a stop word. Word, you need to delete the word.
  • the labeling processing includes: performing word segmentation processing on the original text set to obtain a word segmentation text set, and labeling the words in the word segmentation text set according to a preset labeling rule, and when the word tagging in the word segmentation text set is completed Then, according to the word segmentation text collection, it is reorganized into a text collection to obtain a standard text collection.
  • the preset labeling rule may adopt a combined standard rule.
  • the labeling rule is a combination of ⁇ B, I, E, S ⁇ and ⁇ PER, ORG, LOC ⁇ two sets of contents. If the O symbol is added subsequently, it means Not any named entity.
  • I, E, S ⁇ , B represents the first character of the entity
  • I represents the middle character of the entity
  • E represents the last character of the entity
  • S represents that a single character can be represented as an entity
  • the ⁇ PER, ORG ,LOC ⁇ in PER means the name of the person
  • ORG means the name of the organization
  • LOC means the name of the place.
  • the original text set has text data B that is: "Shanghai plans to achieve a per capita GDP of five thousand US dollars by the end of this century.”
  • the standard text data obtained after the standard processing is: " ⁇ /B- LOC sea / E-LOC plan / BO plan / EO to / SO book / SO world / BO period / EO end / SO actual / BO present / EO people / BO average / EO country / BO internal / EO production / BO production / EO total/BO value/EO five/BO thousand/IO US/IO yuan/EO/./SO".
  • the clustering operation includes: randomly initializing k initial clusters and cluster centers Center k of the k initial clusters, and training the cluster centers according to a cluster update method to obtain training values,
  • the error of the training value is calculated based on the square error, if the error is greater than the preset error threshold, the training is continued, and if the error is less than the preset error threshold, the training is exited to obtain the number of clusters and the cluster center.
  • the number of clusters refers to the number of different categories that can be obtained by the standard text collection according to the clustering operation, and the cluster center refers to the center position of each cluster.
  • cluster update method is:
  • x i is the text data of the standard text set
  • i is the data number
  • C k is the standard text set.
  • the error of calculating the training value based on the square error is:
  • J is the error of the training value
  • K is the number of texts in the standard text set, that is, the value of the initial cluster is between [1, K]
  • dist(x i , Center k ) represents calculation The distance between the data of the standard text set and the cluster center Center k.
  • the distance calculation formula of dist(x i , Center k ) can adopt multiple methods, such as Euclidean distance calculation method, Manhattan distance, Mahalanobis distance, and the like.
  • the posterior probability model is:
  • x) is the posterior probability model
  • w i is the word vector in the standard word vector set
  • x is the text in the standard text set
  • x t is the number t in the cluster center
  • j is the word vector number
  • n is the number of clusters
  • w j ) is the prior probability
  • w j ) is:
  • c t represents the number of standard texts numbered t at the center of the cluster
  • D i represents the sample composed of the word vector w i
  • is the adjustment coefficient.
  • the pre-built named entity recognition model includes a sentence combination layer, a connection layer and a classification layer.
  • the standard word vector set is input to the sentence combination layer to solve the sentence combination probability to obtain the sentence combination with the maximum probability, and the sentence combination with the maximum probability is input to the connection layer for connection operation, based on the classification layer
  • Named entity recognition is performed on the sentence completed by the connection operation to obtain a recognition result set, and the recognition result set is compared with the standard text set until the accuracy of the comparison is greater than the preset accuracy, the named entity recognition
  • the model exits the training to obtain the trained named entity recognition model.
  • the method for solving the sentence combination probability is:
  • w i , w i+1 ...w n represents the word vector of the standard word vector set
  • f LSTM represents the model formula based on the long short-term memory network (LSTM) model to solve the maximum probability of the word vector
  • s j represents the standard word vector set
  • R represents the sentence combination with the maximum probability
  • connection operation is:
  • S represents the sentence after the connection operation
  • R i represents different sentence combinations
  • m is the total number of the different sentence combinations, preferably, the for:
  • R) represents the probability value of R i in all sentence combinations
  • w i represents the word vector of the above-mentioned standard word vector set
  • softmax(y j ) represents named entity recognition based on the softmax function
  • y j represents the part-of-speech result of word j
  • n is the number of clusters mentioned above
  • Sk represents the total number of sentences under the center of the k-th cluster.
  • S5. Receive a text set input by a user, calculate the text set to obtain a word vector set, and input the word vector set into the trained named entity recognition model to obtain a named recognition result.
  • the word vector set obtained by calculating the text set may be sequentially executed according to steps S2 to S3 to obtain the word vector set.
  • the text set entered by the user is: "I have loved traveling since I was a child, and traveling has become a part of my life. Throughout the ages, there have been countless celebrities and everyone likes to "see the mountains and play", such as Xu Xiake and Ban Chao in China. , Zhang Qian, foreign Marco Polo, etc., their footprints all over the world, but also left a valuable fortune for future generations. I also want to make their footprints all over the world's famous mountains and rivers, such as Mount Everest, the Statue of Liberty, etc. "After the trained named entity recognition model, the named entity set is obtained as: "Xu Xiake, Ban Chao, Zhang Qian, Marco Polo, Mount Everest, Statue of Liberty".
  • the invention also provides a Chinese named entity recognition device.
  • FIG. 2 it is a schematic diagram of the internal structure of a Chinese named entity recognition device provided by an embodiment of this application.
  • the Chinese named entity recognition device 1 may be a PC (Personal Computer, personal computer), or a terminal device such as a smart phone, a tablet computer, or a portable computer, or a server.
  • the Chinese named entity recognition device 1 at least includes a memory 11, a processor 12, a communication bus 13, and a network interface 14.
  • the memory 11 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, and the like.
  • the memory 11 may be an internal storage unit of the Chinese named entity recognition device 1 in some embodiments, for example, the hard disk of the Chinese named entity recognition device 1.
  • the memory 11 may also be an external storage device of the Chinese named entity recognition device 1, such as a plug-in hard disk equipped on the Chinese named entity recognition device 1, a smart media card (SMC), and a secure digital (Secure Digital, SD) card, flash card (Flash Card), etc.
  • the memory 11 may also include both an internal storage unit of the Chinese named entity recognition apparatus 1 and an external storage device.
  • the memory 11 can be used not only to store application software and various data installed in the Chinese named entity recognition device 1, such as the code of the Chinese named entity recognition program 01, etc., but also to temporarily store data that has been output or will be output.
  • the processor 12 may be a central processing unit (CPU), controller, microcontroller, microprocessor, or other data processing chip, for running program codes or processing stored in the memory 11 Data, such as the implementation of the Chinese named entity recognition program 01, etc.
  • CPU central processing unit
  • controller microcontroller
  • microprocessor or other data processing chip, for running program codes or processing stored in the memory 11 Data, such as the implementation of the Chinese named entity recognition program 01, etc.
  • the communication bus 13 is used to realize the connection and communication between these components.
  • the network interface 14 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is usually used to establish a communication connection between the apparatus 1 and other electronic devices.
  • the device 1 may also include a user interface.
  • the user interface may include a display (Display) and an input unit such as a keyboard (Keyboard).
  • the optional user interface may also include a standard wired interface and a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, and the like.
  • the display can also be appropriately called a display screen or a display unit, which is used to display the information processed in the Chinese named entity recognition device 1 and to display a visualized user interface.
  • Figure 2 only shows the Chinese named entity recognition device 1 with components 11-14 and the Chinese named entity recognition program 01. Those skilled in the art can understand that the structure shown in Figure 1 does not constitute a Chinese named entity recognition device
  • the definition of 1 may include fewer or more components than shown, or a combination of certain components, or different component arrangements.
  • the Chinese named entity recognition program 01 is stored in the memory 11; when the processor 12 executes the Chinese named entity recognition program 01 stored in the memory 11, the following steps are implemented:
  • Step 1 Receive an original text set containing Chinese named entities, and perform denoising, stop word removal and labeling processing on the original text set to obtain a standard text set.
  • the named entity is a person's name, an organization name, a place name, and all other entities identified by a name
  • the Chinese named entity is a named entity based on the Chinese language.
  • the received original text set includes text data from the Internet
  • the original text set contains a lot of noise, such as hyperlinks, webpage tags, etc., and the noise will affect the recognition of Chinese named entities. It is necessary to perform denoising processing on the original text set.
  • the denoising can be filtered using regular expressions based on programming languages, which can remove noise such as numbers, emoticons, and special symbols such as URL, "@", and "#".
  • stop words refer to words that have no practical meaning and have little effect on the recognition of Chinese named entities in the Chinese text. Because stop words appear frequently, including commonly used pronouns, prepositions, etc., if Retaining the stop words will create a computational burden for the entire Chinese named entity recognition, and even affect the recognition accuracy.
  • the stop word removal may adopt a stop word list filtering method, and a pre-built stop word list is matched with the words in the original text set one by one. If the match is successful, then this word is a stop word. Word, you need to delete the word.
  • the labeling processing includes: performing word segmentation processing on the original text set to obtain a word segmentation text set, and labeling the words in the word segmentation text set according to a preset labeling rule, and when the word tagging in the word segmentation text set is completed Then, according to the word segmentation text collection, it is reorganized into a text collection to obtain a standard text collection.
  • the preset labeling rule may adopt a combined standard rule.
  • the labeling rule is a combination of ⁇ B, I, E, S ⁇ and ⁇ PER, ORG, LOC ⁇ two sets of contents. If the O symbol is added subsequently, it means Not any named entity.
  • I, E, S ⁇ , B represents the first character of the entity
  • I represents the middle character of the entity
  • E represents the last character of the entity
  • S represents that a single character can be represented as an entity
  • the ⁇ PER, ORG ,LOC ⁇ in PER means the name of the person
  • ORG means the name of the organization
  • LOC means the name of the place.
  • the original text set has text data B that is: "Shanghai plans to achieve a per capita GDP of five thousand US dollars by the end of this century.”
  • the standard text data obtained after the standard processing is: " ⁇ /B- LOC sea / E-LOC plan / BO plan / EO to / SO book / SO world / BO period / EO end / SO actual / BO present / EO people / BO average / EO country / BO internal / EO production / BO production / EO total/BO value/EO five/BO thousand/IO US/IO yuan/EO/./SO".
  • Step 2 Perform a clustering operation on the standard text set to obtain the number of clusters and cluster centers.
  • the clustering operation includes: randomly initializing k initial clusters and cluster centers Center k of the k initial clusters, and training the cluster centers according to a cluster update method to obtain training values,
  • the error of the training value is calculated based on the square error, if the error is greater than the preset error threshold, the training is continued, and if the error is less than the preset error threshold, the training is exited to obtain the number of clusters and the cluster center.
  • the number of clusters refers to the number of different categories that can be obtained by the standard text collection according to the clustering operation, and the cluster center refers to the center position of each cluster.
  • cluster update method is:
  • x i is the text data of the standard text set
  • i is the data number
  • C k is the standard text set.
  • the error of calculating the training value based on the square error is:
  • J is the error of the training value
  • K is the number of texts in the standard text set, that is, the value of the initial cluster is between [1, K]
  • dist(x i , Center k ) represents calculation The distance between the data of the standard text set and the cluster center Center k.
  • the distance calculation formula of dist(x i , Center k ) can adopt multiple methods, such as Euclidean distance calculation method, Manhattan distance, Mahalanobis distance, and the like.
  • Step 3 Based on the number of clusters and the cluster centers, a posterior probability model is established for the standard text set, and the posterior probability model is optimized to obtain a standard word vector set.
  • the posterior probability model is:
  • x) is the posterior probability model
  • w i is the word vector in the standard word vector set
  • x is the text in the standard text set
  • x t is the number t in the cluster center
  • j is the word vector number
  • n is the number of clusters
  • w j ) is the prior probability
  • w j ) is:
  • c t represents the number of standard texts numbered t at the center of the cluster
  • D i represents the sample composed of the word vector w i
  • is the adjustment coefficient.
  • Step 4 Input the standard word vector set into the pre-built named entity recognition model and train the trained named entity recognition model.
  • the pre-built named entity recognition model includes a sentence combination layer, a connection layer and a classification layer.
  • the standard word vector set is input to the sentence combination layer to solve the sentence combination probability to obtain the sentence combination with the maximum probability, and the sentence combination with the maximum probability is input to the connection layer for connection operation, based on the classification layer
  • Named entity recognition is performed on the sentence completed by the connection operation to obtain a recognition result set, and the recognition result set is compared with the standard text set until the accuracy of the comparison is greater than the preset accuracy, the named entity recognition
  • the model exits the training to obtain the trained named entity recognition model.
  • the method for solving the sentence combination probability is:
  • w i , w i+1 ...w n represents the word vector of the standard word vector set
  • f LSTM represents the model formula based on the long short-term memory network (LSTM) model to solve the maximum probability of the word vector
  • s j represents the standard word vector set
  • R represents the sentence combination with the maximum probability
  • connection operation is:
  • S represents the sentence after the connection operation
  • R i represents different sentence combinations
  • m is the total number of the different sentence combinations, preferably, the for:
  • R) represents the probability value of R i in all sentence combinations
  • w i represents the word vector of the above-mentioned standard word vector set
  • softmax(y j ) represents named entity recognition based on the softmax function
  • y j represents the part-of-speech result of word j
  • n is the number of clusters mentioned above
  • Sk represents the total number of sentences under the center of the k-th cluster.
  • Step 5 Receive a text set input by a user, calculate the text set to obtain a word vector set, and input the word vector set into the trained named entity recognition model to obtain a named recognition result.
  • the word vector set obtained by calculating the text set may be executed in order according to the step 2 to step 3 to obtain the word vector set.
  • the text set entered by the user is: "I have loved traveling since I was a child, and traveling has become a part of my life. Throughout the ages, there have been countless celebrities and everyone likes to "see the mountains and play", such as Xu Xiake and Ban Chao in China. , Zhang Qian, foreign Marco Polo, etc., their footprints all over the world, but also left a valuable fortune for future generations. I also want to make their footprints all over the world's famous mountains and rivers, such as Mount Everest, the Statue of Liberty, etc. "After the trained named entity recognition model, the named entity set is obtained as: "Xu Xiake, Ban Chao, Zhang Qian, Marco Polo, Mount Everest, Statue of Liberty".
  • the Chinese named entity recognition program can also be divided into one or more modules, and the one or more modules are stored in the memory 11 and run by one or more processors (this embodiment It is executed by the processor 12) to complete this application.
  • the module referred to in this application refers to a series of computer program instruction segments that can complete specific functions, and is used to describe the execution process of the Chinese named entity recognition program in the Chinese named entity recognition device .
  • FIG. 3 a schematic diagram of the program modules of the Chinese named entity recognition program in an embodiment of the Chinese named entity recognition device of this application.
  • the Chinese named entity recognition program can be divided into data receiving and The processing module 10, the number of clusters, the cluster center and word vector calculation module 20, the named entity recognition model training module 30, and the named recognition result output module 40 are exemplary:
  • the data receiving and processing module 10 is configured to receive an original text set containing Chinese named entities, and perform denoising, de-stop word and labeling processing on the original text set to obtain a standard text set.
  • the number of clusters, cluster centers, and word vector calculation module 20 is used to: perform a clustering operation on the standard text set to obtain the number of clusters and cluster centers, based on the number of clusters and the cluster
  • the class center establishes a posterior probability model for the standard text set, and optimizes the posterior probability model to obtain a standard word vector set.
  • the named entity recognition model training module 30 is used to input the standard word vector set into a pre-built named entity recognition model to train a trained named entity recognition model.
  • the named recognition result output module 40 is configured to receive a text set input by a user, calculate the text set to obtain a word vector set, and input the word vector set into the trained named entity recognition model to obtain a named recognition result.
  • the above-mentioned data receiving and processing module 10 the number of clusters, the cluster center and word vector calculation module 20, the named entity recognition model training module 30, the named recognition result output module 40 and other program modules are implemented when they are executed. It is substantially the same as the above-mentioned embodiment, and will not be repeated here.
  • an embodiment of the present application also proposes a computer-readable storage medium having a Chinese named entity recognition program stored on the computer-readable storage medium, and the Chinese named entity recognition program can be executed by one or more processors to To achieve the following operations:
  • An original text set containing Chinese named entities is received, and a standard text set is obtained after denoising, removing stop words, and labeling the original text set.
  • the test probability model obtains the standard word vector set.
  • the standard word vector set is input into a pre-built named entity recognition model to obtain a trained named entity recognition model.
  • Receive a text set input by a user calculate the text set to obtain a word vector set, and input the word vector set into the trained named entity recognition model to obtain a named recognition result.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Character Discrimination (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

本申请涉及一种人工智能技术,揭露了一种中文命名实体识别方法,包括:接收原始文本集并进行去噪、去停用词及标注处理后得到标准文本集,将所述标准文本集进行聚类操作得到类簇个数及聚类中心,基于所述类簇个数及所述聚类中心,将所述标准文本集建立后验概率模型,优化所述后验概率模型得到标准词向量集,将所述标准词向量集输入至预先构建的命名实体识别模型中训练得到训练后的命名实体识别模型,接收用户输入的文本集,计算所述文本集得到词向量集,将所述词向量集输入至所述训练后的命名实体识别模型得到命名识别结果。本申请还提出一种中文命名实体识别装置以及一种计算机可读存储介质。本申请可以实现精准的中文命名实体识别功能。

Description

中文命名实体识别方法、装置及计算机可读存储介质
本申请基于巴黎公约申明享有2019年10月10日递交的申请号为CN201910965462.X、名称为“中文命名实体识别方法、装置及计算机可读存储介质”的中国专利申请的优先权,该中国专利申请的整体内容以参考的方式结合在本申请中。
技术领域
本申请涉及人工智能技术领域,尤其涉及一种基于中文命名实体识别方法、装置及计算机可读存储介质。
背景技术
命名实体识别是指识别语目中人名、地名、组织机构名等命名实体,中文命名实体是以中文为语言基础的命名实体,被广泛且成功地应用于信息抽取、信息检索、信息推荐和机器翻译等任务中。目前对于中文命名实体识别,现有技术方案有基于词和基于字两种方法,所述基于词的方法都需要先对中文语句进行分词,再在分词的结果上进行命名实体识别,这就使得命名实体的结果依赖于分词的准确性;此外,基于字的方法的不足在于缺失了中文词汇的语义信息,因为不同的字在不同的词汇中含义是有差异的,比如“今天”和“天气”,“上马”和“马上”,缺失了词汇信息必然会极大地降低模型的准确率。
发明内容
本申请提供一种中文命名实体识别方法、装置及计算机可读存储介质,其主要目的在于提供一种准确率高的中文命名实体识别方案。
为实现上述目的,本申请提供的一种中文命名实体识别方法,包括:
接收包含中文命名实体的原始文本集,将所述原始文本集进行去噪、去停用词及标注处理后得到标准文本集;
将所述标准文本集进行聚类操作得到类簇个数及聚类中心;
基于所述类簇个数及所述聚类中心,将所述标准文本集建立后验概率模型,优化所述后验概率模型得到标准词向量集;
将所述标准词向量集输入至预先构建的命名实体识别模型中训练得到训练后的命名实体识别模型;
接收用户输入的文本集,计算所述文本集得到词向量集,将所述词向量集输入至所述训练后的命名实体识别模型得到命名识别结果。
此外,为实现上述目的,本申请还提供一种中文命名实体识别装置,该装置包括存储器和处理器,所述存储器中存储有可在所述处理器上运行的中文命名实体识别程序,所述中文命名实体识别程序被所述处理器执行时实现如下步骤:
接收包含中文命名实体的原始文本集,将所述原始文本集进行去噪、去停用词及标注处理后得到标准文本集;
将所述标准文本集进行聚类操作得到类簇个数及聚类中心;
基于所述类簇个数及所述聚类中心,将所述标准文本集建立后验概率模型,优化所述后验概率模型得到标准词向量集;
将所述标准词向量集输入至预先构建的命名实体识别模型中训练得到训练后的命名实体识别模型;
接收用户输入的文本集,计算所述文本集得到词向量集,将所述词向量集输入至所述训练后的命名实体识别模型得到命名识别结果。
此外,为实现上述目的,本申请还提供一种计算机可读存储介质,所述计算机可读存储介质上存储有中文命名实体识别程序,所述中文命名实体识别程序可被一个或者多个处理器执行,以实现如上所述的中文命名实体识别方法的步骤。
本申请通过对包含中文命名实体的原始文本集进行去噪、去停用词及标注处理可以保证数据的纯洁度,同时根据聚类操作初步确定了所述原始文本集的类别,并通过构建后验概率模型优化词向量后输入至命名实体识别模型中训练,总结来说通过前期数据处理、初步类别判断、词向量优化和模型识别可以精确的识别出命名实体。因此本申请提出的中文命名实体识别方法、装置及计算机可读存储介质,可以实现精准的命名实体功能。
附图说明
图1为本申请一实施例提供的中文命名实体识别方法的流程示意图;
图2为本申请一实施例提供的中文命名实体识别装置的内部结构示意图;
图3为本申请一实施例提供的中文命名实体识别装置中中文命名实体识别程序的模块示意图。
本申请目的的实现、功能特点及优点将结合实施例,参照附图做进一步说明。
具体实施方式
应当理解,此处所描述的具体实施例仅仅用以解释本申请,并不用于限定本申请。
本申请提供一种中文命名实体识别方法。参照图1所示,为本申请一实施例提供的中文命名实体识别方法的流程示意图。该方法可以由一个装置执行,该装置可以由软件和/或硬件实现。
在本实施例中,中文命名实体识别方法包括:
S1、接收包含中文命名实体的原始文本集,将所述原始文本集进行去噪、去停用词及标注处理后得到标准文本集。
优选地,命名实体就是人名、机构名、地名以及其他所有以名称为标识的实体,所述中文命名实体是以中文为语言基础的命名实体。如所述原始文本集中有文本数据A为:“今天我有幸去上海,印象最深的是南京路,它是亚洲最繁华的商业街之一,是上海商业的一扇门面,也是许多上海商业走向全国、走向世界的一个平台。听说此刻的南京路经商的多数是浙江人,由此在我心里十分佩服他们”,则所述文本数据A的中文命名实体包括:“上海,南京路,亚洲,浙江人”。
进一步,由于所述接收的原始文本集包括来自于网络的文本数据,因此所述原始文本集包含大量的噪声,例如超链接、网页标签等,所述噪声会对中文命名实体识别产生影响,因此需要对所述原始文本集进行去噪处理。所述去噪可采用基于编程语言为基础的正则表达式进行过滤,可去除如数字、表情符号以及特殊符号如URL、“@”、“#”噪声。
本申请较佳实施例中,停用词指没有什么实际意义且在中文文本内对中文命名实体识别没有什么影响的词,由于停用词出现频率高,包括常用的代 词、介词等,因此若保留停用词,会对整个中文命名实体识别产生计算负担,甚至影响识别准确率。优选地,所述去停用词可采用停用词表过滤法,通过预先构建好的停用词表和所述原始文本集的词语进行一一匹配,如果匹配成功,那么这个词语就是停用词,需要将该词删除。
所述标注处理包括:将所述原始文本集进行分词处理得到分词文本集,根据预先设定的标注规则对所述分词文本集内的词语进行标注,当完成所述分词文本集内的词语标注后,根据所述分词文本集重新组建成文本集得到标准文本集。
所述预先设定的标注规则可采用组合标准规则。如所述原始文本集有文本数据X,X=x_1,x_2,x_3,……,x_n,其中x_1直至x_n表示所述文本数据X中的分词。本申请较佳实施例中,所述标注规则由{B,I,E,S}与{PER,ORG,LOC}两个集合内容两两组合而成,若后续继续加上O符号,则表示不是任何命名实体。进一步地,所述{B,I,E,S}中B表示实体的首字,I表示实体的中字,E表示实体的尾字,S表示单独字符可表示成实体;所述{PER,ORG,LOC}中的PER表示人名,ORG表示机构名,LOC表示地名。
例如,所述原始文本集有文本数据B为:“上海计划到本世纪末实现人均国内生产总值五千美元。”,则经过所述标准处理后得到的标准文本数据为:“上/B-LOC海/E-LOC计/B-O划/E-O到/S-O本/S-O世/B-O纪/E-O末/S-O实/B-O现/E-O人/B-O均/E-O国/B-O内/E-O生/B-O产/E-O总/B-O值/E-O五/B-O千/I-O美/I-O元/E-O/。/S-O”。
S2、将所述标准文本集进行聚类操作得到类簇个数及聚类中心。
较佳地,所述聚类操作包括:随机初始化k个初始类簇和所述k个初始类簇的聚类中心Center k,根据聚类更新方法对所述聚类中心进行训练得到训练值,基于平方误差计算所述训练值的误差,若所述误差大于预设误差阈值则继续训练,若所述误差小于预设误差阈值则退出训练得到类簇个数及聚类中心。
所述类簇个数是指所述标准文本集根据所述聚类操作后可得到多少个类别不同的个数,所述聚类中心是指每个类簇所在的中心位置。
进一步地,所述聚类更新方法为:
Figure PCTCN2019117339-appb-000001
其中,x i为所述标准文本集的文本数据,i为数据编号,C k为所述标准文本集。
所述基于平方误差计算所述训练值的误差为:
Figure PCTCN2019117339-appb-000002
其中,J为所述训练值的误差,K为所述标准文本集的文本数量,即所述初始类簇的取值在[1,K]之间,dist(x i,Center k)表示计算所述标准文本集的数据与所述聚类中心Center k的距离。
优选地,所述dist(x i,Center k)的距离计算公式可采用多种方式,如欧式距离计算方法、曼哈顿距离、马氏距离等。
S3、基于所述类簇个数及所述聚类中心,将所述标准文本集建立后验概率模型,优化所述后验概率模型得到标准词向量集。
所述后验概率模型为:
Figure PCTCN2019117339-appb-000003
其中,P(w j|x)为所述后验概率模型,w i为所述标准词向量集中的词向量,x为所述标准文本集的文本,x t为在类簇中心编号为t的文本,j为所述词向量编号,n为所述类簇个数,p(x t|w j)为先验概率,所述先验概率p(x t|w j)为:
Figure PCTCN2019117339-appb-000004
其中,c t表示在类簇中心编号为t的标准文本数,D i表示所述词向量w i构成的样本,
Figure PCTCN2019117339-appb-000005
表示在x t情况下,所述词向量w i构成的样本,所述
Figure PCTCN2019117339-appb-000006
与所述聚类中心有关,α为调节系数。
S4、将所述标准词向量集输入至预先构建的命名实体识别模型中训练得到训练后的命名实体识别模型。
优选地,所述预先构建的命名实体识别模型包括句子组合层、连接层和分类层。
较佳地,所述标准词向量集输入至所述句子组合层进行句子组合概率求解得到最大概率的句子组合,将所述最大概率的句子组合输入至所述连接层进行连接操作,基于分类层对所述连接操作完成的句子进行命名实体识别得到识别结果集,将所述识别结果集与所述标准文本集进行比对,直至比对的 准确率大于预设准确率,所述命名实体识别模型退出训练得到训练后的命名实体识别模型。
优选地,所述句子组合概率求解方法为:
Figure PCTCN2019117339-appb-000007
Figure PCTCN2019117339-appb-000008
其中,w i,w i+1…w n表示所述标准词向量集的词向量,f LSTM表示基于长短期记忆网络(LSTM)模型下求解词向量最大化概率的模型公式,
Figure PCTCN2019117339-appb-000009
表示最大化的词向量,s j表示所述标准词向量集,R表示所述最大概率的句子组合。
较佳地,所述连接操作为:
Figure PCTCN2019117339-appb-000010
其中,S表示所述连接操作后的句子,R i表示不同的句子组合,
Figure PCTCN2019117339-appb-000011
为所述不同的句子组合的概率,m为所述不同的句子组合的总数,较佳地,所述
Figure PCTCN2019117339-appb-000012
为:
Figure PCTCN2019117339-appb-000013
其中,p(R i|R)表示R i在所有句子组合中出现的概率值,w i表示上述标准词向量集的词向量,
Figure PCTCN2019117339-appb-000014
表示最大化的词向量。
进一步地,所述命名实体识别的方法为:
Figure PCTCN2019117339-appb-000015
其中,softmax(y j)表示基于softmax函数进行的命名实体识别,y j表示词j的词性结果,n为上述类簇个数,S k表示在第k个类簇中心下的句子总数。
S5、接收用户输入的文本集,计算所述文本集得到词向量集,将所述词向量集输入至所述训练后的命名实体识别模型得到命名识别结果。
优选地,计算所述文本集得到词向量集可根据S2~S3步骤顺序执行得到所述词向量集。
较佳地,如用户输入的文本集为:“我从小就爱旅行,旅行已成为我生活中的一部分。古往今来,有无数的名人大家都喜欢“游山玩水”,如中国的徐霞客、班超、张骞,外国的马可波罗等,他们的足迹遍及全世界,也为后人留下了宝贵的财富。我也想像他们那样让自己的足迹遍布世界各地的名山大川,如珠穆朗玛峰,自由女神像等”,经过所述训练后的命名实体识别模型后得到 了命名实体集合为:“徐霞客、班超、张骞、马可波罗、珠穆朗玛峰、自由女神像”。
发明还提供一种中文命名实体识别装置。参照图2所示,为本申请一实施例提供的中文命名实体识别装置的内部结构示意图。
在本实施例中,所述中文命名实体识别装置1可以是PC(Personal Computer,个人电脑),或者是智能手机、平板电脑、便携计算机等终端设备,也可以是一种服务器等。该中文命名实体识别装置1至少包括存储器11、处理器12,通信总线13,以及网络接口14。
其中,存储器11至少包括一种类型的可读存储介质,所述可读存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等)、磁性存储器、磁盘、光盘等。存储器11在一些实施例中可以是中文命名实体识别装置1的内部存储单元,例如该中文命名实体识别装置1的硬盘。存储器11在另一些实施例中也可以是中文命名实体识别装置1的外部存储设备,例如中文命名实体识别装置1上配备的插接式硬盘,智能存储卡(Smart Media Card,SMC),安全数字(Secure Digital,SD)卡,闪存卡(Flash Card)等。进一步地,存储器11还可以既包括中文命名实体识别装置1的内部存储单元也包括外部存储设备。存储器11不仅可以用于存储安装于中文命名实体识别装置1的应用软件及各类数据,例如中文命名实体识别程序01的代码等,还可以用于暂时地存储已经输出或者将要输出的数据。
处理器12在一些实施例中可以是一中央处理器(Central Processing Unit,CPU)、控制器、微控制器、微处理器或其他数据处理芯片,用于运行存储器11中存储的程序代码或处理数据,例如执行中文命名实体识别程序01等。
通信总线13用于实现这些组件之间的连接通信。
网络接口14可选的可以包括标准的有线接口、无线接口(如WI-FI接口),通常用于在该装置1与其他电子设备之间建立通信连接。
可选地,该装置1还可以包括用户接口,用户接口可以包括显示器(Display)、输入单元比如键盘(Keyboard),可选的用户接口还可以包括标准的有线接口、无线接口。可选地,在一些实施例中,显示器可以是LED显示器、液晶显示器、触控式液晶显示器以及OLED(Organic Light-Emitting  Diode,有机发光二极管)触摸器等。其中,显示器也可以适当的称为显示屏或显示单元,用于显示在中文命名实体识别装置1中处理的信息以及用于显示可视化的用户界面。
图2仅示出了具有组件11-14以及中文命名实体识别程序01的中文命名实体识别装置1,本领域技术人员可以理解的是,图1示出的结构并不构成对中文命名实体识别装置1的限定,可以包括比图示更少或者更多的部件,或者组合某些部件,或者不同的部件布置。
在图2所示的装置1实施例中,存储器11中存储有中文命名实体识别程序01;处理器12执行存储器11中存储的中文命名实体识别程序01时实现如下步骤:
步骤一、接收包含中文命名实体的原始文本集,将所述原始文本集进行去噪、去停用词及标注处理后得到标准文本集。
优选地,命名实体就是人名、机构名、地名以及其他所有以名称为标识的实体,所述中文命名实体是以中文为语言基础的命名实体。如所述原始文本集中有文本数据A为:“今天我有幸去上海,印象最深的是南京路,它是亚洲最繁华的商业街之一,是上海商业的一扇门面,也是许多上海商业走向全国、走向世界的一个平台。听说此刻的南京路经商的多数是浙江人,由此在我心里十分佩服他们”,则所述文本数据A的中文命名实体包括:“上海,南京路,亚洲,浙江人”。
进一步,由于所述接收的原始文本集包括来自于网络的文本数据,因此所述原始文本集包含大量的噪声,例如超链接、网页标签等,所述噪声会对中文命名实体识别产生影响,因此需要对所述原始文本集进行去噪处理。所述去噪可采用基于编程语言为基础的正则表达式进行过滤,可去除如数字、表情符号以及特殊符号如URL、“@”、“#”噪声。
本申请较佳实施例中,停用词指没有什么实际意义且在中文文本内对中文命名实体识别没有什么影响的词,由于停用词出现频率高,包括常用的代词、介词等,因此若保留停用词,会对整个中文命名实体识别产生计算负担,甚至影响识别准确率。优选地,所述去停用词可采用停用词表过滤法,通过预先构建好的停用词表和所述原始文本集的词语进行一一匹配,如果匹配成功,那么这个词语就是停用词,需要将该词删除。
所述标注处理包括:将所述原始文本集进行分词处理得到分词文本集,根据预先设定的标注规则对所述分词文本集内的词语进行标注,当完成所述分词文本集内的词语标注后,根据所述分词文本集重新组建成文本集得到标准文本集。
所述预先设定的标注规则可采用组合标准规则。如所述原始文本集有文本数据X,X=x_1,x_2,x_3,……,x_n,其中x_1直至x_n表示所述文本数据X中的分词。本申请较佳实施例中,所述标注规则由{B,I,E,S}与{PER,ORG,LOC}两个集合内容两两组合而成,若后续继续加上O符号,则表示不是任何命名实体。进一步地,所述{B,I,E,S}中B表示实体的首字,I表示实体的中字,E表示实体的尾字,S表示单独字符可表示成实体;所述{PER,ORG,LOC}中的PER表示人名,ORG表示机构名,LOC表示地名。
例如,所述原始文本集有文本数据B为:“上海计划到本世纪末实现人均国内生产总值五千美元。”,则经过所述标准处理后得到的标准文本数据为:“上/B-LOC海/E-LOC计/B-O划/E-O到/S-O本/S-O世/B-O纪/E-O末/S-O实/B-O现/E-O人/B-O均/E-O国/B-O内/E-O生/B-O产/E-O总/B-O值/E-O五/B-O千/I-O美/I-O元/E-O/。/S-O”。
步骤二、将所述标准文本集进行聚类操作得到类簇个数及聚类中心。
较佳地,所述聚类操作包括:随机初始化k个初始类簇和所述k个初始类簇的聚类中心Center k,根据聚类更新方法对所述聚类中心进行训练得到训练值,基于平方误差计算所述训练值的误差,若所述误差大于预设误差阈值则继续训练,若所述误差小于预设误差阈值则退出训练得到类簇个数及聚类中心。
所述类簇个数是指所述标准文本集根据所述聚类操作后可得到多少个类别不同的个数,所述聚类中心是指每个类簇所在的中心位置。
进一步地,所述聚类更新方法为:
Figure PCTCN2019117339-appb-000016
其中,x i为所述标准文本集的文本数据,i为数据编号,C k为所述标准文本集。
所述基于平方误差计算所述训练值的误差为:
Figure PCTCN2019117339-appb-000017
其中,J为所述训练值的误差,K为所述标准文本集的文本数量,即所述初始类簇的取值在[1,K]之间,dist(x i,Center k)表示计算所述标准文本集的数据与所述聚类中心Center k的距离。
优选地,所述dist(x i,Center k)的距离计算公式可采用多种方式,如欧式距离计算方法、曼哈顿距离、马氏距离等。
步骤三、基于所述类簇个数及所述聚类中心,将所述标准文本集建立后验概率模型,优化所述后验概率模型得到标准词向量集。
所述后验概率模型为:
Figure PCTCN2019117339-appb-000018
其中,P(w j|x)为所述后验概率模型,w i为所述标准词向量集中的词向量,x为所述标准文本集的文本,x t为在类簇中心编号为t的文本,j为所述词向量编号,n为所述类簇个数,p(x t|w j)为先验概率,所述先验概率p(x t|w j)为:
Figure PCTCN2019117339-appb-000019
其中,c t表示在类簇中心编号为t的标准文本数,D i表示所述词向量w i构成的样本,
Figure PCTCN2019117339-appb-000020
表示在x t情况下,所述词向量w i构成的样本,所述
Figure PCTCN2019117339-appb-000021
与所述聚类中心有关,α为调节系数。
步骤四、将所述标准词向量集输入至预先构建的命名实体识别模型中训练得到训练后的命名实体识别模型。
优选地,所述预先构建的命名实体识别模型包括句子组合层、连接层和分类层。
较佳地,所述标准词向量集输入至所述句子组合层进行句子组合概率求解得到最大概率的句子组合,将所述最大概率的句子组合输入至所述连接层进行连接操作,基于分类层对所述连接操作完成的句子进行命名实体识别得到识别结果集,将所述识别结果集与所述标准文本集进行比对,直至比对的准确率大于预设准确率,所述命名实体识别模型退出训练得到训练后的命名实体识别模型。
优选地,所述句子组合概率求解方法为:
Figure PCTCN2019117339-appb-000022
Figure PCTCN2019117339-appb-000023
其中,w i,w i+1…w n表示所述标准词向量集的词向量,f LSTM表示基于长短期记忆网络(LSTM)模型下求解词向量最大化概率的模型公式,
Figure PCTCN2019117339-appb-000024
表示最大化的词向量,s j表示所述标准词向量集,R表示所述最大概率的句子组合。
较佳地,所述连接操作为:
Figure PCTCN2019117339-appb-000025
其中,S表示所述连接操作后的句子,R i表示不同的句子组合,
Figure PCTCN2019117339-appb-000026
为所述不同的句子组合的概率,m为所述不同的句子组合的总数,较佳地,所述
Figure PCTCN2019117339-appb-000027
为:
Figure PCTCN2019117339-appb-000028
其中,p(R i|R)表示R i在所有句子组合中出现的概率值,w i表示上述标准词向量集的词向量,
Figure PCTCN2019117339-appb-000029
表示最大化的词向量。
进一步地,所述命名实体识别的方法为:
Figure PCTCN2019117339-appb-000030
其中,softmax(y j)表示基于softmax函数进行的命名实体识别,y j表示词j的词性结果,n为上述类簇个数,S k表示在第k个类簇中心下的句子总数。
步骤五、接收用户输入的文本集,计算所述文本集得到词向量集,将所述词向量集输入至所述训练后的命名实体识别模型得到命名识别结果。
优选地,计算所述文本集得到词向量集可根据步骤二~步骤三顺序执行得到所述词向量集。
较佳地,如用户输入的文本集为:“我从小就爱旅行,旅行已成为我生活中的一部分。古往今来,有无数的名人大家都喜欢“游山玩水”,如中国的徐霞客、班超、张骞,外国的马可波罗等,他们的足迹遍及全世界,也为后人留下了宝贵的财富。我也想像他们那样让自己的足迹遍布世界各地的名山大川,如珠穆朗玛峰,自由女神像等”,经过所述训练后的命名实体识别模型后得到了命名实体集合为:“徐霞客、班超、张骞、马可波罗、珠穆朗玛峰、自由女神像”。
可选地,在其他实施例中,中文命名实体识别程序还可以被分割为一个或者多个模块,一个或者多个模块被存储于存储器11中,并由一个或多个处理器(本实施例为处理器12)所执行以完成本申请,本申请所称的模块是指能够完成特定功能的一系列计算机程序指令段,用于描述中文命名实体识别程序在中文命名实体识别装置中的执行过程。
例如,参照图3所示,为本申请中文命名实体识别装置一实施例中的中文命名实体识别程序的程序模块示意图,该实施例中,所述中文命名实体识别程序可以被分割为数据接收及处理模块10、类簇个数、聚类中心和词向量计算模块20、命名实体识别模型训练模块30、命名识别结果输出模块40示例性地:
所述数据接收及处理模块10用于:接收包含中文命名实体的原始文本集,将所述原始文本集进行去噪、去停用词及标注处理后得到标准文本集。
所述类簇个数、聚类中心和词向量计算模块20用于:将所述标准文本集进行聚类操作得到类簇个数及聚类中心,基于所述类簇个数及所述聚类中心,将所述标准文本集建立后验概率模型,优化所述后验概率模型得到标准词向量集。
所述命名实体识别模型训练模块30用于:将所述标准词向量集输入至预先构建的命名实体识别模型中训练得到训练后的命名实体识别模型。
所述命名识别结果输出模块40用于:接收用户输入的文本集,计算所述文本集得到词向量集,将所述词向量集输入至所述训练后的命名实体识别模型得到命名识别结果。
上述数据接收及处理模块10、类簇个数、聚类中心和词向量计算模块20、命名实体识别模型训练模块30、命名识别结果输出模块40等程序模块被执行时所实现的功能或操作步骤与上述实施例大体相同,在此不再赘述。
此外,本申请实施例还提出一种计算机可读存储介质,所述计算机可读存储介质上存储有中文命名实体识别程序,所述中文命名实体识别程序可被一个或多个处理器执行,以实现如下操作:
接收包含中文命名实体的原始文本集,将所述原始文本集进行去噪、去停用词及标注处理后得到标准文本集。
将所述标准文本集进行聚类操作得到类簇个数及聚类中心,基于所述类簇个数及所述聚类中心,将所述标准文本集建立后验概率模型,优化所述后验概率模型得到标准词向量集。
将所述标准词向量集输入至预先构建的命名实体识别模型中训练得到训练后的命名实体识别模型。
接收用户输入的文本集,计算所述文本集得到词向量集,将所述词向量集输入至所述训练后的命名实体识别模型得到命名识别结果。
需要说明的是,上述本申请实施例序号仅仅为了描述,不代表实施例的优劣。并且本文中的术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、装置、物品或者方法不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、装置、物品或者方法所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、装置、物品或者方法中还存在另外的相同要素。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在如上所述的一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,或者网络设备等)执行本申请各个实施例所述的方法。
以上仅为本申请的优选实施例,并非因此限制本申请的专利范围,凡是利用本申请说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本申请的专利保护范围内。

Claims (20)

  1. 一种中文命名实体识别方法,其特征在于,所述方法包括:
    接收包含中文命名实体的原始文本集,将所述原始文本集进行去噪、去停用词及标注处理后得到标准文本集;
    将所述标准文本集进行聚类操作得到类簇个数及聚类中心;
    基于所述类簇个数及所述聚类中心,将所述标准文本集建立后验概率模型,优化所述后验概率模型得到标准词向量集;
    将所述标准词向量集输入至预先构建的命名实体识别模型中训练得到训练后的命名实体识别模型;
    接收用户输入的文本集,计算所述文本集得到词向量集,将所述词向量集输入至所述训练后的命名实体识别模型得到命名识别结果。
  2. 如权利要求1所述的中文命名实体识别方法,其特征在于,所述标注处理包括:
    将所述原始文本集进行分词处理得到分词文本集;
    根据预先设定的标注规则对所述分词文本集内的词语进行标注;
    根据标注之后的所述分词文本集重新组建成文本集得到标准文本集。
  3. 如权利要求1或2所述的中文命名实体识别方法,其特征在于,所述聚类操作包括:
    随机初始化k个初始类簇和所述k个初始类簇的聚类中心Center k
    根据聚类更新方法对所述聚类中心Center k进行训练得到训练值;
    基于平方误差计算所述训练值的误差,若所述误差大于预设误差阈值则继续训练,若所述误差小于预设误差阈值则退出训练得到训练后的类簇个数及聚类中心。
  4. 如权利要求3所述的中文命名实体识别方法,其特征在于,所述聚类更新方法为:
    Figure PCTCN2019117339-appb-100001
    其中,x i为所述标准文本集的数据,i为编号,C k为所述标准文本集;
    所述基于平方误差计算所述训练值的误差为:
    Figure PCTCN2019117339-appb-100002
    其中,J为所述训练值的误差,K为所述标准文本集的文本数量,即所述初始类簇的取值在[1,K]之间,dist(x i,Center k)表示计算所述标准文本集的数据x i与所述聚类中心Center k的距离。
  5. 如权利要求1所述的中文命名实体识别方法,其特征在于,所述后验概率模型为:
    Figure PCTCN2019117339-appb-100003
    其中,P(w j|x)为后验概率模型,w i为标准词向量集中的词向量,x为标准文本集的文本,x t为在类簇中心编号为t的文本,j为词向量编号,n为类簇个数,p(x t|w j)为先验概率,所述先验概率p(x t|w j)为:
    Figure PCTCN2019117339-appb-100004
    其中,c t表示在类簇中心编号为t的标准文本数,D i表示词向量w i构成的样本,
    Figure PCTCN2019117339-appb-100005
    表示在x t情况下,词向量w i构成的样本,α为调节系数。
  6. 如权利要求1所述的中文命名实体识别方法,其特征在于,所述预先构建的命名实体识别模型包括句子组合层、连接层和分类层;及
    所述将所述标准词向量集输入至预先构建的命名实体识别模型中训练得到训练后的命名实体识别模型,包括:
    将所述标准词向量集输入至所述句子组合层进行句子组合概率求解得到最大概率的句子组合;
    将所述最大概率的句子组合输入至所述连接层进行连接操作;
    基于分类层对所述连接操作完成的句子进行命名实体识别得到识别结果集;
    将所述识别结果集与所述标准文本集进行比对,直至比对的准确率大于预设准确率,所述命名实体识别模型退出训练得到训练后的命名实体识别模型。
  7. 如权利要求6所述的中文命名实体识别方法,其特征在于,所述命名实体识别的计算公式为:
    Figure PCTCN2019117339-appb-100006
    其中,softmax(y j)表示基于softmax函数进行的命名实体识别,y j表示词j的词性结果,n为类簇个数,S k表示在第k个类簇中心下的句子总数。
  8. 一种中文命名实体识别装置,其特征在于,所述装置包括存储器和处理器,所述存储器上存储有可在所述处理器上运行的中文命名实体识别程序,所述中文命名实体识别程序被所述处理器执行时实现如下步骤:
    接收包含中文命名实体的原始文本集,将所述原始文本集进行去噪、去停用词及标注处理后得到标准文本集;
    将所述标准文本集进行聚类操作得到类簇个数及聚类中心;
    基于所述类簇个数及所述聚类中心,将所述标准文本集建立后验概率模型,优化所述后验概率模型得到标准词向量集;
    将所述标准词向量集输入至预先构建的命名实体识别模型中训练得到训练后的命名实体识别模型;
    接收用户输入的文本集,计算所述文本集得到词向量集,将所述词向量集输入至所述训练后的命名实体识别模型得到命名识别结果。
  9. 如权利要求8所述的中文命名实体识别装置,其特征在于,所述标注处理包括:
    将所述原始文本集进行分词处理得到分词文本集;
    根据预先设定的标注规则对所述分词文本集内的词语进行标注;
    根据标注之后的所述分词文本集重新组建成文本集得到标准文本集。
  10. 如权利要求8或9所述的中文命名实体识别装置,其特征在于,所述聚类操作包括:
    随机初始化k个初始类簇和所述k个初始类簇的聚类中心Center k
    根据聚类更新方法对所述聚类中心Center k进行训练得到训练值;
    基于平方误差计算所述训练值的误差,若所述误差大于预设误差阈值则继续训练,若所述误差小于预设误差阈值则退出训练得到训练后的类簇个数及聚类中心。
  11. 如权利要求10中所述的中文命名实体识别装置,其特征在于,所述聚类更新方法为:
    Figure PCTCN2019117339-appb-100007
    其中,x i为所述标准文本集的数据,i为编号,C k为所述标准文本集;
    所述基于平方误差计算所述训练值的误差为:
    Figure PCTCN2019117339-appb-100008
    其中,J为所述训练值的误差,K为所述标准文本集的文本数量,即所述初始类簇的取值在[1,K]之间,dist(x i,Center k)表示计算所述标准文本集的数据x i与所述聚类中心Center k的距离。
  12. 如权利要求8所述的中文命名实体识别装置,其特征在于,所述后验概率模型为:
    Figure PCTCN2019117339-appb-100009
    其中,P(w j|x)为后验概率模型,w i为标准词向量集中的词向量,x为标准文本集的文本,x t为在类簇中心编号为t的文本,j为词向量编号,n为类簇个数,p(x t|w j)为先验概率,所述先验概率p(x t|w j)为:
    Figure PCTCN2019117339-appb-100010
    其中,c t表示在类簇中心编号为t的标准文本数,D i表示词向量w i构成的样本,
    Figure PCTCN2019117339-appb-100011
    表示在x t情况下,词向量w i构成的样本,α为调节系数。
  13. 如权利要求8所述的中文命名实体识别装置,其特征在于,所述预先构建的命名实体识别模型包括句子组合层、连接层和分类层;及
    所述将所述标准词向量集输入至预先构建的命名实体识别模型中训练得到训练后的命名实体识别模型,包括:
    将所述标准词向量集输入至所述句子组合层进行句子组合概率求解得到最大概率的句子组合;
    将所述最大概率的句子组合输入至所述连接层进行连接操作;
    基于分类层对所述连接操作完成的句子进行命名实体识别得到识别结果集;
    将所述识别结果集与所述标准文本集进行比对,直至比对的准确率大于预设准确率,所述命名实体识别模型退出训练得到训练后的命名实体识别模 型。
  14. 如权利要求13所述的中文命名实体识别装置,其特征在于,所述命名实体识别的计算公式为:
    Figure PCTCN2019117339-appb-100012
    其中,softmax(y j)表示基于softmax函数进行的命名实体识别,y j表示词j的词性结果,n为类簇个数,S k表示在第k个类簇中心下的句子总数。
  15. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质上存储有中文命名实体识别程序,所述中文命名实体识别程序可被一个或者多个处理器执行,以实现如下步骤:
    接收包含中文命名实体的原始文本集,将所述原始文本集进行去噪、去停用词及标注处理后得到标准文本集;
    将所述标准文本集进行聚类操作得到类簇个数及聚类中心;
    基于所述类簇个数及所述聚类中心,将所述标准文本集建立后验概率模型,优化所述后验概率模型得到标准词向量集;
    将所述标准词向量集输入至预先构建的命名实体识别模型中训练得到训练后的命名实体识别模型;
    接收用户输入的文本集,计算所述文本集得到词向量集,将所述词向量集输入至所述训练后的命名实体识别模型得到命名识别结果。
  16. 如权利要求15所述的计算机可读存储介质,其特征在于,所述标注处理包括:
    将所述原始文本集进行分词处理得到分词文本集;
    根据预先设定的标注规则对所述分词文本集内的词语进行标注;
    根据标注之后的所述分词文本集重新组建成文本集得到标准文本集。
  17. 如权利要求15或16所述的计算机可读存储介质,其特征在于,所述聚类操作包括:
    随机初始化k个初始类簇和所述k个初始类簇的聚类中心Center k
    根据聚类更新方法对所述聚类中心Center k进行训练得到训练值;
    基于平方误差计算所述训练值的误差,若所述误差大于预设误差阈值则继续训练,若所述误差小于预设误差阈值则退出训练得到训练后的类簇个数及聚类中心。
  18. 如权利要求17所述的计算机可读存储介质,其特征在于,所述聚类更新方法为:
    Figure PCTCN2019117339-appb-100013
    其中,x i为所述标准文本集的数据,i为编号,C k为所述标准文本集;
    所述基于平方误差计算所述训练值的误差为:
    Figure PCTCN2019117339-appb-100014
    其中,J为所述训练值的误差,K为所述标准文本集的文本数量,即所述初始类簇的取值在[1,K]之间,dist(x i,Center k)表示计算所述标准文本集的数据x i与所述聚类中心Center k的距离。
  19. 如权利要求15所述的计算机可读存储介质,其特征在于,所述后验概率模型为:
    Figure PCTCN2019117339-appb-100015
    其中,P(w j|x)为后验概率模型,w i为标准词向量集中的词向量,x为标准文本集的文本,x t为在类簇中心编号为t的文本,j为词向量编号,n为类簇个数,p(x t|w j)为先验概率,所述先验概率p(x t|w j)为:
    Figure PCTCN2019117339-appb-100016
    其中,c t表示在类簇中心编号为t的标准文本数,D i表示词向量w i构成的样本,
    Figure PCTCN2019117339-appb-100017
    表示在x t情况下,词向量w i构成的样本,α为调节系数。
  20. 如权利要求15所述的计算机可读存储介质,其特征在于,所述预先构建的命名实体识别模型包括句子组合层、连接层和分类层;及
    所述将所述标准词向量集输入至预先构建的命名实体识别模型中训练得到训练后的命名实体识别模型,包括:
    将所述标准词向量集输入至所述句子组合层进行句子组合概率求解得到最大概率的句子组合;
    将所述最大概率的句子组合输入至所述连接层进行连接操作;
    基于分类层对所述连接操作完成的句子进行命名实体识别得到识别结果集;
    将所述识别结果集与所述标准文本集进行比对,直至比对的准确率大于预设准确率,所述命名实体识别模型退出训练得到训练后的命名实体识别模型。
PCT/CN2019/117339 2019-10-10 2019-11-12 中文命名实体识别方法、装置及计算机可读存储介质 WO2021068329A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910965462.X 2019-10-10
CN201910965462.XA CN110909548B (zh) 2019-10-10 2019-10-10 中文命名实体识别方法、装置及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2021068329A1 true WO2021068329A1 (zh) 2021-04-15

Family

ID=69815495

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/117339 WO2021068329A1 (zh) 2019-10-10 2019-11-12 中文命名实体识别方法、装置及计算机可读存储介质

Country Status (2)

Country Link
CN (1) CN110909548B (zh)
WO (1) WO2021068329A1 (zh)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113255355A (zh) * 2021-06-08 2021-08-13 北京明略软件系统有限公司 文本信息中的实体识别方法、装置、电子设备和存储介质
CN113516196A (zh) * 2021-07-20 2021-10-19 云知声智能科技股份有限公司 命名实体识别数据增强的方法、装置、电子设备和介质
CN113515938A (zh) * 2021-05-12 2021-10-19 平安国际智慧城市科技股份有限公司 语言模型训练方法、装置、设备及计算机可读存储介质
CN113571052A (zh) * 2021-07-22 2021-10-29 湖北亿咖通科技有限公司 一种噪声提取及指令识别方法和电子设备
CN113707300A (zh) * 2021-08-30 2021-11-26 康键信息技术(深圳)有限公司 基于人工智能的搜索意图识别方法、装置、设备及介质
CN113836305A (zh) * 2021-09-29 2021-12-24 有米科技股份有限公司 基于文本的行业类别识别方法及装置
CN114741483A (zh) * 2022-06-09 2022-07-12 浙江香侬慧语科技有限责任公司 数据识别的方法和装置
CN115905456A (zh) * 2023-01-06 2023-04-04 浪潮电子信息产业股份有限公司 一种数据识别方法、系统、设备及计算机可读存储介质
CN115964658A (zh) * 2022-10-11 2023-04-14 北京睿企信息科技有限公司 一种基于聚类的分类标签更新方法及系统
CN117114004A (zh) * 2023-10-25 2023-11-24 江西师范大学 一种基于门控纠偏的少样本两阶段命名实体识别方法
CN117252202A (zh) * 2023-11-20 2023-12-19 江西风向标智能科技有限公司 高中数学题目中命名实体的构建方法、识别方法和系统

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110909548B (zh) * 2019-10-10 2024-03-12 平安科技(深圳)有限公司 中文命名实体识别方法、装置及计算机可读存储介质
CN111967437A (zh) * 2020-09-03 2020-11-20 平安国际智慧城市科技股份有限公司 文本识别方法、装置、设备及存储介质
CN112215006B (zh) * 2020-10-22 2022-08-09 上海交通大学 机构命名实体归一化方法和系统
CN112269875B (zh) * 2020-10-23 2023-07-25 中国平安人寿保险股份有限公司 文本分类方法、装置、电子设备及存储介质
CN113283242B (zh) * 2021-05-31 2024-04-26 西安理工大学 一种基于聚类与预训练模型结合的命名实体识别方法
CN114647727A (zh) * 2022-03-17 2022-06-21 北京百度网讯科技有限公司 应用于实体信息识别的模型训练方法、装置和设备
CN115713083B (zh) * 2022-11-23 2023-12-15 北京约来健康科技有限公司 一种中医药文本关键信息的智能抽取方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8140330B2 (en) * 2008-06-13 2012-03-20 Robert Bosch Gmbh System and method for detecting repeated patterns in dialog systems
CN109753653A (zh) * 2018-12-25 2019-05-14 金蝶软件(中国)有限公司 实体名称识别方法、装置、计算机设备和存储介质
CN109871545A (zh) * 2019-04-22 2019-06-11 京东方科技集团股份有限公司 命名实体识别方法及装置
CN109902307A (zh) * 2019-03-15 2019-06-18 北京金山数字娱乐科技有限公司 命名实体识别方法、命名实体识别模型的训练方法及装置
CN110287479A (zh) * 2019-05-20 2019-09-27 平安科技(深圳)有限公司 命名实体识别方法、电子装置及存储介质
CN110909548A (zh) * 2019-10-10 2020-03-24 平安科技(深圳)有限公司 中文命名实体识别方法、装置及计算机可读存储介质

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9292797B2 (en) * 2012-12-14 2016-03-22 International Business Machines Corporation Semi-supervised data integration model for named entity classification
US20150088511A1 (en) * 2013-09-24 2015-03-26 Verizon Patent And Licensing Inc. Named-entity based speech recognition
US11030407B2 (en) * 2016-01-28 2021-06-08 Rakuten, Inc. Computer system, method and program for performing multilingual named entity recognition model transfer
CN108268447B (zh) * 2018-01-22 2020-12-01 河海大学 一种藏文命名实体的标注方法
CN109446517B (zh) * 2018-10-08 2022-07-05 平安科技(深圳)有限公司 指代消解方法、电子装置及计算机可读存储介质

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8140330B2 (en) * 2008-06-13 2012-03-20 Robert Bosch Gmbh System and method for detecting repeated patterns in dialog systems
CN109753653A (zh) * 2018-12-25 2019-05-14 金蝶软件(中国)有限公司 实体名称识别方法、装置、计算机设备和存储介质
CN109902307A (zh) * 2019-03-15 2019-06-18 北京金山数字娱乐科技有限公司 命名实体识别方法、命名实体识别模型的训练方法及装置
CN109871545A (zh) * 2019-04-22 2019-06-11 京东方科技集团股份有限公司 命名实体识别方法及装置
CN110287479A (zh) * 2019-05-20 2019-09-27 平安科技(深圳)有限公司 命名实体识别方法、电子装置及存储介质
CN110909548A (zh) * 2019-10-10 2020-03-24 平安科技(深圳)有限公司 中文命名实体识别方法、装置及计算机可读存储介质

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113515938B (zh) * 2021-05-12 2023-10-20 平安国际智慧城市科技股份有限公司 语言模型训练方法、装置、设备及计算机可读存储介质
CN113515938A (zh) * 2021-05-12 2021-10-19 平安国际智慧城市科技股份有限公司 语言模型训练方法、装置、设备及计算机可读存储介质
CN113255355A (zh) * 2021-06-08 2021-08-13 北京明略软件系统有限公司 文本信息中的实体识别方法、装置、电子设备和存储介质
CN113516196A (zh) * 2021-07-20 2021-10-19 云知声智能科技股份有限公司 命名实体识别数据增强的方法、装置、电子设备和介质
CN113516196B (zh) * 2021-07-20 2024-04-12 云知声智能科技股份有限公司 命名实体识别数据增强的方法、装置、电子设备和介质
CN113571052A (zh) * 2021-07-22 2021-10-29 湖北亿咖通科技有限公司 一种噪声提取及指令识别方法和电子设备
CN113707300A (zh) * 2021-08-30 2021-11-26 康键信息技术(深圳)有限公司 基于人工智能的搜索意图识别方法、装置、设备及介质
CN113836305A (zh) * 2021-09-29 2021-12-24 有米科技股份有限公司 基于文本的行业类别识别方法及装置
CN113836305B (zh) * 2021-09-29 2024-03-22 有米科技股份有限公司 基于文本的行业类别识别方法及装置
CN114741483A (zh) * 2022-06-09 2022-07-12 浙江香侬慧语科技有限责任公司 数据识别的方法和装置
CN114741483B (zh) * 2022-06-09 2022-09-16 浙江香侬慧语科技有限责任公司 数据识别的方法和装置
CN115964658A (zh) * 2022-10-11 2023-04-14 北京睿企信息科技有限公司 一种基于聚类的分类标签更新方法及系统
CN115964658B (zh) * 2022-10-11 2023-10-20 北京睿企信息科技有限公司 一种基于聚类的分类标签更新方法及系统
CN115905456A (zh) * 2023-01-06 2023-04-04 浪潮电子信息产业股份有限公司 一种数据识别方法、系统、设备及计算机可读存储介质
CN117114004B (zh) * 2023-10-25 2024-01-16 江西师范大学 一种基于门控纠偏的少样本两阶段命名实体识别方法
CN117114004A (zh) * 2023-10-25 2023-11-24 江西师范大学 一种基于门控纠偏的少样本两阶段命名实体识别方法
CN117252202B (zh) * 2023-11-20 2024-03-19 江西风向标智能科技有限公司 高中数学题目中命名实体的构建方法、识别方法和系统
CN117252202A (zh) * 2023-11-20 2023-12-19 江西风向标智能科技有限公司 高中数学题目中命名实体的构建方法、识别方法和系统

Also Published As

Publication number Publication date
CN110909548A (zh) 2020-03-24
CN110909548B (zh) 2024-03-12

Similar Documents

Publication Publication Date Title
WO2021068329A1 (zh) 中文命名实体识别方法、装置及计算机可读存储介质
WO2021212682A1 (zh) 知识抽取方法、装置、电子设备及存储介质
WO2020232861A1 (zh) 命名实体识别方法、电子装置及存储介质
WO2021135910A1 (zh) 基于机器阅读理解的信息抽取方法、及其相关设备
WO2019214145A1 (zh) 文本情绪分析方法、装置及存储介质
WO2020252919A1 (zh) 识别简历的方法及装置、计算机设备、存储介质
WO2021109787A1 (zh) 同义词挖掘方法、同义词词典的应用方法、医疗同义词挖掘方法、医疗同义词词典的应用方法、同义词挖掘装置及存储介质
WO2019218514A1 (zh) 网页目标信息的提取方法、装置及存储介质
WO2019041521A1 (zh) 用户关键词提取装置、方法及计算机可读存储介质
CN108460011B (zh) 一种实体概念标注方法及系统
WO2021042516A1 (zh) 命名实体识别方法、装置及计算机可读存储介质
CN112101041B (zh) 基于语义相似度的实体关系抽取方法、装置、设备及介质
CN108804423B (zh) 医疗文本特征提取与自动匹配方法和系统
CN110162771B (zh) 事件触发词的识别方法、装置、电子设备
WO2022222300A1 (zh) 开放关系抽取方法、装置、电子设备及存储介质
CN112287069B (zh) 基于语音语义的信息检索方法、装置及计算机设备
WO2022116435A1 (zh) 标题生成方法、装置、电子设备及存储介质
CN112131881B (zh) 信息抽取方法及装置、电子设备、存储介质
WO2020253043A1 (zh) 智能文本分类方法、装置及计算机可读存储介质
CN109460725B (zh) 小票消费明细内容融合及提取方法、设备以及存储介质
WO2022032917A1 (zh) 一种基于RNN的Webshell检测方法及装置
WO2021129123A1 (zh) 语料数据处理方法、装置、服务器和存储介质
WO2021174864A1 (zh) 基于少量训练样本的信息抽取方法及装置
WO2022160454A1 (zh) 医疗文献的检索方法、装置、电子设备及存储介质
WO2021000391A1 (zh) 文本智能化清洗方法、装置及计算机可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19948464

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19948464

Country of ref document: EP

Kind code of ref document: A1