CN107797988A - Bi-LSTM-based mixed corpus named entity recognition method - Google Patents

Bi-LSTM-based mixed corpus named entity recognition method

Info

Publication number
CN107797988A
Authority
CN
China
Prior art keywords
data
label
character
lstm
word
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201710947002.5A
Other languages
Chinese (zh)
Inventor
唐华阳
岳永鹏
刘林峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Future Information Technology Co Ltd
Original Assignee
Beijing Future Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Future Information Technology Co Ltd filed Critical Beijing Future Information Technology Co Ltd
Priority to CN201710947002.5A
Publication of CN107797988A
Legal status: Withdrawn

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/20 - Natural language analysis
    • G06F40/279 - Recognition of textual entities
    • G06F40/205 - Parsing
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs

Abstract

The present invention relates to a Bi-LSTM-based mixed corpus named entity recognition method. In the training stage, the method converts labeled training mixed corpus data into character-level mixed corpus data and then trains a Bi-LSTM-based deep learning model; in the prediction stage, it converts unlabeled test mixed corpus data into character-level mixed corpus data and performs prediction with the deep learning model trained in the training stage. By using character-level rather than word-level vectors, the present invention is immune to the influence of word segmentation precision and also avoids the unknown word problem; by using the bidirectional long short-term memory neural network Bi-LSTM, it can greatly improve named entity recognition precision compared with traditional algorithms; and by training the model directly on the mixed corpus, it does not need to detect and separate the individual languages of the mixed corpus, finally achieving the goal of recognizing mixed corpora.

Description

Bi-LSTM-based mixed corpus named entity recognition method
Technical Field
The invention belongs to the technical field of information, and particularly relates to a Bi-LSTM-based mixed corpus named entity recognition method.
Background
Named Entity Recognition (NER) refers to recognizing entities with specific meanings in text, mainly including person names, place names, organization names, proper nouns, and the like.
Practical application scenarios of named entity recognition include the following:
Scenario 1: event detection. Place, time, and person are basic components of an event; when constructing an event summary, the related persons, places, organizations, and so on can be highlighted. In an event search system, the related persons, times, and places can serve as index keywords. The relationships among the components of an event describe the event in more detail at the semantic level.
Scenario 2: information retrieval. Named entities can be used to enhance and improve the effect of a search system. When a user enters "重大" (an abbreviation of Chongqing University that also means "significant"), it can be found that the user is more likely searching for Chongqing University than for the corresponding adjective meaning. In addition, when the inverted index is built, query efficiency drops if a named entity is cut into multiple words. Furthermore, search engines are evolving toward semantic understanding and computing answers.
Scenario 3: semantic networks. A semantic network generally contains concepts and instances and the relationships between them; for example, "country" is a concept, "China" is an instance, and "China is a country" expresses the relationship between the instance and the concept. A large proportion of the instances in a semantic network are named entities.
Scenario 4: machine translation. The translation of named entities often follows special rules; for example, Chinese person names are translated into English using the Pinyin of the surname and given name, whereas common words are translated into the corresponding English words. Accurately identifying the named entities in a text is therefore important for improving the effect of machine translation.
Scenario 5: question answering systems. Accurately identifying the components of a question, its relevant domain, and the relevant concepts is particularly important. At present, most question answering systems can only search for answers rather than compute them. Searching for answers means matching keywords and letting the user extract the answer manually from the search results; a more friendly way is to compute the answer and present it to the user. Some questions in a question answering system require the relationships between entities to be considered, for example "the forty-fifth president of the United States", for which current search engines return the answer "Trump" in a special format.
The conventional entity recognition procedure for mixed texts containing multiple languages is as follows:
multilingual input text -> language detection (by paragraph or by sentence) -> entity recognition
Entity recognition for each language can adopt dictionary-based, statistics-based, or artificial neural network model-based approaches. Dictionary-based named entity recognition works roughly as follows: entity vocabularies of as many different categories as possible are collected into a dictionary; during recognition, the text is matched against the words in the dictionary, and matched entity words are tagged with the corresponding entity category. Methods based on word frequency statistics, such as CRF (conditional random field), learn the semantic information of the words before and after a word and then make a classification decision.
The above methods have the following disadvantages:
Disadvantage 1: the detection granularity for multiple languages is hard to choose, and word segmentation loses precision when a language is not detected. A document containing multiple languages must first be split into paragraphs and each paragraph subjected to language detection; if a paragraph still contains multiple languages, it must be further split into sentences, and a sentence containing multiple languages cannot be split further. Because word segmentation depends heavily on its models and corpora, the result is that segmentation information is lost when a language is not detected.
Disadvantage 2: HMM (hidden Markov model) and CRF (conditional random field) methods based on word frequency statistics can only relate the current word to the semantics of the preceding word, so their recognition accuracy is not high enough, and the recognition rate for unknown words in particular is low.
Disadvantage 3: methods based on artificial neural network models suffer from vanishing gradients during training, so in practice the number of network layers is small and the advantage in the final named entity recognition result is not obvious.
Disclosure of Invention
In view of the above problems, the present invention provides a mixed corpus named entity recognition method based on a bidirectional long short-term memory neural network (Bi-LSTM), which can effectively improve the recognition precision of named entities in mixed corpora.
In the present invention, a mixed corpus means that the training or prediction data contains corpus data in at least two languages; an in-vocabulary word is a word that appears in the corpus vocabulary; an unknown word is a word that does not appear in the corpus vocabulary.
The technical scheme adopted by the invention is as follows:
a Bi-LSTM-based mixed corpus named entity recognition method comprises the following steps:
1) Converting the original mixed corpus data OrgData into character-level mixed corpus data NewData;
2) Counting the characters in NewData to obtain a character set CharSet, and numbering each character to obtain a character number set CharID corresponding to the character set CharSet; counting the labels of the characters in NewData to obtain a label set LabelSet, and numbering each label to obtain a label number set LabelID corresponding to the label set LabelSet;
3) Grouping the sentences of NewData by sentence length to obtain a data set GroupData comprising n groups of sentences;
4) Randomly extracting, without replacement, BatchSize sentences of data w and corresponding labels y from one group of GroupData, converting the extracted data w into fixed-length data BatchData via CharID, and converting the corresponding labels into fixed-length labels y_ID via LabelID;
5) Feeding the data BatchData and labels y_ID into the Bi-LSTM-based deep learning model and training the parameters of the deep learning model; if the loss value produced by the deep learning model meets the set condition or the maximum number of iterations N is reached, terminating the training of the deep learning model; otherwise, returning to step 4) to generate new data and continue training the deep learning model;
6) Converting the data PreData to be predicted into data PreMData matched with the deep learning model, and feeding PreMData into the trained deep learning model to obtain the named entity recognition result OrgResult.
Further, step 1) comprises:
1-1) separating data from labels in original mixed corpus data, and performing character level segmentation on each word of the data;
1-2) Labeling each character using the BMESO labeling scheme: if the Label corresponding to a word is Label, the character at the beginning of the word is labeled Label_B, a character in the middle of the word is labeled Label_M, and the character at the end of the word is labeled Label_E; if the word has only one character, it is labeled Label_S; if the word is unlabeled or does not belong to an entity label, it is labeled o.
Further, in step 3), let l_i denote the length of the i-th sentence; sentences satisfying |l_i - l_j| < δ are grouped together, where δ denotes the sentence length interval.
Further, step 4) comprises:
4-1) converting the extracted data w into numbers, namely converting each character in w into a corresponding number through the corresponding relation between CharSet and CharID;
4-2) converting the label y corresponding to the extracted data w into a number, namely converting each character in y into a corresponding number through the corresponding relation between LabelSet and LabelID;
4-3) Assuming the specified length is maxLen, when the length l of an extracted sentence satisfies l < maxLen, padding the end of the sentence with maxLen - l zeros to obtain BatchData, and padding the end of the label y corresponding to w with maxLen - l zeros to obtain y_ID.
Further, the step 5) of the Bi-LSTM-based deep learning model includes:
the Embedding layer is used for converting input character data into vectors;
the Bi-LSTM layer comprises a plurality of forward and reverse LSTM units and is used for extracting semantic relations among characters;
the Concatenate layer is used for splicing semantic information extracted by the forward LSTM unit and the reverse LSTM unit together;
the DropOut layer is used for preventing model overfitting;
and the SoftMax layer is used for classifying each character.
The Bi-LSTM-based mixed corpus named entity recognition method uses character-level rather than word-level vectors, so it is immune to the influence of word segmentation precision and avoids the unknown word problem; in addition, by using Bi-LSTM it can greatly improve named entity recognition precision compared with traditional algorithms; and because the mixed corpus is used directly for model training, the individual languages of the mixed corpus do not need to be detected and separated, finally achieving the goal of recognizing mixed corpora.
Drawings
FIG. 1 is a flow chart of the steps of the method of the present invention.
FIG. 2 is a schematic diagram of a deep learning model.
FIG. 3 is a schematic diagram of an LSTM unit.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, the present invention shall be described in further detail with reference to the following detailed description and accompanying drawings.
The invention discloses a Bi-LSTM-based mixed corpus named entity recognition method, which identifies named entities such as person names, place names, and organization names in corpus data that mixes multiple languages. The invention addresses three core problems: (1) the efficiency of mixed corpus recognition, (2) the accuracy of named entity recognition, and (3) the accuracy of unknown word recognition.
To solve the unknown word problem, the invention abandons the traditional vocabulary-list approach and instead adopts the idea of word vectors, with the vectors built from characters rather than words. To solve the low recognition precision of traditional named entity recognition, the invention adopts deep learning and uses a bidirectional long short-term memory neural network model (Bi-LSTM) to recognize named entities. To improve the efficiency of mixed corpus recognition and avoid language detection for each character, the mixed corpus is fed together into one deep learning model for training.
The flow of the mixed corpus named entity recognition method of the present invention is shown in fig. 1. The method is divided into two stages: a training phase and a prediction phase.
(I) Training stage (the left dashed box in the flow chart):
Step 1: Convert the labeled training mixed corpus data into character-level mixed corpus data.
Step 2: Train the deep learning model using the Adam gradient descent algorithm. Other algorithms, such as SGD (stochastic gradient descent), may also be used to train the deep learning model.
(II) Prediction stage (the right dashed box in the flow chart):
Step 1: Convert the unlabeled test mixed corpus data into character-level mixed corpus data.
Step 2: Predict using the deep learning model trained in the training stage.
The specific implementation of the two stages is described in detail below.
(I) Training stage:
step 1-1: the original corpus data OrgData is converted into the character-level corpus data NewData. The method specifically comprises the following steps:
step 1-1-1: separating the data and the labels in the original corpus data, and performing character level segmentation on each word of the data.
For example, suppose the raw data is "[张三]/pre [gradated]/o [from]/o [哈佛大学]/org [.]/o" (i.e., "Zhang San gradated from Harvard University."). After separating data and labels:
The data is: "[张三] [gradated] [from] [哈佛大学] [.]"
The labels are: "pre o o org o"
After segmenting the data at the character level, it becomes: "[张 三] [g r a d a t e d] [f r o m] [哈 佛 大 学] [.]"
Step 1-1-2: each character is marked using BMESO (Begin, middle, end, single, other) marking (Other marking may be used). And if the Label corresponding to a certain word is Label, the character positioned at the beginning of the word is labeled Label _ B, the character positioned in the middle of the word is labeled Label _ M, the word positioned at the end of the word is labeled Label _ E, if the word only has one character, the word is labeled Label _ S, and if the word is not labeled or does not belong to an entity Label, the word is labeled o.
For example, the labels of the characters of the data converted to character level in step 1-1-1 are: "pre_B pre_E o_B o_M o_M o_M o_M o_M o_M o_E o_B o_M o_M o_E org_B org_M org_M org_E o_S".
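For illustration only, steps 1-1-1 and 1-1-2 can be sketched in Python as follows (the function name, the tuple-based input format, and the example variable names are assumptions made here and are not part of the original disclosure; following the worked example above, the non-entity label "o" also receives the _B/_M/_E/_S suffixes):

```python
# Sketch of steps 1-1-1 / 1-1-2 (assumed input: list of (word, label) pairs).
def to_char_level(tagged_words):
    """Split each word into characters and expand its label with the BMES suffixes."""
    chars, char_labels = [], []
    for word, label in tagged_words:
        word_chars = list(word)                                         # character-level segmentation
        if len(word_chars) == 1:
            char_labels.append(label + "_S")                            # single-character word
        else:
            char_labels.append(label + "_B")                            # word-initial character
            char_labels.extend([label + "_M"] * (len(word_chars) - 2))  # word-internal characters
            char_labels.append(label + "_E")                            # word-final character
        chars.extend(word_chars)
    return chars, char_labels

# The worked example from step 1-1:
sentence = [("张三", "pre"), ("gradated", "o"), ("from", "o"), ("哈佛大学", "org"), (".", "o")]
chars, labels = to_char_level(sentence)
# labels -> ['pre_B', 'pre_E', 'o_B', 'o_M', ..., 'org_B', 'org_M', 'org_M', 'org_E', 'o_S']
```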
Step 1-2: the character set CharSet of NewData is counted, and in order to avoid encountering an unknown character in prediction, a special symbol 'null' is added in the CharSet. And numbering each character in an increasing order according to the natural number to obtain a character number set CharID corresponding to the character set CharSet.
For example, for the data in step 1-1, the counted CharSet is: {null, 张, 三, g, r, a, d, t, e, f, r, o, m, 哈, 佛, 大, 学, .} (the punctuation mark is also counted); CharID is: {null:0, 张:1, 三:2, g:3, ..., .:17}.
The label set LabelSet is counted similarly, each label is numbered, and the corresponding label number set LabelID is generated.
For example, for step 1-1, the counted LabelSet is: {pre_B, pre_M, pre_E, o_B, o_M, o_E, o_S, org_B, org_M, org_E}; LabelID is: {pre_B:0, pre_M:1, pre_E:2, o_B:3, o_M:4, o_E:5, o_S:6, org_B:7, org_M:8, org_E:9}.
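A minimal sketch of step 1-2 under the same assumptions (the helper name build_vocabs is illustrative; id 0 is reserved for the special symbol "null"):

```python
def build_vocabs(corpus_chars, corpus_labels):
    """Number each distinct character and label; id 0 is reserved for the special symbol 'null'."""
    char_id = {"null": 0}                             # 'null' absorbs unknown characters at prediction time
    label_id = {}
    for chars in corpus_chars:
        for ch in chars:
            char_id.setdefault(ch, len(char_id))      # increasing natural-number ids
    for labels in corpus_labels:
        for lab in labels:
            label_id.setdefault(lab, len(label_id))
    return char_id, label_id

char_id, label_id = build_vocabs([chars], [labels])   # chars/labels from the previous sketch
```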
Step 1-3: the NewData is divided by sentence length.
Let l_i denote the length of the i-th sentence; sentences satisfying |l_i - l_j| < δ are grouped together, where δ denotes the sentence length interval. Let the grouped data be GroupData, with n groups in total.
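The grouping of step 1-3 can be sketched as follows; bucketing lengths by intervals of δ is one straightforward reading of the grouping rule, not the only possible one:

```python
from collections import defaultdict

def group_by_length(sentences, delta):
    """Bucket sentences so that any two lengths within a group differ by less than delta."""
    groups = defaultdict(list)
    for sent in sentences:
        groups[len(sent) // delta].append(sent)   # same bucket => |l_i - l_j| < delta
    return list(groups.values())                  # GroupData: n groups of sentences
```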
Step 1-4: randomly and unreleased extracting BatchSize sentence data w and corresponding label y from a group of group data, converting the extracted data into fixed length data BatchData through CharID, and converting the corresponding label into fixed length label y through LabeliD ID
Converting the extracted data into fixed-length data BatchData through CharID, and converting the corresponding label into fixed-length label y through LabelID ID The specific method comprises the following steps:
step 1-4-1: and converting the extracted data w into numbers, namely converting each character in the w into a corresponding number through the corresponding relation between the CharSet and the CharID.
For example, after the data in step 1-1 is converted into CharID: [1,2,3,4,5,6,5,7,8,6,9,10,11,12,13,14,15,16,17]
Step 1-4-2: and converting the label y corresponding to the extracted data w into a number, namely converting each character in y into a corresponding number through the corresponding relation between LabelSet and LabelID.
For example, after the tag in step 1-1 is converted to LabelID: [0,2,3,4,4,4,4,4,4,5,3,4,4,5,7,8,8,9,6]
Step 1-4-3: assuming that the specified length is maxLen, when the length of the extracted data sentence is l < maxLen, the sentence is followed by maxLen-l 0 s to obtain BatchData. And supplementing maxLen-l 0 behind the label y corresponding to w to obtain y ID
Step 1-5: the data BatchData in the step 3 is sent into a deep learning model to generate a loss function Cost (y', y) ID )。
The deep learning model in the mixed corpus named entity recognition method is shown in fig. 2. Wherein the meaning of each part is explained as follows:
w_1 ~ w_n: intuitively, the characters of a sentence, i.e., the data w in step 1-4 (step 1-4 must be completed before the data is passed into the Embedding layer).
y_1 ~ y_n: intuitively, the predicted label corresponding to each character of a sentence, used together with the actual labels y_ID to compute the loss value.
Embedding layer: i.e., an embedding layer, i.e., a vectorization process, for converting input character data into vectors.
Bi-LSTM layer: contains several forward and backward LSTM units for extracting semantic relation between characters.
The Concatenate layer: i.e., the connection layer, for splicing together the semantic information extracted by the forward and backward LSTM.
DropOut layer: i.e. a filter layer, to prevent overfitting of the model.
SoftMax layer: i.e., a classification layer, for finally classifying each character.
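For illustration only, this layer stack can be sketched in PyTorch (the class name, hyperparameter values, and the choice of PyTorch are assumptions, not part of the original disclosure; nn.LSTM with bidirectional=True performs the forward and backward passes internally and concatenates their outputs, so it covers both the Bi-LSTM layer and the Concatenate layer):

```python
import torch
import torch.nn as nn

class BiLstmNer(nn.Module):
    """Illustrative Embedding -> Bi-LSTM -> Concatenate -> DropOut -> SoftMax stack."""
    def __init__(self, num_chars, num_labels, emb_dim=128, hidden_dim=128, dropout=0.5):
        super().__init__()
        self.embedding = nn.Embedding(num_chars, emb_dim, padding_idx=0)       # Embedding layer
        self.bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                              bidirectional=True)                              # Bi-LSTM (+ Concatenate)
        self.dropout = nn.Dropout(dropout)                                     # DropOut layer
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)                # feeds the SoftMax layer

    def forward(self, batch_ids):                 # batch_ids: (BatchSize, maxLen) character numbers
        h, _ = self.bilstm(self.embedding(batch_ids))
        logits = self.classifier(self.dropout(h))
        return torch.softmax(logits, dim=-1)      # SoftMax: one label distribution per character
```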
The specific steps for training the deep learning model are as follows:
step 1-5-1: the incoming data BatchData is vectorized at the Embedding layer, that is, each character in each piece of data in the data BatchData is converted into BatchVec through a vector table Char2 Vec.
Step 1-5-2: the BatchVec was transferred into the Bi-LSTM layer, detailed as: the first vector in each piece of data is passed into the first LSTM unit in the forward direction, the second vector in the forward direction is passed into the second LSTM unit, and so on. Meanwhile, the input of the ith LSTM unit in the forward direction also comprises the output of the ith-1 LSTM unit in the forward direction in addition to the ith vector in each piece of data. Then the first vector in each piece of data is transmitted into the first LSTM unit in the reverse direction, the second vector in the reverse direction is transmitted into the second LSTM unit, and so on. The input to the ith, also inverted LSTM unit contains the output of the ith-1, inverted LSTM unit in addition to the ith vector in each datum. Note that each LSTM unit does not receive only one vector at a time, but rather a number of BatchSize vectors.
A more detailed description of the LSTM unit is shown in fig. 3. The meaning of the symbols in fig. 3 is illustrated as follows:
w: characters in input data (e.g., a sentence).
C_{i-1}, C_i: the semantic information accumulated over the first i-1 characters and over the first i characters, respectively.
h_{i-1}, h_i: the feature information of the (i-1)-th character and of the i-th character, respectively.
f: the forget gate, which controls how much of the accumulated semantic information of the first i-1 characters (C_{i-1}) is retained.
i: the input gate, which controls how much of the input data (w and h_{i-1}) is retained.
o: the output gate, which controls how much feature information is output as the feature of the i-th character.
tanh: the hyperbolic tangent function.
u (tanh): together with the input gate i, controls how much of the i-th character's feature information is retained into C_i.
*, +: element-wise multiplication and element-wise addition, respectively.
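For reference, the standard LSTM update equations consistent with the gates in FIG. 3 are given below; σ denotes the sigmoid function, W and b denote the weight matrices and biases of each gate, and the subscript i indexes the i-th step. This notation is supplied here for clarity and is not reproduced from the original text.

f_i = σ(W_f · [h_{i-1}, w_i] + b_f)        (forget gate)
i_i = σ(W_i · [h_{i-1}, w_i] + b_i)        (input gate)
u_i = tanh(W_u · [h_{i-1}, w_i] + b_u)
C_i = f_i * C_{i-1} + i_i * u_i
o_i = σ(W_o · [h_{i-1}, w_i] + b_o)        (output gate)
h_i = o_i * tanh(C_i)

where * denotes element-wise multiplication, consistent with the symbols above.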
1-5-3: output of each LSTM cell to be forward and backwardAndinto a common layer, i.e. the output results of forward and backward LSTM units are spliced together to form a combined output result
1-5-4: passing the output of the configure layer into the Dropout layer, i.e., randomly passing h i Data hiding of middle eta (eta is more than or equal to 0 and less than or equal to 1)It is not allowed to continue to pass backwards.
1-5-5: the output of Dropout is passed into the SoftMax layer and the final loss values Cost (y', y) are generated ID ). The specific calculation formula is as follows:
Cost(y′,y ID )=-y ID log(y′)+(1-y ID ) log (1-y') (equation 1)
Where y' represents the output of BatchData after passing through the deep learning model classification layer (SoftMax layer), corresponding to y in FIG. 2 1 , 2 ,…, n 。y ID Representing the corresponding real label.
Step 1-6: parameters of the deep learning model are trained using the Adam gradient descent algorithm,
step 1-7: if Cost (y', y) generated by the deep learning model ID ) If the number of times of iteration is not reduced or the maximum number of times of iteration N is reached, terminating the training of the deep learning model; otherwise, jumping to the step 1-4.
Among them, cost i ′(y′,y ID ) Represents the loss value, cost (y', y) at the first i iterations ID ) Representing the loss value produced by the current iteration. The meaning of this equation is that if the difference between the current loss value and the average of the previous M loss values is less than the threshold θ, it is considered not to decrease any more.
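This stopping criterion can be sketched in Python as follows (the function name and the default values of M and θ are illustrative assumptions):

```python
def should_stop(loss_history, current_loss, M=10, theta=1e-4):
    """Stop when the current loss differs from the mean of the previous M losses by less than theta."""
    if len(loss_history) < M:
        return False
    prev_mean = sum(loss_history[-M:]) / M
    return abs(current_loss - prev_mean) < theta
```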
(II) Prediction stage:
step 2-1: the data PreData to be predicted is converted into a data format PreMData matched with the deep learning model. The method specifically comprises the following steps: the data to be predicted is converted into digital data at the character level.
Step 2-2: and (4) sending the PreMData into a deep learning model trained in a training stage, and obtaining a prediction result OrgResult.
The deep learning model is a deep learning model trained in the training stage, but in prediction, the parameter η =1 of the DropOut layer involved in the deep learning model indicates that no data is hidden, and all the data are transmitted to the next layer.
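Prediction can be sketched as follows, continuing the earlier PyTorch example (id2label, the inverse mapping of LabelID, and the trained model are assumptions carried over from the earlier sketches; model.eval() disables DropOut, which corresponds to hiding no data):

```python
import torch

def predict(model, pre_chars, char_id, id2label, max_len):
    """Convert characters to numbers, run the trained model, and map predictions back to labels."""
    ids = [char_id.get(ch, char_id["null"]) for ch in pre_chars]   # unknown characters map to 'null'
    ids = ids + [0] * (max_len - len(ids))                         # same zero-padding as in training
    model.eval()                                                   # disables DropOut: nothing is hidden
    with torch.no_grad():
        probs = model(torch.tensor([ids]))                         # (1, maxLen, num_labels)
    pred = probs.argmax(dim=-1)[0][:len(pre_chars)]                # drop the padded positions
    return [id2label[int(i)] for i in pred]

# id2label is assumed to be the inverse of LabelID, e.g. {0: "pre_B", 1: "pre_M", ...}
```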
The accuracy of the present invention on the test data is about 91.7%. In the prior art, a dictionary-based method has no way to handle unknown words, i.e., its recognition rate for unknown words is 0, and the accuracy of statistics-based methods or conventional artificial neural network-based methods is about 90%. Moreover, those figures are obtained on single-language corpora, whereas the figure for the present invention is computed on a multilingual mixed corpus. Compared with approaches that separate the languages and then process each language individually, the present invention processes them in a unified way, and at comparable precision its processing efficiency is greatly improved.
The above embodiments are only intended to illustrate the technical solution of the present invention and not to limit the same, and a person skilled in the art can modify the technical solution of the present invention or substitute the same without departing from the spirit and scope of the present invention, and the scope of the present invention should be determined by the claims.

Claims (10)

1. A Bi-LSTM-based mixed corpus named entity recognition method is characterized by comprising the following steps:
1) Converting the original mixed corpus data OrgData into character-level mixed corpus data NewData;
2) Counting the characters in NewData to obtain a character set CharSet, and numbering each character to obtain a character number set CharID corresponding to the character set CharSet; counting the labels of the characters in NewData to obtain a label set LabelSet, and numbering each label to obtain a label number set LabelID corresponding to the label set LabelSet;
3) Grouping the sentences of NewData by sentence length to obtain a data set GroupData comprising n groups of sentences;
4) Randomly extracting, without replacement, BatchSize sentences of data w and corresponding labels y from one group of GroupData, converting the extracted data w into fixed-length data BatchData via CharID, and converting the corresponding labels into fixed-length labels y_ID via LabelID;
5) Feeding the data BatchData and labels y_ID into the Bi-LSTM-based deep learning model and training the parameters of the deep learning model; if the loss value produced by the deep learning model meets the set condition or the maximum number of iterations N is reached, terminating the training of the deep learning model; otherwise, returning to step 4) to generate new data and continue training the deep learning model;
6) Converting the data PreData to be predicted into data PreMData matched with the deep learning model, and feeding PreMData into the trained deep learning model to obtain the named entity recognition result OrgResult.
2. The method of claim 1, wherein step 1) comprises:
1-1) separating data from tags in original mixed corpus data, and performing character-level segmentation on each word of the data;
1-2) Labeling each character using the BMESO labeling scheme: if the Label corresponding to a word is Label, the character at the beginning of the word is labeled Label_B, a character in the middle of the word is labeled Label_M, and the character at the end of the word is labeled Label_E; if the word has only one character, it is labeled Label_S; if the word is unlabeled or does not belong to an entity label, it is labeled o.
3. The method of claim 1, wherein in step 3), l_i denotes the length of the i-th sentence, and sentences satisfying |l_i - l_j| < δ are grouped together, where δ denotes the sentence length interval.
4. The method of claim 1, wherein step 4) comprises:
4-1) converting the extracted data w into numbers, namely converting each character in w into a corresponding number through the corresponding relation between CharSet and CharID;
4-2) converting the label y corresponding to the extracted data w into a number, namely converting each character in y into a corresponding number through the corresponding relation between the LabelSet and the LabelID;
4-3) Assuming the specified length is maxLen, when the length l of an extracted sentence satisfies l < maxLen, padding the end of the sentence with maxLen - l zeros to obtain BatchData, and padding the end of the label y corresponding to w with maxLen - l zeros to obtain y_ID.
5. The method of claim 1, wherein step 5) the Bi-LSTM based deep learning model comprises:
the Embedding layer is used for converting input character data into vectors;
the Bi-LSTM layer comprises a plurality of forward and reverse LSTM units and is used for extracting semantic relations among characters;
the Concatenate layer is used for splicing semantic information extracted by the forward LSTM unit and the reverse LSTM unit together;
the DropOut layer is used for preventing model overfitting;
and the SoftMax layer is used for classifying each character.
6. The method of claim 5, wherein the step of training the deep learning model of step 5) comprises:
5-1) vectorizing the incoming data BatchData at an Embedding layer, namely converting each character in each piece of data in the data BatchData into BatchVec through a vector table Char2 Vec;
5-2) transferring the BatchVec into the Bi-LSTM layer;
5-3) passing the outputs of the forward and backward LSTM units into the Concatenate layer;
5-4) passing the output of the Concatenate layer into the DropOut layer;
5-5) passing the output of the DropOut layer into the SoftMax layer and producing the final loss value.
7. The method of claim 6, wherein in step 5-2) the first vector of each piece of data is passed into the first forward LSTM unit, the second vector into the second forward LSTM unit, and so on, and the input of the i-th forward LSTM unit contains, in addition to the i-th vector of each piece of data, the output of the (i-1)-th forward LSTM unit; then the last vector of each piece of data is passed into the first backward LSTM unit, the next-to-last vector into the second backward LSTM unit, and so on, and the input of the i-th backward LSTM unit likewise contains, in addition to the corresponding vector of each piece of data, the output of the (i-1)-th backward LSTM unit; each LSTM unit receives BatchSize vectors at a time.
8. The method of claim 6, wherein the loss value is calculated by the formula:
Cost(y', y_ID) = -[ y_ID · log(y') + (1 - y_ID) · log(1 - y') ],
where y' represents the output of BatchData after passing through the SoftMax layer of the deep learning model, and y_ID represents the corresponding true labels.
9. The method of claim 8, wherein training of the deep learning model is stopped when the loss value Cost(y', y_ID) no longer decreases, and whether Cost(y', y_ID) no longer decreases is judged by the following formula: |Cost(y', y_ID) - (1/M)·Σ_{i=1}^{M} Cost'_i(y', y_ID)| < θ,
wherein Cost'_i(y', y_ID) denotes the loss value of the i-th of the previous M iterations, Cost(y', y_ID) denotes the loss value produced by the current iteration, and if the difference between the current loss value and the average of the previous M loss values is less than the threshold θ, the loss value is considered to no longer decrease.
10. The method of claim 1, wherein step 5) trains parameters of the deep learning model using an Adam gradient descent algorithm.
CN201710947002.5A 2017-10-12 2017-10-12 Bi-LSTM-based mixed corpus named entity recognition method Withdrawn CN107797988A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710947002.5A CN107797988A (en) 2017-10-12 2017-10-12 Bi-LSTM-based mixed corpus named entity recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710947002.5A CN107797988A (en) 2017-10-12 2017-10-12 Bi-LSTM-based mixed corpus named entity recognition method

Publications (1)

Publication Number Publication Date
CN107797988A true CN107797988A (en) 2018-03-13

Family

ID=61532993

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710947002.5A Withdrawn CN107797988A (en) 2017-10-12 2017-10-12 A kind of mixing language material name entity recognition method based on Bi LSTM

Country Status (1)

Country Link
CN (1) CN107797988A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140236578A1 (en) * 2013-02-15 2014-08-21 Nec Laboratories America, Inc. Question-Answering by Recursive Parse Tree Descent
US20140278951A1 (en) * 2013-03-15 2014-09-18 Avaya Inc. System and method for identifying and engaging collaboration opportunities
CN103853710A (en) * 2013-11-21 2014-06-11 北京理工大学 Coordinated training-based dual-language named entity identification method
CN106569998A (en) * 2016-10-27 2017-04-19 浙江大学 Text named entity recognition method based on Bi-LSTM, CNN and CRF
CN106598950A (en) * 2016-12-23 2017-04-26 东北大学 Method for recognizing named entity based on mixing stacking model

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ONUR KURU et al.: "CharNER: Character-Level Named Entity Recognition", The 26th International Conference on Computational Linguistics: Technical Papers *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109165279A (en) * 2018-09-06 2019-01-08 深圳和而泰数据资源与云技术有限公司 information extraction method and device
CN109308304A (en) * 2018-09-18 2019-02-05 深圳和而泰数据资源与云技术有限公司 Information extraction method and device
CN109284400A (en) * 2018-11-28 2019-01-29 电子科技大学 A kind of name entity recognition method based on Lattice LSTM and language model
CN111222335A (en) * 2019-11-27 2020-06-02 上海眼控科技股份有限公司 Corpus correction method and device, computer equipment and computer-readable storage medium
CN112380839A (en) * 2020-11-13 2021-02-19 沈阳东软智能医疗科技研究院有限公司 Wrongly written character detection method, device and equipment

Similar Documents

Publication Publication Date Title
CN107797987B (en) Bi-LSTM-CNN-based mixed corpus named entity identification method
CN107977353A (en) A kind of mixing language material name entity recognition method based on LSTM-CNN
CN109800310B (en) Electric power operation and maintenance text analysis method based on structured expression
CN107797988A (en) A kind of mixing language material name entity recognition method based on Bi LSTM
CN111931506B (en) Entity relationship extraction method based on graph information enhancement
CN108763510A (en) Intension recognizing method, device, equipment and storage medium
CN107908614A (en) A kind of name entity recognition method based on Bi LSTM
CN110362819B (en) Text emotion analysis method based on convolutional neural network
CN109684642B (en) Abstract extraction method combining page parsing rule and NLP text vectorization
CN107885721A (en) A kind of name entity recognition method based on LSTM
CN107832289A (en) A kind of name entity recognition method based on LSTM CNN
CN111709242B (en) Chinese punctuation mark adding method based on named entity recognition
CN108874896B (en) Humor identification method based on neural network and humor characteristics
CN107967251A (en) A kind of name entity recognition method based on Bi-LSTM-CNN
CN110263325A (en) Chinese automatic word-cut
CN110472548B (en) Video continuous sign language recognition method and system based on grammar classifier
CN107992468A (en) A kind of mixing language material name entity recognition method based on LSTM
CN114282527A (en) Multi-language text detection and correction method, system, electronic device and storage medium
CN113178193A (en) Chinese self-defined awakening and Internet of things interaction method based on intelligent voice chip
CN113590810B (en) Abstract generation model training method, abstract generation device and electronic equipment
CN108536781B (en) Social network emotion focus mining method and system
CN116502628A (en) Multi-stage fusion text error correction method for government affair field based on knowledge graph
CN109543036A (en) Text Clustering Method based on semantic similarity
CN116029305A (en) Chinese attribute-level emotion analysis method, system, equipment and medium based on multitask learning
CN107943783A (en) A kind of segmenting method based on LSTM CNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20180313