CN113723102A - Named entity recognition method and device, electronic equipment and storage medium - Google Patents

Named entity recognition method and device, electronic equipment and storage medium

Info

Publication number
CN113723102A
Authority
CN
China
Prior art keywords
data
model
meaning
sample
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110738499.6A
Other languages
Chinese (zh)
Other versions
CN113723102B (en
Inventor
孙思
曹锋铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An International Smart City Technology Co Ltd
Original Assignee
Ping An International Smart City Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An International Smart City Technology Co Ltd filed Critical Ping An International Smart City Technology Co Ltd
Priority to CN202110738499.6A priority Critical patent/CN113723102B/en
Publication of CN113723102A publication Critical patent/CN113723102A/en
Application granted granted Critical
Publication of CN113723102B publication Critical patent/CN113723102B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295Named entity recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Machine Translation (AREA)

Abstract

The invention relates to the field of artificial intelligence and provides a named entity recognition method. The method first packages acquired specification sentences to form a data set, then traverses the data set to form sample data, performs entity splicing processing on the sample data to form standard data, and inputs the standard data into a data enhancement model to obtain a convergence model; the data enhancement model is obtained by performing an attention-mechanism transformation on a classical NLP model. The text to be labeled is then input into the convergence model to obtain entity naming information based on the enhanced information. Because vocabulary information, n-gram information and entity information from a knowledge base are combined when the attention mechanism of the classical model is modified, more prior information is available to improve the precision of named entity labeling, and the method avoids the defects of the dynamic lattice structure used with lattice-enhanced information, namely that it cannot be parallelized and cannot be transplanted to other non-sequential network structures.

Description

Named entity recognition method and device, electronic equipment and storage medium
Technical Field
The invention relates to the field of artificial intelligence, in particular to a method and a device for recognizing a named entity, electronic equipment and a computer-readable storage medium.
Background
Named entity recognition (NER) is one of the most basic tasks in natural language processing and underlies reading comprehension, dialog systems, machine translation, and the like. The mainstream existing solutions to the NER task are dictionaries and models, where the models mostly adopt word-based sequence labeling models such as lstm + crf and bert + crf. However, these schemes use no vocabulary information in the NER task, which is a loss for the information capture of the model. Models that do utilize vocabulary information, such as lattice-lstm, have the following drawbacks: the computation speed is slow, because the words in each sample differ under the dynamic structure and batch matrix computation is impossible; and information loss occurs easily, because each character in the lattice can only obtain the information of words that end at that character, and the model is restricted to timing networks such as lstm and cannot be transplanted.
Therefore, there is a need for a method and an apparatus for identifying a named entity, which can improve the accuracy of entity labeling and can perform information migration.
Disclosure of Invention
The invention provides a named entity recognition method, a named entity recognition apparatus, an electronic device and a computer-readable storage medium, which can improve entity labeling precision and support information transplantation, with the main aim of solving the problems that existing named entity recognition models are slow in computation and prone to information loss.
In order to achieve the above object, the present invention provides a named entity identification method, including:
packaging the acquired specification sentences to form a data set;
traversing to obtain the data set to form sample data, and performing entity splicing processing on the sample data to form standard data;
inputting the standard data into a data enhancement model to obtain a convergence model; the data enhancement model is obtained by performing attention mechanism transformation on a classical model based on NLP;
and inputting the text to be marked into the convergence model to acquire entity naming information based on the enhanced information.
Optionally, the packaging the obtained specification statements to form a data set includes:
obtaining a sample statement;
extracting keywords from the sample sentence to obtain keywords;
acquiring the meaning of the keyword for pre-labeling the keyword;
mapping the keyword meanings with the sample sentences to obtain specification sentences;
packing the specification sentences to form a sentence packet;
and performing data conversion on the statement packet to form a data set.
Optionally, the traversing retrieves the dataset to form sample data, comprising:
performing traversal reading on the data set to obtain original data;
performing code compilation on the original data to form code data;
performing word segmentation processing on the code data to obtain word segments and corresponding positions of the word segments;
performing secondary segmentation on the word segmentation to obtain a corresponding position of a word group and the word group;
uploading the word segmentation, the corresponding position of the word group and the corresponding position of the word group to a knowledge base to form knowledge data;
numbering the knowledge data to form sample data.
Optionally, the performing entity splicing processing on the sample data to form standard data includes:
calling a keyword meaning corresponding to the sample data according to the sample data;
performing extended splicing on the sample data based on the keyword meaning to form guess meaning; wherein the guessing meaning includes a first meaning, a second meaning, and a third meaning;
calling the keyword meanings of the adjacent sample data of the sample data to form first-order keyword meanings, and performing expansion splicing on the first-order keyword meanings to form first-order guess meanings;
acquiring next sample data of the adjacent sample data to form a second-order keyword meaning, and performing expansion splicing on the second-order keyword meaning to form a second-order guess meaning;
selecting a guess meaning from the first meaning, the second meaning and the third meaning based on the first order guess meaning and the second order guess meaning according to a semantic harmony algorithm as a sample semantic meaning of the sample data;
and splicing the sample semantics in the sample data to finish entity splicing processing to form standard data.
Optionally, the inputting the standard data into a data enhancement model to obtain a convergence model includes:
forming a basic model by adopting an NLP classical model structure;
modifying the attention mechanism of the base model to obtain a data enhancement model;
inputting the standard data into the data enhancement model so that the data enhancement model obtains a prediction label according to the knowledge data in the standard data and calculates loss data according to the prediction label and the sample semantics;
and feeding the loss data back to the data enhancement model to adjust the parameters of the data enhancement model according to the loss data until the data enhancement model converges, obtaining a convergence model.
Optionally, the modifying the attention mechanism of the base model to obtain a data enhancement model includes:
acquiring the relative position between two input spans according to the input span positions of the basic model;
obtaining a relative matrix according to the relative position;
fusing the relative positions of the two input spans based on the relative matrix to obtain a fusion matrix;
calculating the original self-attention based on the fusion matrix;
computing a content-content attention mechanism based on the original self-attention and concurrently computing a content-location attention mechanism to form a data enhancement layer;
and fusing and unifying the data enhancement layer and the base model to form a data enhancement model.
Optionally, the inputting the text to be labeled into the convergence model to obtain entity naming information based on the enhanced information includes:
inputting a text to be labeled into the convergence model so that the text to be labeled generates basic data through the basic model;
performing enhancement processing on the basic data through the data enhancement layer to form data enhancement information;
and acquiring entity naming information aiming at the text to be labeled based on the data enhancement information.
In order to solve the above problem, the present invention further provides a named entity recognition apparatus, including:
the data packing unit is used for packing the acquired specification sentences to form a data set;
the entity splicing unit is used for traversing and acquiring the data set to form sample data and carrying out entity splicing processing on the sample data to form standard data;
the model transformation unit is used for inputting the standard data into a data enhancement model to obtain a convergence model; the data enhancement model is obtained by performing attention mechanism transformation on a classical model based on NLP;
and the entity naming unit is used for inputting the text to be labeled into the convergence model so as to obtain entity naming information based on the enhanced information.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one instruction; and
a processor that executes the instructions stored in the memory to implement the steps of the named entity recognition method described above.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, which stores at least one instruction, wherein the at least one instruction is executed by a processor in an electronic device to implement the named entity identifying method described above.
The embodiment of the invention first packages the acquired specification sentences to form a data set, then traverses the data set to form sample data, performs entity splicing processing on the sample data to form standard data, and then inputs the standard data into a data enhancement model to obtain a convergence model; the data enhancement model is obtained by performing an attention-mechanism transformation on a classical NLP model. The text to be labeled is then input into the convergence model to obtain entity naming information based on the enhanced information. Vocabulary information, n-gram information and entity information from a knowledge base are combined when the attention mechanism of the classical model is modified, so that more prior information is available to improve the precision of named entity labeling, and the enhanced information is exploited beyond the classical model's use of vocabulary alone, thereby avoiding the defects of the dynamic lattice structure used with lattice-enhanced information, namely that it cannot be parallelized and cannot be transplanted to other non-sequential network structures.
Drawings
Fig. 1 is a schematic flowchart of a named entity identification method according to an embodiment of the present invention;
fig. 2 is a schematic block diagram of a named entity recognition apparatus according to an embodiment of the present invention;
fig. 3 is a schematic diagram of an internal structure of an electronic device implementing a named entity recognition method according to an embodiment of the present invention;
the implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Most models adopt word-based sequence labeling models such as lstm + crf and bert + crf, but this scheme uses no vocabulary information in the NER task, which is a loss for the information capture of the model; the latest models that do use vocabulary information, such as lattice-lstm, have the following defects:
1. The computation speed is slow: the dynamic structure (the words differ in each sample) prevents batch matrix computation, so the computation speed is low;
2. Information loss: each character in the lattice-lstm model can only obtain the information of words ending at that character; this information cannot be migrated, and the model is restricted to timing networks such as lstm.
In order to solve the above problems, the present invention provides a named entity recognition method. Fig. 1 is a schematic flow chart of a named entity identification method according to an embodiment of the present invention. The method may be performed by an apparatus, which may be implemented by software and/or hardware.
In this embodiment, the named entity identifying method includes:
s1: packaging the acquired specification sentences to form a data set;
s2: traversing to acquire the data set to form sample data, and performing entity splicing processing on the sample data to form standard data;
s3: inputting the standard data into a data enhancement model to obtain a convergence model; the data enhancement model is a model obtained by performing attention mechanism transformation on a classical model based on NLP;
s4: and inputting the text to be marked into the convergence model to acquire entity naming information based on the enhanced information.
In the embodiment shown in fig. 1, step S1 is a process of packaging the obtained specification sentences to form a data set, wherein the step of packaging the obtained specification sentences to form a data set includes:
s11: obtaining a sample statement;
s12: extracting keywords from the sample sentence to obtain keywords;
s13: acquiring the meaning of the keyword pre-labeled to the keyword;
s14: mapping the keyword meanings with the sample sentences to obtain specification sentences;
s15: packing the specification sentences to form a sentence packet;
s16: performing data conversion on the statement packet to form a data set;
specifically, the sample data may be a series of commonly used sentences or phrases, in this embodiment, the city construction system is taken as an example, and the sample sentences may be sentences related to city construction, smart cities, and smart spaces;
the steps S12 and S13 are processes of extracting keywords from the sample data, that is, obtaining the keywords in the sample sentence, for example, if the sample sentence is "the sun is not hotter than the sun on the first building today", then step S12 is to extract the time keyword "today", the place keyword "the first building" and the form-factor keyword "hotter", and the keywords are noted with the meaning represented by step S13, and the specific labeling manner is not limited, in this embodiment, the meaning is specific meanings such as time, place, person, cause, pass, result, form and the like, the pre-labeling is to label in advance, the labeling is to store the meaning of the keywords together with the keywords, in other words, to describe the meaning of the keywords in a text language, and to store the described sentences and the keywords together, and the specific labeling manner can be manually, the meaning of the keyword can be marked and marked by any tool, and the description is omitted;
step S14 is a process of mapping the meaning of the keyword with the sample data, in short, step S13 can be used as the mapping of the meaning of the keyword with the keyword, step S14 is to establish the mapping of the meaning of the keyword with the sample data, thereby forming the corresponding relationship between the meaning of the keyword and the sample data, i.e., establishing the corresponding relationship between the sample data and the meaning of the keyword, and input the corresponding relationship into the convolutional neural network for repeated training, so that the trained neural network can automatically obtain the meaning of the keyword corresponding to the sample data according to the sample data, and step S14 lays the foundation for forming the standard data in the later stage and inputting the standard data into a data enhancement model to obtain a convergence model.
In the embodiment shown in fig. 1, step S2 includes S21: traversing to acquire the data set to form sample data; s22: the process of entity splicing processing is carried out on the sample data to form standard data; wherein, the step of obtaining the data set to form sample data in a traversal way comprises:
s211: performing traversal reading on the data set to obtain original data;
s212: performing code compilation on the original data to form code data;
s213: performing word segmentation processing on the code data to acquire a word segmentation and a corresponding position of the word segmentation;
s214: carrying out secondary segmentation on the word segmentation to obtain a word group and a corresponding position of the word group;
s215: uploading the word segmentation, the corresponding position of the word group and the corresponding position of the word group to a knowledge base to form knowledge data;
s216: the knowledge data is numbered to form sample data.
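Steps S211 to S216 can be sketched as follows; the character-code encoding, the whitespace word segmentation and the list-backed "knowledge base" are all toy stand-ins for the unspecified tools:

```python
def build_sample_data(dataset):
    """Traverse a data set into numbered sample records (steps S211-S216 sketch)."""
    knowledge_base = []
    for raw in dataset:                       # S211: traversal read of the data set
        codes = [ord(ch) for ch in raw]       # S212: code compilation (toy encoding)
        words = raw.split()                   # S213: word segmentation (toy splitter)
        positions = []
        cursor = 0
        for w in words:
            start = raw.index(w, cursor)      # position of the word in the sentence
            positions.append((start, start + len(w) - 1))
            cursor = start + len(w)
        # S215: upload words and positions to the knowledge base.
        knowledge_base.append({"codes": codes, "words": words, "positions": positions})
    # S216: number the knowledge data to form sample data.
    return {i: rec for i, rec in enumerate(knowledge_base)}

samples = build_sample_data(["smart city space", "city construction"])
```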
Carrying out entity splicing processing on the sample data to form standard data, wherein the entity splicing processing comprises the following steps:
s221: calling a keyword meaning corresponding to the sample data according to the sample data;
s222: performing extended splicing on the sample data based on the keyword meaning to form guess meaning; wherein the guessing meaning includes a first meaning, a second meaning, and a third meaning;
s223: calling the keyword meanings of the adjacent sample data of the sample data to form first-order keyword meanings, and performing expansion splicing on the first-order keyword meanings to form first-order guess meanings; wherein the first order guess meaning includes a first order first meaning, a second order meaning, and a third order meaning;
s224: acquiring next sample data of the adjacent sample data to form a second-order keyword meaning, and performing expansion splicing on the second-order keyword meaning to form a second-order guess meaning; wherein the second order guessing meaning comprises a second order first meaning, a second order second meaning, and a second order third meaning;
s225: selecting a guess meaning from the first meaning, the second meaning and the third meaning based on the first order guess meaning and the second order guess meaning according to a semantic harmony algorithm as a sample semantic meaning of the sample data;
s226: splicing the sample semantics in the sample data to finish entity splicing processing to form standard data;
In this way the standard data are formed. It should be noted that after one sample datum generates its standard data, the adjacent sample data and the next sample data mentioned in steps S223 and S224 are in turn treated as the sample data, so as to obtain the sample semantics of the adjacent sample data and of the next sample data.
Specifically, in step S213, the code data is segmented by the word segmentation tool to obtain the segmented words {w1, w2, ..., wm} of the sentence si and their corresponding positions {p1, p2, ..., pm}, where pi is the position of the last character of the i-th word in si.
In step S214, the secondary segmentation is not merely a single second pass but a second type of segmentation, namely segmentation into phrases, and how many phrase segmentations are performed is not specifically limited. Concretely, the first secondary segmentation divides the sentence into 2-grams to obtain phrase fragments and their corresponding positions, i.e. each fragment together with the positions of its first and last characters. In this embodiment, the secondary segmentation further includes up to an (n-1)-th segmentation, that is, an n-gram segmentation; the segmented fragments are matched in the knowledge base to obtain the matched entities, and their positions are recorded at the same time, thus obtaining the phrases and the corresponding positions of the phrases;
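The 2-gram and n-gram segmentation with positions can be sketched as below; representing the knowledge base as a set and matching by membership is an assumption for illustration:

```python
def ngram_spans(sentence, n, knowledge_base=None):
    """Enumerate n-gram fragments with their (head, tail) character positions.

    Each fragment keeps the index of its first and last character in the
    sentence. If a knowledge base is given, only fragments matching a known
    entity are kept (toy set-membership match).
    """
    spans = []
    for start in range(len(sentence) - n + 1):
        fragment = sentence[start:start + n]
        if knowledge_base is None or fragment in knowledge_base:
            spans.append((fragment, start, start + n - 1))
    return spans

bigrams = ngram_spans("smartcity", 2)                        # plain 2-gram pass
matched = ngram_spans("smartcity", 4, knowledge_base={"city"})  # n-gram + KB match
```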
steps S221 to S223 are a process of obtaining sample data, next data of the sample data, and a guessing meaning of the next data of the sample data, and the sample semantics are obtained for the adjacent sample data and the next sample data of the adjacent sample data in the steps S223 and S224 at the auxiliary sample data, then the "adjacent sample data" is used as the sample data, the "next sample data of the adjacent sample data" is used as the adjacent sample data, and so on, so as to obtain the sample semantics of each sample data.
In the embodiment shown in fig. 1, step S3 is to input the standard data into a data enhancement model to obtain a convergence model; the data enhancement model is a model obtained by performing attention mechanism transformation on a classical model based on NLP; the step of inputting the standard data into a data enhancement model to obtain a convergence model, comprising:
s31: forming a basic model by adopting an NLP classical model structure; wherein the basic model is a transformer structure;
s32: modifying the attention mechanism of the base model to obtain a data enhancement model;
s33: inputting the standard data into the data enhancement model so that the data enhancement model obtains a prediction label according to the knowledge data in the standard data and calculates loss data according to the prediction label and the sample semantics;
s34: and feeding the loss data back to the data enhancement model to adjust the parameters of the data enhancement model according to the loss data until the data enhancement model converges, obtaining a convergence model.
Wherein modifying the attention mechanism of the base model to obtain a data enhancement model comprises:
s321: acquiring the relative position between two input spans according to the input span positions of the basic model;
s322: obtaining a relative matrix according to the relative position;
s323: fusing the relative positions of the two input spans based on the relative matrix to obtain a fusion matrix;
s324: calculating the original self-attention based on the fusion matrix;
s325: computing a content-content attention mechanism based on the original self-attention and concurrently computing a content-location attention mechanism to form a data enhancement layer;
s326: and fusing and unifying the data enhancement layer and the base model to form a data enhancement model.
Specifically, in step S31, the basic model P is:
p = softmax(transformer([sinput, li-sta, li-end]))
y′ = argmax(p)
The loss adopts BEloss:
loss = BEloss(y, y′)
When the basic model is trained, the loss is fed back to update the model parameters, and the model training is completed after convergence;
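The train-until-convergence loop around the basic model can be sketched as follows, with a plain linear-softmax classifier standing in for the transformer and cross-entropy standing in for BEloss (both are stand-ins, not the patent's exact model):

```python
import numpy as np

rng = np.random.default_rng(1)

def train_until_convergence(X, y, n_labels, lr=0.5, tol=1e-4, max_steps=2000):
    """Gradient-descent loop: predict with softmax, compute loss, feed it back."""
    W = np.zeros((X.shape[1], n_labels))
    prev = np.inf
    for _ in range(max_steps):
        logits = X @ W
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)               # p = softmax(model(X))
        loss = -np.log(p[np.arange(len(y)), y]).mean()  # cross-entropy stand-in
        if abs(prev - loss) < tol:                      # converged: stop training
            break
        prev = loss
        grad = p.copy()
        grad[np.arange(len(y)), y] -= 1.0
        W -= lr * (X.T @ grad) / len(y)                 # feed loss back, update params
    return W, loss

X = rng.normal(size=(40, 3))
y = (X[:, 0] > 0).astype(int)   # toy separable labels
W, final_loss = train_until_convergence(X, y, n_labels=2)
```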
in step S32, the process of obtaining the data enhancement model is a process of modifying a transform structure, that is, based on the principle of lattice-lstm (note: grid long and short memory time series network, a deep learning network structure), on the basis of the principle of lattice (note: a network structure that expands and connects words in sentences), the original words are expanded by lattice information, then n-gram information and entity knowledge base information are combined to be used as model input, a transform basic structure is adopted on the model structure, and an attribute modification for vocabulary enhancement is designed in the transform to better utilize the enhanced vocabulary information, thereby labeling the named entity.
In this embodiment, the specific implementation is as follows. First, the data input is
input = [sinput, lstart, lend]
The input is encoded by embedding:
Oemb = Embedding(sinput) + PosEmbedding(lstart, lend)
to obtain Oemb = [e1, e2, ..., ek], a k x d output matrix, where k is the length of the sentence and d is the length of the embedding vector.
The conventional transformer structure is
[Q, K, V] = [eq, ek, ev]
A = QK^T / sqrt(d)
Att(A, V) = softmax(A)V
where eq, ek and ev are each the embedding results of certain spans in Oemb.
In this embodiment, the relative positions are first calculated from the span positions in the input: dss(i,j) is the distance between the start positions of the i-th and the j-th span, and dst(i,j) is the distance between the start position of the i-th span and the end position of the j-th span; computing all four combinations of start and end positions yields 4 matrices [dss, dst, dtt, dts].
The relative positions of the spans at the two positions are then fused: the four distances are concatenated and projected into a relative-position encoding Rij.
In addition to the content-content attention computed on the basis of the original self-attention, the content-position attention is computed:
A*ij = Wq^T Ei^T Ej Wk,E + Wq^T Ei^T Rij Wk,R + u^T Ej Wk,E + v^T Rij Wk,R
where the first two terms are the content-to-content and the content-to-position attention, and u^T Ej Wk,E + v^T Rij Wk,R are their bias terms.
The rest is similar to self-attention:
Att(A*, V) = softmax(A*)V
This then passes through a conventional transformer module:
Otemp = Norm(Att(A*, V) + Oemb)
y~ = argmax(softmax(Norm(Otemp + FFN(Otemp))))
The loss between the obtained predicted label and the real label is calculated and fed back to adjust the model parameters, and the model is obtained after convergence:
loss = BEloss(y, y′).
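A toy numerical sketch of the modified attention above, with a single head, random parameters standing in for trained ones, and a ReLU-projected fusion of the four distance matrices (the exact fusion is an assumption where the original formula is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_position_attention(emb, head, tail, d):
    """Fused relative-position self-attention over spans (single-head sketch).

    emb  -- (k, d) span embeddings Oemb
    head -- (k,) start position of each span
    tail -- (k,) end position of each span
    """
    # Four relative matrices between spans i and j: start-start, start-end, etc.
    d_ss = head[:, None] - head[None, :]
    d_st = head[:, None] - tail[None, :]
    d_ts = tail[:, None] - head[None, :]
    d_tt = tail[:, None] - tail[None, :]
    # Fuse the four relative positions into one encoding R_ij (linear + ReLU).
    rel = np.stack([d_ss, d_st, d_ts, d_tt], axis=-1).astype(float)  # (k, k, 4)
    W_r = rng.normal(size=(4, d))
    R = np.maximum(rel @ W_r, 0.0)                                   # (k, k, d)
    # Content-content and content-position attention with bias terms u, v.
    W_q, W_kE, W_kR = (rng.normal(size=(d, d)) for _ in range(3))
    u, v = rng.normal(size=d), rng.normal(size=d)
    q = emb @ W_q
    A = (q @ (emb @ W_kE).T                        # content-to-content term
         + np.einsum("id,ijd->ij", q, R @ W_kR)    # content-to-position term
         + ((emb @ W_kE) @ u)[None, :]             # bias: u^T Ej Wk,E
         + np.einsum("ijd,d->ij", R @ W_kR, v))    # bias: v^T Rij Wk,R
    A = A / np.sqrt(d)
    A = np.exp(A - A.max(axis=-1, keepdims=True))
    A = A / A.sum(axis=-1, keepdims=True)          # row-wise softmax
    return A @ emb                                 # Att(A*, V) with V = emb

out = relative_position_attention(rng.normal(size=(5, 8)),
                                  np.array([0, 1, 2, 0, 2]),
                                  np.array([0, 1, 2, 1, 3]), d=8)
```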
In the embodiment shown in fig. 1, step S4 is to input the text to be labeled into the convergence model to obtain entity naming information based on the enhanced information; the step of inputting the text to be labeled into the convergence model to obtain the entity naming information based on the enhanced information comprises the following steps:
s41: inputting a text to be labeled into the convergence model so that the text to be labeled generates basic data through the basic model;
s42: enhancing the basic data through the data enhancement layer to form data enhancement information;
s43: and acquiring entity naming information aiming at the text to be labeled based on the data enhancement information.
Specifically, step S41 generates the basic data with the basic model obtained by training. In the conventional technology, the entity naming information is generated directly by the basic model, but the accuracy is not high; therefore this embodiment further includes step S42, in which the data enhancement layer obtained from the modification in step S32 performs data enhancement processing on the basic data. That is, the attention-modified data enhancement model can better utilize the enhanced vocabulary information when labeling the named entities, thereby improving the precision and accuracy of entity name labeling;
In step S43, the entity naming information in this embodiment is label information; that is, the convergence model automatically generates predictive labels for the text to be labeled, and the precision of the predictive labels is close to exact. By combining the vocabulary information, the n-gram information and the entity information of the knowledge base, more prior information is available to improve the precision of named entity labeling, and the enhanced information is exploited beyond the conventional NER task's use of vocabulary alone, thereby avoiding the defects of the dynamic lattice structure used with lattice-enhanced information, namely that it cannot be parallelized and cannot be transplanted to other non-sequential network structures.
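The patent does not fix a label scheme; assuming the predicted labels are BIO tags, turning a label sequence into entity naming information is a straightforward decode:

```python
def decode_bio(tokens, tags):
    """Collect (entity_text, entity_type, start, end) spans from BIO tags."""
    entities, current = [], None
    for i, (tok, tag) in enumerate(zip(tokens, tags)):
        if tag.startswith("B-"):                 # a new entity begins
            if current:
                entities.append(current)
            current = [tok, tag[2:], i, i]
        elif tag.startswith("I-") and current and tag[2:] == current[1]:
            current[0] += tok                    # extend the running entity
            current[3] = i
        else:                                    # "O" or inconsistent tag
            if current:
                entities.append(current)
            current = None
    if current:
        entities.append(current)
    return [tuple(e) for e in entities]

ents = decode_bio(["今", "天", "第", "一", "高", "楼"],
                  ["B-TIME", "I-TIME", "B-LOC", "I-LOC", "I-LOC", "I-LOC"])
```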
The named entity recognition method provided by the invention first packages the acquired specification sentences to form a data set, then traverses the data set to form sample data, performs entity splicing processing on the sample data to form standard data, and then inputs the standard data into a data enhancement model to obtain a convergence model; the data enhancement model is obtained by performing an attention-mechanism transformation on a classical NLP model. The text to be labeled is then input into the convergence model to obtain entity naming information based on the enhanced information. Vocabulary information, n-gram information and entity information from a knowledge base are combined when the attention mechanism of the classical model is modified, so that more prior information improves the precision of named entity labeling, and the enhanced information is exploited beyond the classical model's use of vocabulary alone, thereby avoiding the defects of the dynamic lattice structure used with lattice-enhanced information, namely that it cannot be parallelized and cannot be transplanted to other non-sequential network structures.
As described above, in the embodiment shown in fig. 1, the named entity recognition method provided by the invention has the following advantages. First, the code data is segmented to obtain the segmented words and their positions, the segmented words are secondarily segmented to obtain the phrases and their positions, and both are uploaded to the knowledge base to form knowledge data, so that a complete phrase-semantics mapping is formed in the data, laying the foundation for accurate labeling. Second, a guess meaning is selected from the first, second and third meanings as the sample semantics of the sample data according to the semantic harmony algorithm, based on the first-order and second-order guess meanings; the adjacent sample data and the next sample data are in turn used as the sample data, and the cycle repeats, so that the sample semantics are determined according to the semantics of the neighboring data and the sample semantics of every sample datum are determined accurately. Third, the original self-attention is calculated on the basis of the fusion matrix, the content-content and content-position attention mechanisms are calculated to form a data enhancement layer, and the data enhancement layer is fused and unified with the basic model to form a data enhancement model, thereby improving the labeling precision of the overall convergence model.
As shown in fig. 2, the present invention provides a named entity recognition apparatus 100, which can be installed in an electronic device. According to the implemented functions, the named entity recognition apparatus 100 may include a data packing unit 101, an entity splicing unit 102, a model modification unit 103, and an entity naming unit 104. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
a data packing unit 101 for packing the acquired specification sentences to form a data set;
the entity splicing unit 102 is configured to traverse the data set to form sample data, and perform entity splicing processing on the sample data to form standard data;
a model modification unit 103, configured to input the standard data into a data enhancement model to obtain a convergence model; the data enhancement model is obtained by performing attention mechanism transformation on a classical model based on NLP;
and the entity naming unit 104 is used for inputting the text to be labeled into the convergence model so as to obtain entity naming information based on the enhanced information.
The step of packing the obtained specification statements by the data packing unit 101 to form a data set includes:
obtaining a sample statement;
extracting keywords from the sample sentence to obtain keywords;
acquiring the meaning of the keyword pre-labeled to the keyword;
mapping the keyword meanings with the sample sentences to obtain specification sentences;
packing the specification sentences to form a sentence packet;
and performing data conversion on the statement packet to form a data set.
Specifically, the sample sentences may be a series of commonly used sentences or phrases. This embodiment takes a city construction system as an example, so the sample sentences may be sentences related to city construction, smart cities and smart spaces.
The keyword extraction process obtains the keywords in the sample sentence. For example, if the sample sentence is "the sun at the first building is hotter today", extraction yields the time keyword "today", the location keyword "first building" and the form keyword "hotter", and each keyword is given the meaning it represents; the specific labeling manner is not limited.
The entity splicing unit 102 is configured to traverse the acquired data set to form sample data, and perform entity splicing processing on the sample data to form standard data; wherein, the step of obtaining the data set to form sample data in a traversal way comprises:
performing traversal reading on the data set to obtain original data;
performing code compilation on the original data to form code data;
performing word segmentation processing on the code data to acquire a word segmentation and a corresponding position of the word segmentation;
carrying out secondary segmentation on the word segmentation to obtain a word group and a corresponding position of the word group;
uploading the word segmentation, the corresponding position of the word group and the corresponding position of the word group to a knowledge base to form knowledge data;
numbering the knowledge data to form sample data;
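The traversal, segmentation and numbering steps above can be sketched as follows. Whitespace splitting stands in for a real word segmenter, and the record shapes (token/phrase lists with positions, an `id` as the number) are assumptions for illustration.

```python
def segment(code_data):
    """First-pass word segmentation: (token, start_position) pairs.
    Whitespace splitting is a stand-in for a real segmenter."""
    tokens, cursor = [], 0
    for tok in code_data.split():
        start = code_data.index(tok, cursor)
        tokens.append((tok, start))
        cursor = start + len(tok)
    return tokens

def secondary_segment(tokens):
    """Second-pass segmentation: adjacent tokens joined into phrases,
    each phrase keeping the position of its first token."""
    return [(a + " " + b, pos_a)
            for (a, pos_a), (b, _) in zip(tokens, tokens[1:])]

def build_sample_data(data_set):
    """Upload tokens/phrases with positions to a knowledge base and
    number the knowledge data to form sample data."""
    sample_data = []
    for number, raw in enumerate(data_set):      # traversal reading
        tokens = segment(raw)
        sample_data.append({
            "id": number,
            "tokens": tokens,
            "phrases": secondary_segment(tokens),
        })
    return sample_data
```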
carrying out entity splicing processing on the sample data to form standard data, wherein the entity splicing processing comprises the following steps:
calling a keyword meaning corresponding to the sample data according to the sample data;
performing extended splicing on the sample data based on the keyword meaning to form guess meaning; wherein the guessing meaning includes a first meaning, a second meaning, and a third meaning;
calling the keyword meanings of the adjacent sample data of the sample data to form first-order keyword meanings, and performing expansion splicing on the first-order keyword meanings to form first-order guess meanings; wherein the first-order guess meaning includes a first-order first meaning, a first-order second meaning, and a first-order third meaning;
acquiring next sample data of the adjacent sample data to form a second-order keyword meaning, and performing expansion splicing on the second-order keyword meaning to form a second-order guess meaning; wherein the second order guessing meaning comprises a second order first meaning, a second order second meaning, and a second order third meaning;
selecting a guess meaning from the first meaning, the second meaning and the third meaning based on the first order guess meaning and the second order guess meaning according to a semantic harmony algorithm as a sample semantic meaning of the sample data;
and splicing the sample semantics in the sample data to finish entity splicing processing to form standard data.
In this way, the standard data is formed, and it should be noted that, after one sample data generates the standard data, the adjacent sample data and the next sample data are sequentially used as the sample data to obtain the sample semantics of the adjacent sample data and the next sample data.
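The patent does not specify the semantic harmony algorithm, so the sketch below assumes a simple consistency score: the candidate sharing the most terms with the neighbouring first-order and second-order guess meanings is selected as the sample semantics and spliced into the sample data.

```python
def semantic_harmony(candidates, first_order_guesses, second_order_guesses):
    """Select, from the first/second/third meanings, the guess meaning
    most consistent with the neighbours' guess meanings.
    Term overlap is an assumed stand-in for the harmony measure."""
    context = set()
    for guess in list(first_order_guesses) + list(second_order_guesses):
        context.update(guess.split())
    return max(candidates, key=lambda m: len(set(m.split()) & context))

def splice_sample_semantics(sample, candidates, first_order, second_order):
    """Splice the selected sample semantics into the sample data,
    completing the entity splicing processing (standard data)."""
    spliced = dict(sample)
    spliced["sample_semantics"] = semantic_harmony(
        candidates, first_order, second_order)
    return spliced
```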
A model modification unit 103 for inputting the standard data into the data enhancement model to obtain a convergence model; the data enhancement model is obtained by performing attention mechanism transformation on a classical model based on NLP; the step of inputting the standard data into a data enhancement model to obtain a convergence model, comprising:
forming a basic model by adopting an NLP classical model structure; wherein the basic model is a transformer structure;
modifying the attention mechanism of the base model to obtain a data enhancement model;
inputting the standard data into the data enhancement model so that the data enhancement model obtains a prediction label according to the knowledge data in the standard data and calculates loss data according to the prediction label and the sample semantics;
and transmitting the loss data back to the data enhancement model to adjust the parameters of the data enhancement model according to the loss data until the data enhancement model converges, thereby obtaining a convergence model.
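A framework-free sketch of this loss-feedback loop is given below. Here `model_step` is a hypothetical callable that runs one pass over the standard data, adjusts parameters from the loss, and returns the loss; treating "loss change below a tolerance" as the convergence test is an assumption, since the patent only says "until the data enhancement model converges".

```python
def train_until_convergence(model_step, standard_data,
                            tol=1e-4, max_epochs=1000):
    """Repeatedly feed the loss back into the model until it converges."""
    previous_loss = float("inf")
    for epoch in range(max_epochs):
        loss = model_step(standard_data)     # predict labels, compute loss,
        if abs(previous_loss - loss) < tol:  # adjust parameters inside
            return epoch, loss               # converged model state
        previous_loss = loss
    return max_epochs, previous_loss

# Toy stand-in model: one scalar parameter fitted to a target by
# gradient descent, purely to demonstrate the loop.
class ToyModel:
    def __init__(self, target):
        self.w, self.target = 0.0, target

    def step(self, _standard_data):
        self.w -= 0.1 * 2 * (self.w - self.target)   # parameter adjustment
        return (self.w - self.target) ** 2           # loss data
```

Usage: `epochs, loss = train_until_convergence(ToyModel(3.0).step, None)` runs until the loss plateaus.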
Wherein modifying the attention mechanism of the base model to obtain a data enhancement model comprises:
acquiring the relative position between two input spans according to the input span positions of the basic model;
obtaining a relative matrix according to the relative position;
fusing the relative positions of the two input spans based on the relative matrix to obtain a fusion matrix;
calculating the original self-attention based on the fusion matrix;
computing a content-content attention mechanism based on the original self-attention and concurrently computing a content-location attention mechanism to form a data enhancement layer;
and fusing and unifying the data enhancement layer and the base model to form a data enhancement model.
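The patent does not give the exact attention formulas, so the sketch below assumes a Transformer-XL/DeBERTa-style decomposition: the relative matrix is built from span positions, fused through relative position embeddings, and the content-content and content-location (content-to-position) scores are summed before the softmax to form the data enhancement layer's attention.

```python
import numpy as np

def relative_matrix(n):
    """R[i, j] = i - j: the relative position between two input spans."""
    idx = np.arange(n)
    return idx[:, None] - idx[None, :]

def enhanced_attention(X, Wq, Wk, Wv, pos_emb):
    """Data-enhancement-layer attention: content-content plus
    content-location scores (a DeBERTa-style assumption)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    n, d = Q.shape
    R = relative_matrix(n)
    P = pos_emb[R + n - 1]                 # (n, n, d) relative embeddings
    cc = Q @ K.T                           # content-content scores
    cp = np.einsum("id,ijd->ij", Q, P)     # content-location scores
    scores = (cc + cp) / np.sqrt(d)        # fused attention scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V
```

`pos_emb` has `2n - 1` rows, one embedding per possible relative offset in `[-(n-1), n-1]`.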
The entity naming unit 104 is used for inputting the text to be labeled into the convergence model to obtain entity naming information based on the enhanced information; the step of inputting the text to be labeled into the convergence model to obtain entity naming information based on the enhanced information comprises the following steps:
inputting a text to be labeled into the convergence model so that the labeled text generates basic data through the basic model;
enhancing the basic data through the data enhancement layer to form data enhancement information;
and acquiring entity naming information aiming at the text to be labeled based on the data enhancement information.
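The labeling path of the entity naming unit can be sketched end to end as below. All three components, and the toy gazetteer used in the demonstration, are hypothetical stand-ins for the converged model's actual parts.

```python
def name_entities(text, base_model, enhancement_layer, label_decoder):
    """Text to be labeled -> basic data -> data enhancement
    information -> entity naming (label) information."""
    basic_data = base_model(text)
    enhancement_info = enhancement_layer(basic_data)
    return label_decoder(enhancement_info)

# Toy components purely to show the data flow.
GAZETTEER = {"first", "building"}  # assumed toy entity vocabulary
labels = name_entities(
    "visit the first building today",
    base_model=lambda text: text.split(),
    enhancement_layer=lambda toks: [(t, t in GAZETTEER) for t in toks],
    label_decoder=lambda info: ["ENT" if hit else "O" for _, hit in info],
)
```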
Specifically, in the conventional technology the entity naming information is generated directly by the basic model, but the accuracy is not high. This embodiment therefore adds the step of enhancing the basic data through the data enhancement layer to form data enhancement information: the basic data are processed by the improved data enhancement model, whose attention mechanism has been modified so that the enhanced vocabulary information can be better utilized when labeling named entities, thereby improving the precision of entity naming labeling.
Moreover, the entity naming information in this embodiment is label information; that is, the convergence model automatically generates a prediction label for the text to be labeled, and the prediction label is highly accurate. Based on vocabulary information, n-gram information and entity information from the knowledge base, more prior information improves named entity labeling precision, and enhanced information is exploited beyond the vocabulary-only basis of the traditional NER task. This avoids the drawbacks of the dynamic lattice structures that would otherwise be required to exploit lattice-enhanced information, namely that they cannot be parallelized and cannot be transplanted to other non-sequential network structures.
As described above, in the named entity recognition apparatus 100 provided by the present invention, the data packing unit 101 first packs the obtained specification sentences to form a data set; the entity splicing unit 102 then traverses the data set to form sample data and performs entity splicing processing on the sample data to form standard data; the model modification unit 103 then inputs the standard data into the data enhancement model to obtain the convergence model, the data enhancement model being a model obtained by modifying the attention mechanism of a classical NLP model; and the entity naming unit 104 finally inputs the text to be labeled into the convergence model to obtain entity naming information based on the enhanced information. Because the attention mechanism modification combines vocabulary information, n-gram information and entity information from the knowledge base, more prior information is available to improve named entity labeling precision, and the enhanced information is exploited beyond the vocabulary-only basis of the classical model, avoiding the drawbacks of the dynamic lattice structures that would otherwise be required to exploit lattice-enhanced information, namely that they cannot be parallelized and cannot be transplanted to other non-sequential network structures.
The named entity recognition device provided by the invention has the following advantages. Word segmentation processing is performed on the code data to obtain segmented words and their corresponding positions, the segmented words are segmented a second time to obtain phrases and their corresponding positions, and the segmented words, phrases and their positions are uploaded to a knowledge base to form knowledge data, so that a complete phrase-to-semantics mapping can be formed within the data itself, laying a foundation for accurate labeling. A guess meaning is selected from the first meaning, the second meaning and the third meaning as the sample semantics of the sample data according to a semantic harmony algorithm based on the first-order and second-order guess meanings; the adjacent sample data and the next sample data are then taken in turn as the sample data to obtain their sample semantics, that is, the sample semantics are determined from the semantics of the data adjacent to the sample data, and this cycle repeats, so that the sample semantics of each piece of sample data are determined accurately. The original self-attention is calculated based on the fusion matrix, the content-content attention mechanism is calculated from the original self-attention and the content-location attention mechanism is calculated at the same time to form a data enhancement layer, and the data enhancement layer is fused and unified with the basic model to form the data enhancement model, thereby improving the labeling precision of the overall convergence model.
As shown in fig. 3, the present invention provides an electronic device 1 implementing the named entity recognition method.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as a named entity recognition program 12, stored in the memory 11 and executable on said processor 10.
The memory 11 includes at least one type of readable storage medium, including flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory), magnetic memory, magnetic disk, optical disk, and the like. In some embodiments the memory 11 may be an internal storage unit of the electronic device 1, such as a hard disk of the electronic device 1. In other embodiments the memory 11 may also be an external storage device of the electronic device 1, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card or a Flash memory Card (Flash Card) provided on the electronic device 1. Further, the memory 11 may include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only for storing application software installed in the electronic device 1 and various types of data, such as the code of the named entity recognition program, but also for temporarily storing data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the whole electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (such as named entity recognition programs) stored in the memory 11 and calling data stored in the memory 11.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 3 only shows an electronic device with components, and it will be understood by a person skilled in the art that the structure shown in fig. 3 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The named entity recognition program 12 stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed in the processor 10, may implement:
packaging the acquired specification sentences to form a data set;
traversing to acquire the data set to form sample data, and performing entity splicing processing on the sample data to form standard data;
inputting the standard data into a data enhancement model to obtain a convergence model; the data enhancement model is a model obtained by performing attention mechanism transformation on a classical model based on NLP;
and inputting the text to be marked into the convergence model to acquire entity naming information based on the enhanced information.
Specifically, the specific implementation method of the processor 10 for the instruction may refer to the description of the relevant steps in the embodiment corresponding to fig. 1, which is not described herein again. It is emphasized that, to further ensure the privacy and security of the named entity identification, the data of the named entity identification is stored in the node of the blockchain in which the server cluster is located.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. The computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
An embodiment of the present invention further provides a computer-readable storage medium, where the storage medium may be nonvolatile or volatile, and the storage medium stores a computer program, and when the computer program is executed by a processor, the computer program implements:
packaging the acquired specification sentences to form a data set;
traversing to acquire the data set to form sample data, and performing entity splicing processing on the sample data to form standard data;
inputting the standard data into a data enhancement model to obtain a convergence model; the data enhancement model is a model obtained by performing attention mechanism transformation on a classical model based on NLP;
and inputting the text to be marked into the convergence model to acquire entity naming information based on the enhanced information.
Specifically, the specific implementation method of the computer program when being executed by the processor may refer to the description of the relevant steps in the named entity identification method in the embodiment, which is not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. Terms such as first and second are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A named entity recognition method, comprising:
packaging the acquired specification sentences to form a data set;
traversing to obtain the data set to form sample data, and performing entity splicing processing on the sample data to form standard data;
inputting the standard data into a data enhancement model to obtain a convergence model; the data enhancement model is obtained by performing attention mechanism transformation on a classical model based on NLP;
and inputting the text to be marked into the convergence model to acquire entity naming information based on the enhanced information.
2. The named entity recognition method of claim 1, wherein said packaging the retrieved specification statements to form a data set comprises:
obtaining a sample statement;
extracting keywords from the sample sentence to obtain keywords;
acquiring the keyword meaning pre-labeled to the keyword;
mapping the keyword meanings with the sample sentences to obtain specification sentences;
packing the specification sentences to form a sentence packet;
and performing data conversion on the statement packet to form a data set.
3. The named entity recognition method of claim 2, wherein said traversing obtains the data set to form sample data, comprising:
performing traversal reading on the data set to obtain original data;
performing code compilation on the original data to form code data;
performing word segmentation processing on the code data to obtain word segments and corresponding positions of the word segments;
performing secondary segmentation on the word segmentation to obtain a word group and a corresponding position of the word group;
uploading the word segmentation, the corresponding position of the word group and the corresponding position of the word group to a knowledge base to form knowledge data;
numbering the knowledge data to form sample data.
4. The named entity recognition method of claim 3, wherein said performing an entity concatenation process on said sample data to form standard data comprises:
calling a keyword meaning corresponding to the sample data according to the sample data;
performing extended splicing on the sample data based on the keyword meaning to form guess meaning; wherein the guessing meaning includes a first meaning, a second meaning, and a third meaning;
calling the keyword meanings of the adjacent sample data of the sample data to form first-order keyword meanings, and performing expansion splicing on the first-order keyword meanings to form first-order guess meanings;
acquiring next sample data of the adjacent sample data to form a second-order keyword meaning, and performing expansion splicing on the second-order keyword meaning to form a second-order guess meaning;
selecting a guess meaning from the first meaning, the second meaning and the third meaning based on the first order guess meaning and the second order guess meaning according to a semantic harmony algorithm as a sample semantic meaning of the sample data;
and splicing the sample semantics in the sample data to finish entity splicing processing to form standard data.
5. The named entity recognition method of claim 4, wherein said entering the standard data into a data enhancement model to obtain a convergence model comprises:
forming a basic model by adopting an NLP classical model structure;
modifying the attention mechanism of the base model to obtain a data enhancement model;
inputting the standard data into the data enhancement model so that the data enhancement model obtains a prediction label according to the knowledge data in the standard data and calculates loss data according to the prediction label and the sample semantics;
and transmitting the loss data back to the data enhancement model to adjust the parameters of the data enhancement model according to the loss data until the data enhancement model converges, thereby obtaining a convergence model.
6. The named entity recognition method of claim 5 wherein said adapting the attention mechanism of the base model to obtain a data enhancement model comprises:
acquiring the relative position between two input spans according to the input span positions of the basic model;
obtaining a relative matrix according to the relative position;
fusing the relative positions of the two input spans based on the relative matrix to obtain a fusion matrix;
calculating the original self-attention based on the fusion matrix;
computing a content-content attention mechanism based on the original self-attention and concurrently computing a content-location attention mechanism to form a data enhancement layer;
and fusing and unifying the data enhancement layer and the base model to form a data enhancement model.
7. The named entity recognition method of claim 6, wherein said entering text to be annotated into said converged model to obtain enhanced information based entity naming information, comprises:
inputting a text to be labeled into the convergence model so that the labeled text generates basic data through the basic model;
performing enhancement processing on the basic data through the data enhancement layer to form data enhancement information;
and acquiring entity naming information aiming at the text to be labeled based on the data enhancement information.
8. An apparatus for named entity recognition, the apparatus comprising:
the data packing unit is used for packing the acquired specification sentences to form a data set;
the entity splicing unit is used for traversing and acquiring the data set to form sample data and carrying out entity splicing processing on the sample data to form standard data;
the model transformation unit is used for inputting the standard data into a data enhancement model to obtain a convergence model; the data enhancement model is obtained by performing attention mechanism transformation on a classical model based on NLP;
and the entity naming unit is used for inputting the text to be labeled into the convergence model so as to obtain entity naming information based on the enhanced information.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and the number of the first and second groups,
a memory communicatively coupled to the at least one processor; wherein the content of the first and second substances,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the steps of the named entity recognition method according to any one of claims 1 to 7.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out a named entity recognition method as claimed in one of claims 1 to 7.
CN202110738499.6A 2021-06-30 2021-06-30 Named entity recognition method, named entity recognition device, electronic equipment and storage medium Active CN113723102B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110738499.6A CN113723102B (en) 2021-06-30 2021-06-30 Named entity recognition method, named entity recognition device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN113723102A true CN113723102A (en) 2021-11-30
CN113723102B CN113723102B (en) 2024-04-26

Family

ID=78672945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110738499.6A Active CN113723102B (en) 2021-06-30 2021-06-30 Named entity recognition method, named entity recognition device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113723102B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108536679A * 2018-04-13 2018-09-14 Tencent Technology (Chengdu) Co., Ltd. Named entity recognition method, apparatus, device and computer-readable storage medium
WO2020164267A1 * 2019-02-13 2020-08-20 Ping An Technology (Shenzhen) Co., Ltd. Text classification model construction method and apparatus, and terminal and storage medium
CN111581361A * 2020-04-22 2020-08-25 Tencent Technology (Shenzhen) Co., Ltd. Intention recognition method and device
CN111738003A * 2020-06-15 2020-10-02 Institute of Computing Technology, Chinese Academy of Sciences Named entity recognition model training method, named entity recognition method, and medium
CN112183102A * 2020-10-15 2021-01-05 Shanghai Mininglamp Artificial Intelligence (Group) Co., Ltd. Named entity recognition method based on attention mechanism and graph attention network
WO2021043085A1 * 2019-09-04 2021-03-11 Ping An Technology (Shenzhen) Co., Ltd. Method and apparatus for recognizing named entity, computer device, and storage medium
CN112667800A * 2020-12-21 2021-04-16 Shenzhen OneConnect Smart Technology Co., Ltd. Keyword generation method and device, electronic device and computer storage medium
CN113010690A * 2021-03-29 2021-06-22 South China University of Technology Method for enhancing entity embedding based on text information


Also Published As

Publication number Publication date
CN113723102B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
CN104361127B Rapid multilingual construction method for question-answering interfaces based on domain ontology and template logic
CN112364660B (en) Corpus text processing method, corpus text processing device, computer equipment and storage medium
CN112000805A (en) Text matching method, device, terminal and storage medium based on pre-training model
WO2020010834A1 (en) Faq question and answer library generalization method, apparatus, and device
CN114781402A (en) Method and device for identifying inquiry intention, electronic equipment and readable storage medium
CN113515938B (en) Language model training method, device, equipment and computer readable storage medium
CN113821622B (en) Answer retrieval method and device based on artificial intelligence, electronic equipment and medium
CN113360654B (en) Text classification method, apparatus, electronic device and readable storage medium
CN112257860A (en) Model generation based on model compression
CN113807973A (en) Text error correction method and device, electronic equipment and computer readable storage medium
CN116821373A (en) Map-based prompt recommendation method, device, equipment and medium
CN115238115A (en) Image retrieval method, device and equipment based on Chinese data and storage medium
CN114020892A (en) Answer selection method and device based on artificial intelligence, electronic equipment and medium
CN112667878A (en) Webpage text content extraction method and device, electronic equipment and storage medium
CN112668281A (en) Automatic corpus expansion method, device, equipment and medium based on template
CN110197521B (en) Visual text embedding method based on semantic structure representation
CN114757154B (en) Job generation method, device and equipment based on deep learning and storage medium
CN113723102B (en) Named entity recognition method, named entity recognition device, electronic equipment and storage medium
CN115510188A (en) Text keyword association method, device, equipment and storage medium
CN115346095A (en) Visual question answering method, device, equipment and storage medium
WO2022227196A1 (en) Data analysis method and apparatus, computer device, and storage medium
CN110851572A (en) Session labeling method and device, storage medium and electronic equipment
CN115146064A (en) Intention recognition model optimization method, device, equipment and storage medium
CN114020774A (en) Method, device and equipment for processing multiple rounds of question-answering sentences and storage medium
CN114398902A (en) Chinese semantic extraction method based on artificial intelligence and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant