CN111581970A - Text recognition method, device and storage medium for network context - Google Patents
- Publication number: CN111581970A
- Application number: CN202010396183.9A
- Authority
- CN
- China
- Prior art keywords
- text
- window
- word
- short
- context
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/10—Text processing
- G06F40/12—Use of codes for handling textual entities
- G06F40/126—Character encoding
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
- G06F40/216—Parsing using statistical methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/047—Probabilistic or stochastic networks
Abstract
The invention provides a text recognition method, device and storage medium for a network context. The method comprises the following steps: constructing a style semantic model based on a text long window, and constructing a radical-level semantic model based on a text short window; training a corpus of the network context based on the style semantic model and the radical-level semantic model to obtain a Chinese word vector model of the network context; and recognizing input text of the network context using the Chinese word vector model and outputting a recognition result. The invention uses two different windows during word segmentation: the long window extracts semantic information of the networked style, while the short window extracts fine-grained semantic features at different levels. The two are combined in the training stage to obtain a more accurate word vector representation and thus improve the text recognition rate in a network context.
Description
Technical Field
The invention relates to the technical field of text data processing, in particular to a text recognition method, a text recognition device and a storage medium for a network context.
Background
Text vectorization has long been an important research direction in computer and artificial intelligence technology, and a significant challenge in natural language processing. The quality of the text vectorization directly influences the performance of downstream natural language analysis models. Text was first represented with the one-hot model, which gradually evolved into the Bag-of-Words model. These representations are simple and clear and solve the basic problem of representing text in a computer, but they ignore the semantic correlation between a word and its context and the sequential nature of language, splitting apart coherent semantic information, and they suffer from severe vector sparsity and the curse of dimensionality. With the development of deep neural networks, researchers proposed distributed word vector representations, which map each word into a low-dimensional real-valued space, reducing dimensionality while mining the relatedness between words, so that word vectors carry richer semantic information. Mikolov et al. proposed the classical distributed word vector model Word2Vec, which includes two models, CBOW (Continuous Bag of Words) and Skip-gram: the former predicts a word from its context, and the latter predicts context information from a word. Word2Vec learns dense word vectors from the co-occurrence information between words and their contexts; the model is simple and practical and is widely applied in natural language processing tasks.
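For concreteness, the CBOW forward pass described above — averaging the context embeddings and scoring the target word with a softmax over the vocabulary — can be sketched as follows. This is a toy illustration with random, untrained weights and made-up dimensions, not the patent's model:

```python
import math
import random

random.seed(0)
V, m = 10, 4  # toy vocabulary size and embedding dimension
W_in  = [[random.gauss(0, 1) for _ in range(m)] for _ in range(V)]  # context embeddings
W_out = [[random.gauss(0, 1) for _ in range(m)] for _ in range(V)]  # output embeddings

def softmax(scores):
    mx = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def cbow_predict(context_ids):
    """CBOW: hidden layer = mean of the context word embeddings,
    then a softmax over the whole vocabulary scores the target word."""
    h = [sum(W_in[i][d] for i in context_ids) / len(context_ids) for d in range(m)]
    scores = [sum(W_out[w][d] * h[d] for d in range(m)) for w in range(V)]
    return softmax(scores)

p = cbow_predict([1, 2, 4, 5])  # ids of the words around a target
print(round(sum(p), 6))         # a valid probability distribution: 1.0
```

Training would adjust `W_in` and `W_out` to raise the probability of each observed target word; the models below modify which context positions feed this average.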
The main drawbacks of the prior art are as follows:
Social media such as microblogs and forum posts differ from the formal, official register of news text: social text is usually colloquial and networked in style, for example, "a home held by an intelligent robot is really a wrong one". In a social context, existing words may be assigned new meanings, or new network terms may be coined outright. Word2Vec word vectors trained on a standard corpus (such as encyclopedias and dictionaries) cannot accurately express the network meanings of such words, which greatly affects analysis tasks on social text.
The CBOW model was designed for the English language system but is also applied to the expression of Chinese word vectors. However, compared with English, the semantic composition of Chinese is more complex: Chinese words are formed from Chinese characters, and the semantics of a character are generally related to the meanings of its component radicals. If the CBOW model is used directly to learn Chinese word vectors, the latent semantic information of the characters is ignored and the generalization ability of the resulting word vector model is weak. Moreover, recognition of such text is not necessarily effective and model convergence during training is slow; a new Chinese word vector model is urgently needed to overcome one or more of these technical defects.
Disclosure of Invention
The present invention proposes the following technical solutions to address one or more technical defects in the prior art.
A method of text recognition of a network context, the method comprising:
a modeling step, namely constructing a style semantic model based on a text long window and constructing a component-level semantic model based on a text short window;
training, namely training a corpus of the network context based on the style semantic model and the radical-level semantic model to obtain a Chinese word vector model of the network context;
and a step of recognizing the input text of the network context by using the Chinese word vector model of the network context and outputting a recognition result.
Furthermore, the corpus sequence obtained by segmenting any corpus s in the corpus is s = {w_1, …, w_{t−1}, w_t, w_{t+1}, …, w_N}, where w_t is the t-th word in the segmented sequence, t = 1, …, N, and N is the total number of words in the corpus sequence. A text window is constructed with the target word w_t as its center, and the text short window is defined as

window_s = { w_{t±d_s} | 1 ≤ d_s ≤ θ }

where d_s represents the distance from a word in the text short window to the target word w_t, θ is the distance threshold of the text short window, and window_s represents the set of words formed by the context in the neighborhood of the target word w_t.

The text long window is defined as

window_l = { w_{t±d_l} | θ + 1 ≤ d_l ≤ β }

where d_l represents the distance from a word in the text long window to the target word w_t, with minimum value θ + 1 and maximum value β (β ≤ N); window_l represents the context components that are far from the target word w_t, excluding the content of the text short window.
Furthermore, the process of constructing the style semantic model based on the text long window is as follows: the text long window window_l is taken as the input of CBOW, and the hidden-layer vector is computed as

h_1 = (1 / (2(β − θ))) · Σ_{θ < |j| ≤ β} v_{w_{t+j}}

where v_{w_{t+j}} represents the code vector corresponding to the context w_{t+j} of the target word within the text long window, β is the maximum distance between the target word w_t and a context word w_{t+j}, and the total length of the text long window is 2β.
Furthermore, the process of constructing the radical-level semantic model based on the text short window is as follows:
The radical r is converted into a Chinese character r* with the corresponding meaning via a character escape dictionary, giving the character sequence x of the short text after radical escape.
A self-attention mechanism is adopted to perform weighted fusion coding of the Chinese characters and radicals corresponding to the words. The self-attention weight α is computed as

α_i = softmax(f(x^T x_i))

where x_i represents the character sequence, after radical escape, corresponding to the i-th word in the text short window, i ∈ {t ± d_s | 1 ≤ d_s ≤ θ}; x^T is the transpose of x; and the similarity function f takes the dot-product form.
The code vector of each word within the text short window is

v_x = Σ_i α_i v_i

where v_i represents the code vector of the i-th element of the character sequence corresponding to the word x in the text short window.
The attention-derived code vector v_x is input to CBOW, and the hidden-layer output vector is computed as

h_2 = (1 / (2θ)) · Σ_{0 < |j| ≤ θ} v^x_{t+j}

where θ is the maximum distance between the target word w_t and a context word w_{t+j}, the total length of the text short window is 2θ, and v^x_{t+j} represents the code vector corresponding to the context of the target word w_t within the text short window.
Further, the training step operates as:
An m-dimensional vector v_w is randomly generated for each word after segmentation in the corpus, and the log-likelihood function of the corpus sequence s is computed:

L(s) = Σ_{t=1}^{N} L(w_t)

where L(w_t) is the log-likelihood computed from the conditional probability of the target word w_t given its context,

L(w_t) = log p(w_t | window_l, window_s)

and the conditional probability of the target word w_t given the corresponding context is computed by the softmax function,

p(w_t | window_l, window_s) = exp(Σ_{k=1}^{2} h_k^T v_{w_t}) / Σ_{w∈V} exp(Σ_{k=1}^{2} h_k^T v_w)

where h_k^T denotes the transpose of the k-th hidden-layer vector, k = 1, 2; h_1 is the output vector of the style semantic model's hidden layer; h_2 is the output vector of the radical-level semantic model's hidden layer; v_{w_t} is the word vector of the target word; and v_w is the word vector of a context word, with V the vocabulary. Training is optimized with the objective function L(s), and the model parameters are updated to obtain the final Chinese word vector model v_w ∈ R^m.
The invention also proposes a device for text recognition of a network context, comprising:
the modeling unit is used for constructing a style semantic model based on the text long window and constructing a component-level semantic model based on the text short window;
the training unit is used for training a corpus of the network context based on the style semantic model and the radical-level semantic model to obtain a Chinese word vector model of the network context;
and the recognition unit is used for recognizing the input text of the network context by using the Chinese word vector model of the network context and outputting a recognition result.
Furthermore, the corpus sequence obtained by segmenting any corpus s in the corpus is s = {w_1, …, w_{t−1}, w_t, w_{t+1}, …, w_N}, where w_t is the t-th word in the segmented sequence, t = 1, …, N, and N is the total number of words in the corpus sequence. A text window is constructed with the target word w_t as its center, and the text short window is defined as

window_s = { w_{t±d_s} | 1 ≤ d_s ≤ θ }

where d_s represents the distance from a word in the text short window to the target word w_t, θ is the distance threshold of the text short window, and window_s represents the set of words formed by the context in the neighborhood of the target word w_t.

The text long window is defined as

window_l = { w_{t±d_l} | θ + 1 ≤ d_l ≤ β }

where d_l represents the distance from a word in the text long window to the target word w_t, with minimum value θ + 1 and maximum value β (β ≤ N); window_l represents the context components that are far from the target word w_t, excluding the content of the text short window.
Furthermore, the process of constructing the style semantic model based on the text long window is as follows: the text long window window_l is taken as the input of CBOW, and the hidden-layer vector is computed as

h_1 = (1 / (2(β − θ))) · Σ_{θ < |j| ≤ β} v_{w_{t+j}}

where v_{w_{t+j}} represents the code vector corresponding to the context w_{t+j} of the target word within the text long window, β is the maximum distance between the target word w_t and a context word w_{t+j}, and the total length of the text long window is 2β.
Furthermore, the process of constructing the radical-level semantic model based on the text short window is as follows:
The radical r is converted into a Chinese character r* with the corresponding meaning via a character escape dictionary, giving the character sequence x of the short text after radical escape.
A self-attention mechanism is adopted to perform weighted fusion coding of the Chinese characters and radicals corresponding to the words. The self-attention weight α is computed as

α_i = softmax(f(x^T x_i))

where x_i represents the character sequence, after radical escape, corresponding to the i-th word in the text short window, i ∈ {t ± d_s | 1 ≤ d_s ≤ θ}; x^T is the transpose of x; and the similarity function f takes the dot-product form.
The code vector of each word within the text short window is

v_x = Σ_i α_i v_i

where v_i represents the code vector of the i-th element of the character sequence corresponding to the word x in the text short window.
The attention-derived code vector v_x is input to CBOW, and the hidden-layer output vector is computed as

h_2 = (1 / (2θ)) · Σ_{0 < |j| ≤ θ} v^x_{t+j}

where θ is the maximum distance between the target word w_t and a context word w_{t+j}, the total length of the text short window is 2θ, and v^x_{t+j} represents the code vector corresponding to the context of the target word w_t within the text short window.
Further, the training unit performs the operations of:
An m-dimensional vector v_w is randomly generated for each word after segmentation in the corpus, and the log-likelihood function of the corpus sequence s is computed:

L(s) = Σ_{t=1}^{N} L(w_t)

where L(w_t) is the log-likelihood computed from the conditional probability of the target word w_t given its context,

L(w_t) = log p(w_t | window_l, window_s)

and the conditional probability of the target word w_t given the corresponding context is computed by the softmax function,

p(w_t | window_l, window_s) = exp(Σ_{k=1}^{2} h_k^T v_{w_t}) / Σ_{w∈V} exp(Σ_{k=1}^{2} h_k^T v_w)

where h_k^T denotes the transpose of the k-th hidden-layer vector, k = 1, 2; h_1 is the output vector of the style semantic model's hidden layer; h_2 is the output vector of the radical-level semantic model's hidden layer; v_{w_t} is the word vector of the target word; and v_w is the word vector of a context word, with V the vocabulary. Training is optimized with the objective function L(s), and the model parameters are updated to obtain the final Chinese word vector model v_w ∈ R^m.

The invention also proposes a computer-readable storage medium having stored thereon computer program code which, when executed by a computer, performs any of the methods described above.
The invention has the following technical effects. The invention discloses a text recognition method for a network context, comprising: a modeling step of constructing a style semantic model based on a text long window and a radical-level semantic model based on a text short window; a training step of training a corpus of the network context based on the style semantic model and the radical-level semantic model to obtain a Chinese word vector model of the network context; and a recognition step of recognizing input text of the network context with the Chinese word vector model and outputting a recognition result. To improve the accuracy of text recognition in a network context, the invention creatively uses two different windows during word segmentation, a text long window and a text short window ("long" and "short" being relative to each other). The long window extracts semantic information of the networked style; the short window extracts fine-grained semantic features at different levels; and the two are combined in the training stage to obtain a more accurate word vector representation and thus a higher recognition rate. Because learning the text style requires the model to attend to the stylistic tendency of the corpus over a wider range rather than only the context near the target word, the text long window is adopted. Radicals are introduced to enhance the semantic information of word vectors, so that multi-level semantic features are extracted and the accuracy of text classification improves. Training on social corpora and optimizing the objective function L(s) accelerates model training and improves training efficiency; during training, the style characteristics of the text are combined with the characteristics of the radicals, and a radical escape method is established, thereby improving the text recognition rate.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings.
Fig. 1 is a flowchart of a text recognition method for a network context according to an embodiment of the present invention.
Fig. 2 is a block diagram of a network context text recognition apparatus according to an embodiment of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows a text recognition method of a network context according to the present invention, which includes:
a modeling step S101, namely constructing a style semantic model based on a text long window and constructing a component-level semantic model based on a text short window;
a training step S102, training a corpus of the network context based on the style semantic model and the radical-level semantic model to obtain a Chinese word vector model of the network context;
and a recognition step S103, recognizing the input text of the network context by using the Chinese word vector model of the network context and outputting a recognition result.
The invention aims to solve the accuracy problem of text recognition in a network context. It creatively uses two different windows during word segmentation, a text long window and a text short window ("long" and "short" being relative to each other): the long window extracts semantic information of the networked style, the short window extracts fine-grained semantic features at different levels, and the two are combined in the training stage to obtain a more accurate word vector representation and thus improve the text recognition rate of the network context. This is one of the important inventive points of the invention.
In one embodiment, the corpus sequence obtained by segmenting any corpus s in the corpus (the corpus comprising at least one corpus sequence) is s = {w_1, …, w_{t−1}, w_t, w_{t+1}, …, w_N}, where w_t is the t-th word in the segmented sequence, t = 1, …, N, and N is the total number of words in the corpus sequence. A text window is constructed with the target word w_t as its center, and the text short window is defined as

window_s = { w_{t±d_s} | 1 ≤ d_s ≤ θ }

where d_s represents the distance from a word in the text short window to the target word w_t, θ is the distance threshold of the text short window (its experimental value is between 1 and 3), and window_s represents the set of words formed by the context in the neighborhood of the target word w_t.

The text long window is defined as

window_l = { w_{t±d_l} | θ + 1 ≤ d_l ≤ β }

where d_l represents the distance from a word in the text long window to the target word w_t, with minimum value θ + 1 and maximum value β (its experimental value is between 4 and 7), β ≤ N; window_l represents the context components that are far from the target word w_t, excluding the content of the text short window.
For example, let θ = 2 and β = 6, and take the sentence "today I go with friends to the park to view the beautiful scenery of cherry blossoms". With the target word w_t being "park", the text short window contains the 4 words "together", "go to", "view", "cherry blossom", and the text long window contains the 8 words "today", "I", "and", "friend", "hold", "of", "beautiful", "scenery".
In the training process, the target words to be predicted are all words in the corpus S, the sliding step length of the long and short text windows is 1, and each word of each corpus sequence in the corpus is traversed.
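The window construction above can be sketched directly. This is a minimal illustration under the definitions given; the function name and the tokenized paraphrase of the example sentence are chosen for illustration, not taken from the patent:

```python
def text_windows(tokens, t, theta, beta):
    """Split the context of target index t into the text short window
    (distance 1..theta) and the text long window (distance theta+1..beta)."""
    short, long_ = [], []
    for j, w in enumerate(tokens):
        d = abs(j - t)
        if 1 <= d <= theta:
            short.append(w)
        elif theta < d <= beta:
            long_.append(w)
    return short, long_

# Tokenized paraphrase of the example sentence, with "park" as the target word.
tokens = ["today", "I", "and", "friend", "together", "go-to", "park",
          "view", "cherry-blossom", "hold", "of", "beautiful", "scenery"]
short, long_ = text_windows(tokens, tokens.index("park"), theta=2, beta=6)
print(short)  # the 4 words within distance 2 of "park"
print(long_)  # the 8 words at distance 3..6, excluding the short window
```

Sliding the target index t over every position reproduces the traversal with step length 1 described above.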
In one embodiment, the process of constructing the style semantic model based on the text long window is as follows: the text long window window_l is taken as the input of CBOW, and the hidden-layer vector is computed as

h_1 = (1 / (2(β − θ))) · Σ_{θ < |j| ≤ β} v_{w_{t+j}}

where v_{w_{t+j}} represents the code vector corresponding to the context w_{t+j} of the target word within the text long window, β is the maximum distance between the target word w_t and a context word w_{t+j}, and the total length of the text long window is 2β.
To obtain Chinese word vectors suited to the social context style, the CBOW model is improved: the style semantic model is based on CBOW, but the context window range is expanded and the text content near the target word is ignored. While weakening local context semantic information, this improves the model's understanding of the overall text style and generates Chinese word vectors suited to the network context. By adopting the text long window, learning of the text style attends to the stylistic tendency of the corpus over a wider range rather than focusing on the context near the target word; this is another important inventive point of the invention.
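Under this CBOW-style reading, the style model's hidden vector is simply the element-wise mean of the code vectors of the words falling in the long window. A minimal sketch with toy vectors (it assumes a full window away from sentence boundaries, so the averaging denominator is just the number of long-window words):

```python
def style_hidden(long_window_vectors):
    """Style-model hidden layer: element-wise mean of the code vectors
    of the words falling in the text long window (CBOW-style averaging)."""
    n = len(long_window_vectors)
    m = len(long_window_vectors[0])
    return [sum(v[d] for v in long_window_vectors) / n for d in range(m)]

# Four toy 2-dimensional code vectors standing in for long-window words.
vecs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
h1 = style_hidden(vecs)
print(h1)  # [0.5, 0.5]
```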
In one embodiment, to extract multi-level semantic features, the invention introduces radical components to enhance word vector semantic information. The radical-level semantic model adopts the text short window window_s, uses a self-attention mechanism to weight and fuse the semantic information of the Chinese characters and radicals within the text window, and computes word vectors with the CBOW model. The process of constructing the radical-level semantic model based on the text short window is as follows:
The radical r is converted into a Chinese character r* with the corresponding meaning via a character escape dictionary, giving the character sequence x of the short text after radical escape.
A self-attention mechanism is adopted to perform weighted fusion coding of the Chinese characters and radicals corresponding to the words. The self-attention weight α is computed as

α_i = softmax(f(x^T x_i))

where x_i represents the character sequence, after radical escape, corresponding to the i-th word in the text short window, i ∈ {t ± d_s | 1 ≤ d_s ≤ θ}; x^T is the transpose of x; and the similarity function f takes the dot-product form.
The code vector of each word within the text short window is

v_x = Σ_i α_i v_i

where v_i represents the code vector of the i-th element of the character sequence corresponding to the word x in the text short window.
The attention-derived code vector v_x is input to CBOW, and the hidden-layer output vector is computed as

h_2 = (1 / (2θ)) · Σ_{0 < |j| ≤ θ} v^x_{t+j}

where θ is the maximum distance between the target word w_t and a context word w_{t+j}, the total length of the text short window is 2θ, and v^x_{t+j} represents the code vector corresponding to the context of the target word w_t within the text short window.
The radical-level semantic model is described below with the text short window distance threshold θ = 1.
(1) The words w_{t−1} and w_{t+1} in the text short window are divided into Chinese characters, giving the short-text character sequence c = {c_{t−1}, c_{t+1}}. For example, the word "病情" (illness condition) is divided into the two characters "病" (disease) and "情" (condition).
(2) The radical r_{t−1}, r_{t+1} of each Chinese character in the short-text character sequence c is extracted. For example, the radicals corresponding to "病" (illness) and "情" (emotion) are "疒" and "忄".
(3) To clarify the semantic information contained in a radical, the radical r is converted into a Chinese character r* with the corresponding meaning via the character escape dictionary. For example, "病" (disease) and "心" (heart) are obtained after escaping "疒" and "忄". The character escape dictionary is shown in Table 1 below.
TABLE 1 radical character escape table
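Since the body of Table 1 is not reproduced in the text, the escape step can only be sketched with illustrative entries. Only the 疒→病 and 忄→心 pairs come from the example above; the remaining mappings and all names are assumptions:

```python
# Illustrative radical-escape dictionary (radical -> character carrying its
# meaning). Only the first two pairs come from the example in the text; the
# others are assumed entries, since Table 1 is not reproduced here.
RADICAL_ESCAPE = {
    "疒": "病",  # sickness radical -> "disease"
    "忄": "心",  # heart radical    -> "heart"
    "氵": "水",  # water radical    -> "water"  (assumed)
    "讠": "言",  # speech radical   -> "speech" (assumed)
}

def escape_radical(radical):
    """Convert a radical into a semantically meaningful Chinese character;
    fall back to the radical itself when it has no dictionary entry."""
    return RADICAL_ESCAPE.get(radical, radical)

# Characters of the word 病情 with their extracted radicals:
chars_and_radicals = [("病", "疒"), ("情", "忄")]
sequence = [c for ch, r in chars_and_radicals for c in (ch, escape_radical(r))]
print("".join(sequence))  # interleaved character / escaped-radical sequence
```

How characters and escaped radicals are ordered within the sequence x is also an assumption; the patent only states that the short text and the escaped radicals together form x.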
(4) A self-attention mechanism is adopted to perform weighted fusion coding of the Chinese characters and radicals corresponding to the words, mining the latent semantics of the radicals and enhancing the semantic information of the character sequence. The self-attention weight α is computed as

α_i = softmax(f(x^T x_i))

where x_i represents the character sequence corresponding to the i-th word in the text short window, i ∈ {t ± d_s | 1 ≤ d_s ≤ θ}, x^T is the transpose of x, and the similarity function f takes the dot-product form.
The code vector of each word in the text short window is v_x = Σ_i α_i v_i. The attention-derived code vector v_x is input to CBOW, and the hidden-layer output vector is computed as

h_2 = (1 / (2θ)) · Σ_{0 < |j| ≤ θ} v^x_{t+j}

where θ is the maximum distance between a context word and the target word in the text short window.
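Combining the two displayed formulas — dot-product self-attention weights followed by weighted fusion — gives the following sketch. The vectors are toy values (in the model these encodings are learned), and the function names are chosen for illustration:

```python
import math

def softmax(scores):
    mx = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - mx) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def attention_fuse(x, xs, vs):
    """alpha_i = softmax(f(x^T x_i)) with dot-product f; v_x = sum_i alpha_i v_i."""
    alphas = softmax([dot(x, xi) for xi in xs])
    m = len(vs[0])
    v_x = [sum(a * v[d] for a, v in zip(alphas, vs)) for d in range(m)]
    return v_x, alphas

x = [1.0, 0.0]                 # encoding of the target word's character sequence
xs = [[1.0, 0.0], [0.0, 1.0]]  # encodings of the short-window character sequences
vs = [[2.0, 0.0], [0.0, 2.0]]  # their code vectors
v_x, alphas = attention_fuse(x, xs, vs)
print(round(sum(alphas), 6))   # attention weights form a distribution: 1.0
```

The fused vectors v_x for the words in the short window are then averaged, as in the h_2 formula above, to give the radical-level hidden vector.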
Among the components of a Chinese character, the radical usually carries clear semantic information, which helps in understanding the meaning of the character. The invention therefore introduces radicals to enhance word vector semantic information, extracting multi-level semantic features to improve the accuracy of text classification; this is another important inventive point of the invention.
In one embodiment, the operation of the training step is:
An m-dimensional vector v_w is randomly generated for each word after segmentation in the corpus, and the log-likelihood function of the corpus sequence s is computed:

L(s) = Σ_{t=1}^{N} L(w_t)

where L(w_t) is the log-likelihood computed from the conditional probability of the target word w_t given its context,

L(w_t) = log p(w_t | window_l, window_s)

and the conditional probability of the target word w_t given the corresponding context is computed by the softmax function,

p(w_t | window_l, window_s) = exp(Σ_{k=1}^{2} h_k^T v_{w_t}) / Σ_{w∈V} exp(Σ_{k=1}^{2} h_k^T v_w)

where h_k^T denotes the transpose of the k-th hidden-layer vector, k = 1, 2; h_1 is the output vector of the style semantic model's hidden layer; h_2 is the output vector of the radical-level semantic model's hidden layer; v_{w_t} is the word vector of the target word; and v_w is the word vector of a context word, with V the vocabulary. Training is optimized with the objective function L(s), and the model parameters are updated to obtain the final Chinese word vector model v_w ∈ R^m, i.e., the m-dimensional word vectors corresponding to all words w in the corpus.
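The per-word term of the training objective can be sketched as follows. Exactly how the two hidden vectors combine inside the softmax is not fully legible in the source text, so the summed-score form below is an assumption, and all values are toy numbers:

```python
import math

def log_prob_target(h1, h2, target_id, W):
    """log p(w_t | context) as a softmax over the vocabulary of the summed
    scores h1.v_w + h2.v_w. (Summing the two hidden-vector scores is an
    assumption; the exact combination is not legible in the source text.)"""
    def score(v):
        return sum(a * b for a, b in zip(h1, v)) + sum(a * b for a, b in zip(h2, v))
    scores = [score(v) for v in W]
    mx = max(scores)  # log-sum-exp with max subtraction for stability
    log_z = mx + math.log(sum(math.exp(s - mx) for s in scores))
    return scores[target_id] - log_z

W = [[0.5, 0.1], [0.2, 0.9], [0.8, 0.3]]  # toy output word vectors, |V| = 3, m = 2
h1 = [0.4, 0.6]                            # style-model hidden vector
h2 = [0.1, 0.2]                            # radical-model hidden vector
L_wt = log_prob_target(h1, h2, target_id=1, W=W)
print(L_wt < 0.0)  # log-probabilities are never positive
```

Summing L_wt over all target positions gives L(s), which gradient ascent would maximize by updating the word vectors and hidden-layer parameters.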
By training on social corpora and optimizing the objective function L(s), the invention accelerates model training and improves training efficiency; during training it combines the style characteristics of the text with the characteristics of the radicals and establishes a radical escape method, thereby improving the text recognition rate. This is another important inventive point of the invention.
Fig. 2 shows a network context text recognition apparatus according to the present invention, which includes:
the modeling unit 201 is used for constructing a style semantic model based on a text long window and constructing a component-level semantic model based on a text short window;
the training unit 202 is used for training a corpus of the network context based on the style semantic model and the radical-level semantic model to obtain a Chinese word vector model of the network context;
and the recognition unit 203 recognizes the input text of the network context by using the Chinese word vector model of the network context and outputs a recognition result.
The invention aims to solve the accuracy problem of text recognition in a network context. It creatively uses two different windows during word segmentation, a text long window and a text short window ("long" and "short" being relative to each other): the long window extracts semantic information of the networked style, the short window extracts fine-grained semantic features at different levels, and the two are combined in the training stage to obtain a more accurate word vector representation and thus improve the text recognition rate of the network context. This is one of the important inventive points of the invention.
In one embodiment, the corpus sequence obtained by segmenting any corpus s in the corpus (the corpus comprising at least one corpus sequence) is s = {w_1, …, w_{t−1}, w_t, w_{t+1}, …, w_N}, where w_t is the t-th word in the segmented sequence, t = 1, …, N, and N is the total number of words in the corpus sequence. A text window is constructed with the target word w_t as its center, and the text short window is defined as

window_s = { w_{t±d_s} | 1 ≤ d_s ≤ θ }

where d_s represents the distance from a word in the text short window to the target word w_t, θ is the distance threshold of the text short window (its experimental value is between 1 and 3), and window_s represents the set of words formed by the context in the neighborhood of the target word w_t.

The text long window is defined as

window_l = { w_{t±d_l} | θ + 1 ≤ d_l ≤ β }

where d_l represents the distance from a word in the text long window to the target word w_t, with minimum value θ + 1 and maximum value β (its experimental value is between 4 and 7), β ≤ N; window_l represents the context components that are far from the target word w_t, excluding the content of the text short window.
For example, if θ = 2 and β = 6, for the sentence "today I and my friend together go to the park to watch the beautiful scenery of the cherry blossoms", with the target word w_t being "park", the text short window contains the 4 words "together", "go to", "watch", "cherry blossom", and the text long window contains the 8 words "today", "I", "and", "friend", "hold", "of", "beautiful", "scene".
In the training process, the target words to be predicted are all words in the corpus S, the sliding step length of the long and short text windows is 1, and each word of each corpus sequence in the corpus is traversed.
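The two-window construction above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the helper name `text_windows` is invented here, and the token list is an English stand-in for the patent's Chinese example sentence.

```python
def text_windows(tokens, t, theta=2, beta=6):
    """Return the short and long text windows around target index t.

    Short window: words at distance 1..theta from the target word.
    Long window: words at distance theta+1..beta (it excludes the
    short window's content, matching the patent's definition).
    """
    n = len(tokens)
    short = [tokens[t + j] for j in range(-theta, theta + 1)
             if j != 0 and 0 <= t + j < n]
    long_ = [tokens[t + j] for j in range(-beta, beta + 1)
             if abs(j) > theta and 0 <= t + j < n]
    return short, long_

# English stand-in for the patent's example sentence, theta=2, beta=6:
tokens = ["today", "I", "and", "friend", "together", "go", "to",
          "park", "watch", "beautiful", "cherry", "blossom"]
short, long_ = text_windows(tokens, tokens.index("park"))
print(short)   # the 2*theta = 4 short-window words
print(long_)   # up to 2*(beta-theta) = 8 long-window words (truncated at edges)
```

Sliding the target index t over every position of the sequence, with step 1, reproduces the traversal described above.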
In one embodiment, the process of constructing the style semantic model based on the text long window is as follows: the text long window window_l is used as the input of CBOW to compute the hidden layer vector

ĥ_l = (1 / (2(β − θ))) Σ_{θ < |j| ≤ β} v_{t+j}

where v_{t+j} represents the code vector corresponding to the context w_{t+j} of the target word within the text long window, β represents the maximum distance between the target word w_t and the context w_{t+j}, and the total length of the text long window is 2β.
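Under the definitions above, the long-window hidden vector is an average of the code vectors of long-window context words. A minimal numpy sketch; the function name and the random vectors are illustrative, and normalizing by the 2(β − θ) words actually present in the window is an assumption consistent with the window definition:

```python
import numpy as np

def long_window_hidden(vectors, t, theta=2, beta=6):
    """CBOW-style hidden vector: average the code vectors of words at
    distance theta+1..beta from target index t (the text long window)."""
    n = len(vectors)
    idx = [t + j for j in range(-beta, beta + 1)
           if abs(j) > theta and 0 <= t + j < n]
    return vectors[idx].mean(axis=0)

rng = np.random.default_rng(0)
vecs = rng.normal(size=(12, 5))      # 12 words, 5-dimensional code vectors
h_l = long_window_hidden(vecs, t=7)  # hidden vector for target index 7
```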
In order to obtain Chinese word vectors suitable for the social context style, the CBOW model is improved: the style semantic model is based on CBOW, the context window range is expanded, and the text content near the target word is ignored. This weakens local context semantic information while improving the model's ability to understand the overall text style, generating Chinese word vectors suitable for the network context. Because the invention adopts the text long window, the learned text style reflects the style trend of the corpus over a wider range rather than focusing on the context near the target word, which is another important invention point of the invention.
In one embodiment, in order to extract multi-level semantic features, the invention introduces radical components to enhance word vector semantic information. The radical-level semantic model adopts the text short window window_s, uses a self-attention mechanism to weight and fuse the semantic information of the Chinese characters and radicals in the text window, and calculates word vectors with the CBOW model. The process of constructing the radical-level semantic model based on the text short window is as follows:
The radical r is converted into a Chinese character r* with the corresponding semantics through a character escape dictionary, obtaining the word sequence x after short-text and radical escape.

A self-attention mechanism is adopted to perform weighted fusion coding on the Chinese characters and radicals corresponding to the words, where the self-attention weight α is calculated as:

α_i = softmax(f(x^T x_i))

where x_i represents the character sequence, after radical escape, corresponding to the i-th word in the text short window, i ∈ {t ± d_s | 1 ≤ d_s ≤ θ}; x^T is the transpose of x, and the similarity calculation function f adopts the dot-product form.

The code vector of each word within the text short window is:

v_x = Σ_i α_i v_i

where v_i represents the code vector of the i-th word in the word sequence corresponding to the word x within the text short window.

The code vector v_x derived from attention is input to CBOW, and the output vector of the hidden layer is computed:

ĥ_s = (1 / 2θ) Σ_{1 ≤ |j| ≤ θ} v^x_{t+j}

where θ represents the maximum distance between the target word w_t and the context w_{t+j}, the total length of the text short window is 2θ, and v^x_{t+j} represents the code vector corresponding to the context of the t-th target word within the text short window.
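The weighted fusion above can be sketched in numpy. This is a simplified sketch, not the patent's implementation: the value vectors are taken equal to the key vectors, the similarity f is the plain dot product as described, and the function names are invented here.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_fuse(x, xs):
    """Fused code vector v_x = sum_i alpha_i * v_i, with
    alpha_i = softmax(f(x^T x_i)) and f the dot product.
    x  : (m,) query vector (the target word's escaped sequence).
    xs : (k, m) code vectors of the k sequences in the short window.
    For this sketch the values v_i are the same rows as the keys x_i."""
    alpha = softmax(xs @ x)   # dot-product similarity to the query x
    return alpha @ xs         # weighted fusion of the code vectors

rng = np.random.default_rng(1)
x = rng.normal(size=4)          # query: target word's escaped sequence
xs = rng.normal(size=(3, 4))    # short-window sequences
v_x = attention_fuse(x, xs)     # fused vector; h_s then averages such
                                # vectors over the 2*theta positions
```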
The following describes the component-level semantic model with the text short window distance threshold θ = 1.

(1) The words w_{t−1} and w_{t+1} in the text short window are divided into Chinese characters, obtaining the short text character sequence c = {c_{t−1}, c_{t+1}}. For example, the word for "illness" (病情) is divided into the two characters 病 and 情.

(2) The radicals r_{t−1}, r_{t+1} of each Chinese character in the short text character sequence c are extracted. For example, the radicals corresponding to 病 ("illness") and 情 ("emotion") are 疒 and 忄.

(3) In order to clarify the semantic information contained in a radical, the radical r is converted into a Chinese character r* with the corresponding semantics through the character escape dictionary. For example, escaping 疒 and 忄 yields the characters meaning "disease" and "heart". The character escape dictionary is shown in Table 1 above.

(4) A self-attention mechanism is adopted to perform weighted fusion coding on the Chinese characters and radicals corresponding to the words, mining the latent semantics of the radicals and enhancing the semantic information of the character sequence. The self-attention weight α is calculated as

α_i = softmax(f(x^T x_i))

where x_i represents the word sequence corresponding to the i-th word in the text short window, i ∈ {t ± d_s | 1 ≤ d_s ≤ θ}; x^T is the transpose of x, and the similarity calculation function f is in dot-product form.

The code vector of each word in the text short window is v_x = Σ_i α_i v_i.

(5) The code vector v_x derived from attention is input to CBOW, and the output vector of the hidden layer is computed:

ĥ_s = (1 / 2θ) Σ_{1 ≤ |j| ≤ θ} v^x_{t+j}

where θ is the distance threshold between words in the text short window and the target word.
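Steps (1)–(3) above can be sketched with a toy dictionary. Both lookup tables below are tiny illustrative stand-ins (assumptions, not the patent's Table 1); a real system would cover the full radical inventory.

```python
# Toy tables for the example word 病情 ("illness"); the two entries follow
# the patent's worked example, but the exact escape targets are assumptions.
RADICALS = {"病": "疒", "情": "忄"}          # character -> its radical
RADICAL_ESCAPE = {"疒": "病", "忄": "心"}    # radical -> character with that meaning

def escape_word(word):
    """Split a word into characters, extract each radical, escape it."""
    chars = list(word)                               # step (1): split
    rads = [RADICALS[c] for c in chars]              # step (2): radicals
    escaped = [RADICAL_ESCAPE[r] for r in rads]      # step (3): escape
    return chars, rads, escaped

chars, rads, escaped = escape_word("病情")
```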
Among the components that form a character, the radical usually carries clear semantic information, which helps in understanding the meaning of the character. The invention therefore introduces radicals to enhance word vector semantic information, extracting multi-level semantic features to improve the accuracy of text classification, which is another important invention point of the invention.
In one embodiment, the training unit performs the operations of:
An m-dimensional vector v_w is randomly generated for each segmented word in the corpus, and the log-likelihood function of the corpus sequence s is calculated:

L(s) = Σ_{t=1}^{N} L(w_t)

where L(w_t) = log p(w_t | context(w_t)) is the log-likelihood computed from the conditional probability of the target word w_t given its context, and this conditional probability can be calculated by the softmax function:

p(w_t | context(w_t)) = exp(Σ_{k=1}^{2} ĥ_k^T v_{w_t}) / Σ_{w ∈ V} exp(Σ_{k=1}^{2} ĥ_k^T v_w)

where ĥ_k^T denotes the transpose of the k-th hidden layer vector, k = 1, 2; ĥ_1 is the output vector of the style semantic model hidden layer, ĥ_2 is the output vector of the radical-level semantic model hidden layer, v_{w_t} is the word vector of the target word, v_w is the word vector of the context, and V is the vocabulary. The model is trained by optimizing the objective function L(s) and updating the model parameters to obtain the final Chinese word vector model v_w ∈ R^m, i.e., the m-dimensional word vectors corresponding to all words w in the corpus.
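The per-word objective can be sketched as follows. This is a sketch under a stated assumption: the two hidden vectors are combined by summation before the softmax, which matches the Σ_k ĥ_k^T v_w reading of the text but may differ from the patent's exact formula; the function name and toy sizes are invented here.

```python
import numpy as np

def log_prob_target(h_style, h_radical, target_idx, W):
    """log p(w_t | context): a softmax over the vocabulary of scores
    sum_k h_k^T v_w, with h_1 the style-model hidden vector and h_2 the
    radical-level hidden vector. W is the (V, m) matrix of output word
    vectors for a vocabulary of size V."""
    scores = W @ (h_style + h_radical)
    return scores[target_idx] - np.log(np.exp(scores).sum())

rng = np.random.default_rng(2)
W = rng.normal(size=(10, 5))               # toy vocabulary of 10 words
ll = log_prob_target(rng.normal(size=5), rng.normal(size=5), 3, W)
# Summing such terms over all target words of the corpus gives L(s),
# which gradient ascent on the vectors would maximize.
```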
By training on social corpora and optimizing the objective function L(s), the invention accelerates model training and improves training efficiency; by combining the style features of the text with radical features during training and establishing a method for radical escape, it improves the text recognition rate, which is another important invention point of the invention.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus a necessary general hardware platform. Based on such understanding, the technical solutions of the present application, or the portions that contribute to the prior art, may be embodied in the form of a software product, which may be stored in a storage medium such as ROM/RAM, a magnetic disk, or an optical disk, and which includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments, or some portions of the embodiments, of the present application.
Finally, it should be noted that although the present invention has been described in detail with reference to the above embodiments, those skilled in the art should understand that modifications and equivalents may be made without departing from the spirit and scope of the invention, and it is intended that such modifications and equivalents be covered by the appended claims.
Claims (11)
1. A method for text recognition of a network context, the method comprising:
a modeling step, namely constructing a style semantic model based on a text long window and constructing a component-level semantic model based on a text short window;
a training step, namely training a corpus of the network context based on the style semantic model and the radical-level semantic model to obtain a Chinese word vector model of the network context;
and a step of recognizing the input text of the network context by using the Chinese word vector model of the network context and outputting a recognition result.
2. The method of claim 1, wherein the corpus sequence obtained by segmenting any corpus s in the corpus is s = {w_1, …, w_{t-1}, w_t, w_{t+1}, …, w_N}, where w_t is the t-th word in the segmented sequence; w_t is set as the target word, t = 1, …, N, and N is the total number of words in the corpus sequence; a text window is constructed with the target word w_t as the center, and the text short window is defined as:

window_s = {w_{t±d_s} | 1 ≤ d_s ≤ θ}

where d_s represents the distance from a word in the text short window to the target word w_t; the distance threshold of the text short window is θ, and window_s represents the set of words made up of the context in the neighborhood of the target word w_t;

the text long window is defined as:

window_l = {w_{t±d_l} | θ + 1 ≤ d_l ≤ β}

where d_l represents the distance from a word in the text long window to the target word w_t, with minimum value θ + 1 and maximum value β, β ≤ N; window_l is made up of the context farther from the target word w_t, and does not include the content of the text short window.
3. The method according to claim 2, wherein the process of constructing the style semantic model based on the text long window is as follows: the text long window window_l is used as the input of CBOW to compute the hidden layer vector ĥ_l = (1 / (2(β − θ))) Σ_{θ < |j| ≤ β} v_{t+j}.
4. The method according to claim 3, wherein the process of constructing the component-level semantic model based on the text short window is as follows:

the radical r is converted into a Chinese character r* with the corresponding semantics through a character escape dictionary, obtaining the word sequence x after short-text and radical escape;

a self-attention mechanism is adopted to perform weighted fusion coding on the Chinese characters and radicals corresponding to the words, where the self-attention weight α is calculated as:

α_i = softmax(f(x^T x_i))

where x_i represents the character sequence, after radical escape, corresponding to the i-th word in the text short window, i ∈ {t ± d_s | 1 ≤ d_s ≤ θ}; x^T is the transpose of x, and the similarity calculation function f adopts the dot-product form;

the code vector of each word within the text short window is:

v_x = Σ_i α_i v_i

where v_i represents the code vector of the i-th word in the word sequence corresponding to the word x within the text short window;

the code vector v_x derived from attention is input to CBOW, and the output vector of the hidden layer ĥ_s = (1 / 2θ) Σ_{1 ≤ |j| ≤ θ} v^x_{t+j} is computed.
5. The method of claim 4, wherein the training step operates to:
an m-dimensional vector v_w is randomly generated for each segmented word in the corpus, and the log-likelihood function of the corpus sequence s is calculated:

L(s) = Σ_{t=1}^{N} L(w_t)

where L(w_t) = log p(w_t | context(w_t)) is the log-likelihood computed from the conditional probability of the target word w_t given its context, and this conditional probability can be calculated by the softmax function:

p(w_t | context(w_t)) = exp(Σ_{k=1}^{2} ĥ_k^T v_{w_t}) / Σ_{w ∈ V} exp(Σ_{k=1}^{2} ĥ_k^T v_w)

where ĥ_k^T denotes the transpose of the k-th hidden layer vector, k = 1, 2; ĥ_1 is the output vector of the style semantic model hidden layer, ĥ_2 is the output vector of the radical-level semantic model hidden layer, v_{w_t} is the word vector of the target word, and v_w is the word vector of the context; the model is trained by optimizing the objective function L(s) and updating the model parameters to obtain the final Chinese word vector model v_w ∈ R^m.
6. An apparatus for network context text recognition, the apparatus comprising:
the modeling unit is used for constructing a style semantic model based on the text long window and constructing a component-level semantic model based on the text short window;
the training unit is used for training a corpus of the network context based on the style semantic model and the radical-level semantic model to obtain a Chinese word vector model of the network context;
and the recognition unit is used for recognizing the input text of the network context by using the Chinese word vector model of the network context and outputting a recognition result.
7. The apparatus according to claim 6, wherein the corpus sequence obtained by segmenting any corpus s in the corpus is s = {w_1, …, w_{t-1}, w_t, w_{t+1}, …, w_N}, where w_t is the t-th word in the segmented sequence; w_t is set as the target word, t = 1, …, N, and N is the total number of words in the corpus sequence; a text window is constructed with the target word w_t as the center, and the text short window is defined as:

window_s = {w_{t±d_s} | 1 ≤ d_s ≤ θ}

where d_s represents the distance from a word in the text short window to the target word w_t; the distance threshold of the text short window is θ, and window_s represents the set of words made up of the context in the neighborhood of the target word w_t;

the text long window is defined as:

window_l = {w_{t±d_l} | θ + 1 ≤ d_l ≤ β}

where d_l represents the distance from a word in the text long window to the target word w_t, with minimum value θ + 1 and maximum value β, β ≤ N; window_l is made up of the context farther from the target word w_t, and does not include the content of the text short window.
8. The apparatus according to claim 7, wherein the process of constructing the style semantic model based on the text long window is as follows: the text long window window_l is used as the input of CBOW to compute the hidden layer vector ĥ_l = (1 / (2(β − θ))) Σ_{θ < |j| ≤ β} v_{t+j}.
9. The apparatus according to claim 7, wherein the process of constructing the component-level semantic model based on the text short window is as follows:

the radical r is converted into a Chinese character r* with the corresponding semantics through a character escape dictionary, obtaining the word sequence x after short-text and radical escape;

a self-attention mechanism is adopted to perform weighted fusion coding on the Chinese characters and radicals corresponding to the words, where the self-attention weight α is calculated as:

α_i = softmax(f(x^T x_i))

where x_i represents the character sequence, after radical escape, corresponding to the i-th word in the text short window, i ∈ {t ± d_s | 1 ≤ d_s ≤ θ}; x^T is the transpose of x, and the similarity calculation function f adopts the dot-product form;

the code vector of each word within the text short window is:

v_x = Σ_i α_i v_i

where v_i represents the code vector of the i-th word in the word sequence corresponding to the word x within the text short window;

the code vector v_x derived from attention is input to CBOW, and the output vector of the hidden layer ĥ_s = (1 / 2θ) Σ_{1 ≤ |j| ≤ θ} v^x_{t+j} is computed.
10. The apparatus of claim 9, wherein the training unit performs the operations of:
an m-dimensional vector v_w is randomly generated for each segmented word in the corpus, and the log-likelihood function of the corpus sequence s is calculated:

L(s) = Σ_{t=1}^{N} L(w_t)

where L(w_t) = log p(w_t | context(w_t)) is the log-likelihood computed from the conditional probability of the target word w_t given its context, and this conditional probability can be calculated by the softmax function:

p(w_t | context(w_t)) = exp(Σ_{k=1}^{2} ĥ_k^T v_{w_t}) / Σ_{w ∈ V} exp(Σ_{k=1}^{2} ĥ_k^T v_w)

where ĥ_k^T denotes the transpose of the k-th hidden layer vector, k = 1, 2; ĥ_1 is the output vector of the style semantic model hidden layer, ĥ_2 is the output vector of the radical-level semantic model hidden layer, v_{w_t} is the word vector of the target word, and v_w is the word vector of the context; the model is trained by optimizing the objective function L(s) and updating the model parameters to obtain the final Chinese word vector model v_w ∈ R^m.
11. A computer-readable storage medium, characterized in that the storage medium has stored thereon computer program code which, when executed by a computer, performs the method of any of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010396183.9A CN111581970B (en) | 2020-05-12 | 2020-05-12 | Text recognition method, device and storage medium for network context |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010396183.9A CN111581970B (en) | 2020-05-12 | 2020-05-12 | Text recognition method, device and storage medium for network context |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111581970A true CN111581970A (en) | 2020-08-25 |
CN111581970B CN111581970B (en) | 2023-01-24 |
Family
ID=72112139
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010396183.9A Active CN111581970B (en) | 2020-05-12 | 2020-05-12 | Text recognition method, device and storage medium for network context |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111581970B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112765989A (en) * | 2020-11-17 | 2021-05-07 | 中国信息通信研究院 | Variable-length text semantic recognition method based on representation classification network |
CN113190643A (en) * | 2021-04-13 | 2021-07-30 | 安阳师范学院 | Information generation method, terminal device, and computer-readable medium |
CN113408289A (en) * | 2021-06-29 | 2021-09-17 | 广东工业大学 | Multi-feature fusion supply chain management entity knowledge extraction method and system |
CN113449490A (en) * | 2021-06-22 | 2021-09-28 | 上海明略人工智能(集团)有限公司 | Document information summarizing method, system, electronic equipment and medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140278341A1 (en) * | 2013-03-13 | 2014-09-18 | Red Hat, Inc. | Translation assessment |
CN107273355A (en) * | 2017-06-12 | 2017-10-20 | 大连理工大学 | A kind of Chinese word vector generation method based on words joint training |
CN107305768A (en) * | 2016-04-20 | 2017-10-31 | 上海交通大学 | Easy wrongly written character calibration method in interactive voice |
CN107977361A (en) * | 2017-12-06 | 2018-05-01 | 哈尔滨工业大学深圳研究生院 | The Chinese clinical treatment entity recognition method represented based on deep semantic information |
CN109918677A (en) * | 2019-03-21 | 2019-06-21 | 广东小天才科技有限公司 | A kind of method and system of English word semanteme parsing |
CN111091001A (en) * | 2020-03-20 | 2020-05-01 | 支付宝(杭州)信息技术有限公司 | Method, device and equipment for generating word vector of word |
-
2020
- 2020-05-12 CN CN202010396183.9A patent/CN111581970B/en active Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140278341A1 (en) * | 2013-03-13 | 2014-09-18 | Red Hat, Inc. | Translation assessment |
CN107305768A (en) * | 2016-04-20 | 2017-10-31 | 上海交通大学 | Easy wrongly written character calibration method in interactive voice |
CN107273355A (en) * | 2017-06-12 | 2017-10-20 | 大连理工大学 | A kind of Chinese word vector generation method based on words joint training |
CN107977361A (en) * | 2017-12-06 | 2018-05-01 | 哈尔滨工业大学深圳研究生院 | The Chinese clinical treatment entity recognition method represented based on deep semantic information |
CN109918677A (en) * | 2019-03-21 | 2019-06-21 | 广东小天才科技有限公司 | A kind of method and system of English word semanteme parsing |
CN111091001A (en) * | 2020-03-20 | 2020-05-01 | 支付宝(杭州)信息技术有限公司 | Method, device and equipment for generating word vector of word |
Non-Patent Citations (2)
Title |
---|
LI Fenglin et al.: "Research Progress on Semantic Representation of Word Vectors" (词向量语义表示研究进展), Information Science (《情报科学》) *
HU Keqi: "Research on Short Text Classification Based on Deep Learning" (基于深度学习的短文本分类研究), China Masters' Theses Full-text Database (《中国优秀硕士学位论文全文数据库》) *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112765989A (en) * | 2020-11-17 | 2021-05-07 | 中国信息通信研究院 | Variable-length text semantic recognition method based on representation classification network |
CN113190643A (en) * | 2021-04-13 | 2021-07-30 | 安阳师范学院 | Information generation method, terminal device, and computer-readable medium |
CN113449490A (en) * | 2021-06-22 | 2021-09-28 | 上海明略人工智能(集团)有限公司 | Document information summarizing method, system, electronic equipment and medium |
CN113449490B (en) * | 2021-06-22 | 2024-01-26 | 上海明略人工智能(集团)有限公司 | Document information summarizing method, system, electronic equipment and medium |
CN113408289A (en) * | 2021-06-29 | 2021-09-17 | 广东工业大学 | Multi-feature fusion supply chain management entity knowledge extraction method and system |
CN113408289B (en) * | 2021-06-29 | 2024-04-16 | 广东工业大学 | Multi-feature fusion supply chain management entity knowledge extraction method and system |
Also Published As
Publication number | Publication date |
---|---|
CN111581970B (en) | 2023-01-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110502749B (en) | Text relation extraction method based on double-layer attention mechanism and bidirectional GRU | |
CN110609891B (en) | Visual dialog generation method based on context awareness graph neural network | |
CN108984526B (en) | Document theme vector extraction method based on deep learning | |
CN111581970B (en) | Text recognition method, device and storage medium for network context | |
CN108875807B (en) | Image description method based on multiple attention and multiple scales | |
CN108763284B (en) | Question-answering system implementation method based on deep learning and topic model | |
WO2021072875A1 (en) | Intelligent dialogue generation method, device, computer apparatus and computer storage medium | |
CN107943784B (en) | Relationship extraction method based on generation of countermeasure network | |
CN111738251B (en) | Optical character recognition method and device fused with language model and electronic equipment | |
Zhang et al. | Radical analysis network for zero-shot learning in printed Chinese character recognition | |
CN110647612A (en) | Visual conversation generation method based on double-visual attention network | |
US20220343139A1 (en) | Methods and systems for training a neural network model for mixed domain and multi-domain tasks | |
CN108985370B (en) | Automatic generation method of image annotation sentences | |
CN111816169B (en) | Method and device for training Chinese and English hybrid speech recognition model | |
US11475225B2 (en) | Method, system, electronic device and storage medium for clarification question generation | |
CN109308316B (en) | Adaptive dialog generation system based on topic clustering | |
CN110991290A (en) | Video description method based on semantic guidance and memory mechanism | |
CN117234341B (en) | Virtual reality man-machine interaction method and system based on artificial intelligence | |
CN114596844A (en) | Acoustic model training method, voice recognition method and related equipment | |
CN112131367A (en) | Self-auditing man-machine conversation method, system and readable storage medium | |
CN110968725A (en) | Image content description information generation method, electronic device, and storage medium | |
CN114781375A (en) | Military equipment relation extraction method based on BERT and attention mechanism | |
CN115617955A (en) | Hierarchical prediction model training method, punctuation symbol recovery method and device | |
CN115064154A (en) | Method and device for generating mixed language voice recognition model | |
CN112528989B (en) | Description generation method for semantic fine granularity of image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |