CN113806540A - Text labeling method and device, electronic equipment and storage medium - Google Patents

Text labeling method and device, electronic equipment and storage medium

Info

Publication number
CN113806540A
CN113806540A (application CN202111098192.0A)
Authority
CN
China
Prior art keywords
text
loss value
texts
preset
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111098192.0A
Other languages
Chinese (zh)
Other versions
CN113806540B (en)
Inventor
史文鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Bank Co Ltd
Original Assignee
Ping An Bank Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Bank Co Ltd filed Critical Ping An Bank Co Ltd
Priority to CN202111098192.0A priority Critical patent/CN113806540B/en
Publication of CN113806540A publication Critical patent/CN113806540A/en
Application granted granted Critical
Publication of CN113806540B publication Critical patent/CN113806540B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00: Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30: Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/35: Clustering; Classification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/048: Activation functions
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention relates to artificial intelligence and digital medical technology, and discloses a text labeling method comprising: constructing a first sequence set composed of similar semantic texts; randomly masking texts in the first sequence set to obtain a second sequence set; pre-training a pre-constructed semantic recognition model with the second sequence set to obtain predicted masked texts and the predicted text similarity of each prediction sequence; ending the pre-training when a first loss value, between the predicted masked texts and the corresponding original texts in the first sequence set, and a second loss value, between the predicted text similarity and the text similarity corresponding to the first sequence set, meet a preset condition; and labeling the text to be labeled with the pre-trained semantic recognition model. The invention also provides a text labeling device, an electronic device and a storage medium. The method and the device can resolve inconsistent labels among similar semantic texts and improve the accuracy of text labeling.

Description

Text labeling method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of artificial intelligence and digital medical technology, and in particular to a text labeling method and apparatus, an electronic device and a computer-readable storage medium.
Background
Apps with large user bases generally provide news information services. Such news content is mostly purchased from outside sources and often arrives unlabeled or with disordered labels, which makes reviewing and publishing the news difficult.
Currently, there are two main methods for tagging news information:
One is to label the entire news corpus manually, which is costly in labor and prone to omitted or erroneous label assignments.
The other uses a Vector Space Model (VSM) to convert each news item from text into a vector in a vector space, measures the similarity between different news items by computing the similarity between their vectors, clusters the news items with the K-means method, and generates labels from the clustering result. However, for texts whose wording differs but whose semantics are the same, this method labels texts with low accuracy and assigns inconsistent labels to semantically equivalent texts, so the accuracy of labeling texts such as news information still needs to be improved.
Disclosure of Invention
The invention provides a text labeling method and apparatus, an electronic device and a computer-readable storage medium, with the main aim of improving the accuracy of text labeling.
In order to achieve the above object, the present invention provides a text labeling method, which includes:
constructing a first sequence set composed of similar semantic texts, and performing a random masking operation on the texts in the first sequence set to obtain a second sequence set comprising masked texts;
pre-training a pre-constructed semantic recognition model on masked text prediction and text similarity calculation by using the second sequence set, to obtain a prediction sequence containing the predicted masked texts and the predicted similarity of the similar semantic texts in the prediction sequence;
calculating a first loss value between the predicted masked texts and the corresponding original texts in the first sequence set by using a preset first loss function, and calculating a second loss value between the predicted similarity and the original similarity of the similar semantic texts in the first sequence set by using a preset second loss function;
combining the first loss value and the second loss value into an output loss value of the pre-training, and judging whether the output loss value meets a preset condition;
if the output loss value does not meet the preset condition, adjusting parameters of the semantic recognition model and returning to the step of pre-training the pre-constructed semantic recognition model on masked text prediction and text similarity calculation;
if the output loss value meets the preset condition, ending the pre-training to obtain a pre-trained semantic recognition model;
and performing a labeling operation on the text to be labeled by using the pre-trained semantic recognition model to obtain the label of the text to be labeled.
Optionally, the constructing a first sequence set composed of similar semantic text comprises:
acquiring a similar semantic text set from a preset similar semantic text library;
and combining every two similar semantic texts in the similar semantic text sets into a sequence according to a preset combination mode until all texts in the similar semantic text sets are combined to obtain a first sequence set consisting of the similar semantic texts.
Optionally, the performing a random masking operation on the text in the first sequence set includes:
randomly selecting a text with a preset shielding text proportion in the first sequence set to obtain a text to be shielded;
covering the text with a preset coverage ratio in the text to be shielded by using a preset mask to obtain a mask covered text;
replacing the text with a preset replacement ratio in the text to be shielded by using the randomly generated text to obtain a replacement text;
and summarizing the mask covering text and the replacing text to obtain a mask text.
Optionally, the pre-training of masking text prediction and text similarity calculation on the pre-constructed semantic recognition model includes:
performing text feature extraction on the second sequence set by using a pre-constructed semantic recognition model to obtain a text feature set;
performing activation calculation on the text feature set by using a preset activation function to obtain a predicted shielding text;
combining the predicted masked text with the unmasked text in the second sequence set into a prediction sequence;
and performing word vector conversion on each character in the prediction sequence, and calculating the prediction similarity of similar semantic texts in the prediction sequence according to the word vector corresponding to each character.
Optionally, the calculating the prediction similarity of the similar semantic texts in the prediction sequence includes:
respectively calculating a word vector mean value corresponding to each similar semantic text in the prediction sequence;
and calculating the absolute difference value of the mean value of the word vectors among all similar semantic texts, and taking the absolute difference value as the prediction similarity of the prediction sequence.
Optionally, the combining the first loss value and the second loss value into the pre-trained loss value comprises:
performing weighting operation on the first loss value by using a preset first loss weight to obtain a weighted first loss value;
performing weighting operation on the second loss value by using a preset second loss weight to obtain a weighted second loss value;
and adding the weighted first loss value and the weighted second loss value to obtain the pre-trained output loss value.
Optionally, the text to be labeled is labeled by using the pre-trained semantic recognition model, and the method further includes:
performing text feature extraction on the text to be labeled by using the pre-trained semantic recognition model to obtain a text feature set;
and calculating probability values between the text feature set and a plurality of preset text labels by using a pre-trained activation function, and selecting the labels corresponding to the probability values larger than a preset probability threshold value as the labels of the text to be labeled.
In order to solve the above problem, the present invention further provides a text labeling apparatus, including:
the pre-training corpus construction module is used for constructing a first sequence set consisting of similar semantic texts and executing random masking operation on the texts in the first sequence set to obtain a second sequence set comprising masked texts;
the pre-training prediction module is used for performing pre-training of shielding text prediction and text similarity calculation on the pre-constructed semantic recognition model by utilizing the second sequence set to obtain a prediction sequence containing predicted shielding texts and prediction similarity of similar semantic texts in the prediction sequence;
the pre-training loss calculation module is used for calculating a first loss value between the predicted shielding text and the original text corresponding to the first sequence set by using a preset first loss function, and calculating a second loss value between the predicted similarity and the original similarity of the similar semantic text in the first sequence set by using a preset second loss function;
a pre-training end judgment module, configured to combine the first loss value and the second loss value into the pre-training output loss value, and judge whether the output loss value meets a preset condition; if the output loss value does not meet the preset condition, adjusting parameters of the semantic recognition model, and returning to the pre-training step of performing occlusion text prediction and text similarity calculation on the pre-constructed semantic recognition model; if the output loss value meets the preset condition, quitting the pre-training to obtain a semantic recognition model completing the pre-training;
and the pre-training model application module is used for executing the labeling operation on the text to be labeled by utilizing the semantic recognition model which completes the pre-training to obtain the label of the text to be labeled.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one instruction; and
and the processor executes the instructions stored in the memory to realize the text labeling method.
In order to solve the above problem, the present invention further provides a computer-readable storage medium, in which at least one instruction is stored, and the at least one instruction is executed by a processor in an electronic device to implement the text labeling method described above.
According to the method, a pre-constructed semantic recognition model is pre-trained on masked text prediction and text similarity calculation with a sequence set that is composed of similar semantic texts and contains masked texts, so that the pre-trained semantic recognition model can recognize and generate similar semantic texts; the pre-trained model is then used to label the text to be labeled. This resolves inconsistent labels among similar semantic texts and improves the accuracy of text labeling.
Drawings
Fig. 1 is a schematic flow chart of a text labeling method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating a detailed implementation of one step in the text labeling method shown in FIG. 1;
FIG. 3 is a flowchart illustrating a detailed implementation of one step in the text labeling method shown in FIG. 1;
FIG. 4 is a flowchart illustrating a detailed implementation of one step in the text labeling method shown in FIG. 1;
FIG. 5 is a functional block diagram of a text labeling apparatus according to an embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device for implementing the text labeling method according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the application provides a text labeling method. The execution subject of the text labeling method includes, but is not limited to, at least one of electronic devices such as a server and a terminal that can be configured to execute the method provided by the embodiments of the present application. In other words, the text labeling method may be executed by software or hardware installed in the terminal device or the server device, and the software may be a blockchain platform. The server side can be an independent server, and can also be a cloud server providing basic cloud computing services such as cloud service, a cloud database, cloud computing, cloud functions, cloud storage, Network service, cloud communication, middleware service, domain name service, security service, Content Delivery Network (CDN), big data and an artificial intelligence platform.
Referring to fig. 1, a schematic flow chart of a text labeling method according to an embodiment of the present invention is shown. In this embodiment, the text labeling method includes:
s1, constructing a first sequence set composed of similar semantic texts, and performing random masking operation on the texts in the first sequence set to obtain a second sequence set comprising masked texts;
In the embodiment of the invention, similar semantic texts are texts whose content is organized differently but which express the same or a similar meaning; for example, "bank card transaction" and "debit card application" are two texts with similar semantics.
Preferably, in an embodiment of the present invention, each sequence in the first sequence set includes two texts with similar semantics.
In the embodiment of the present invention, the random masking operation follows the BERT model and is a common way of processing a corpus in Natural Language Processing (NLP): some characters of the similar semantic texts in the sequence set are randomly selected, and the selected characters are covered or replaced with other characters, which improves the robustness of the model.
In detail, referring to fig. 2, the S1 includes:
s11, acquiring a similar semantic text set from a preset similar semantic text library;
s12, combining every two similar semantic texts in the similar semantic text set into a sequence according to a preset combination mode until all texts in the similar semantic text set are combined to obtain a first sequence set consisting of the similar semantic texts;
s13, randomly selecting texts with preset shielding text proportions in the first sequence set to obtain texts to be shielded;
s14, covering the text with the preset covering proportion in the text to be covered by using a preset mask to obtain a mask covered text;
s15, replacing the text with a preset replacement ratio in the text to be shielded by using the randomly generated text to obtain a replacement text;
and S16, summarizing the mask covered text and the replacement text to obtain a mask text.
In the embodiment of the invention, the preset similar semantic text library can be an intelligent question-answering knowledge base from the service industry or from a government convenience system. For example, in a bank's intelligent question-answering knowledge base, one standard question usually corresponds to several similar questions: for the standard question "credit card issuing amount", the similar questions may be "how much is my credit card limit", "how much is the limit of the credit card I newly applied for", "what is the lowest credit card limit", "how much limit can I reach with this card", and so on. The standard question and any of its similar questions form a pair of similar semantic texts.
In the embodiment of the invention, the preset combination mode may be a permutation-and-combination mode, or the standard question may be combined with each similar question one by one into a sequence, and the order of the standard question and the similar question within each sequence may then be swapped to form a new sequence.
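As an illustration only, the following Python sketch pairs a standard question with each of its similar questions and also adds the order-swapped sequences; the function name and the toy data are hypothetical, not taken from the patent.

    def build_first_sequence_set(standard_question, similar_questions):
        """Pair the standard question with each similar question, in both orders."""
        sequences = []
        for similar in similar_questions:
            sequences.append((standard_question, similar))  # standard question first
            sequences.append((similar, standard_question))  # order swapped
        return sequences

    # Hypothetical example data
    standard = "credit card issuing amount"
    similars = ["how much is my credit card limit", "what is the lowest credit card limit"]
    first_sequence_set = build_first_sequence_set(standard, similars)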
In the embodiment of the present invention, assume that the preset masked-text ratio is 15%, the preset covering ratio is 80%, the preset replacement ratio is 10%, and the preset mask is [mask]. That is, the random masking operation is performed on 15% of the texts in the sequence set; among these texts to be masked, 80% are directly covered by [mask], 10% are replaced with other randomly generated characters, and the remaining 10% are kept unchanged.
For example, in the sequence "bank card transaction debit card application", assuming that "transaction" is the text to be masked, under the covering case "bank card transaction debit card application" is converted into "bank card [mask] [mask] debit card application", and under the replacement case it is converted into "bank card pipelining debit card application".
In the embodiment of the invention, the random masking operation on the similar semantic texts in the sequence set keeps the original text alongside text covered by the preset mask, and randomly replaces a small amount of text to generate a small amount of noise, thereby improving the learning ability of the model in training.
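A minimal Python sketch of the 15%/80%/10%/10% masking scheme described above; the tokenization into whitespace-separated units and the random-replacement vocabulary are simplifying assumptions for illustration.

    import random

    MASK_TOKEN = "[mask]"

    def random_mask(tokens, mask_ratio=0.15, cover_ratio=0.8, replace_ratio=0.1,
                    random_vocab=("pipelining", "account", "balance")):
        """Randomly select mask_ratio of the tokens; of those, cover cover_ratio
        with [mask], replace replace_ratio with random tokens, keep the rest."""
        tokens = list(tokens)
        n_select = max(1, int(len(tokens) * mask_ratio))
        selected = random.sample(range(len(tokens)), n_select)
        for idx in selected:
            r = random.random()
            if r < cover_ratio:                      # 80%: cover with [mask]
                tokens[idx] = MASK_TOKEN
            elif r < cover_ratio + replace_ratio:    # 10%: replace with a random token
                tokens[idx] = random.choice(random_vocab)
            # remaining 10%: keep the original token unchanged
        return tokens

    # Hypothetical example
    print(random_mask("bank card transaction debit card application".split()))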
S2, pre-training the pre-constructed semantic recognition model on masked text prediction and text similarity calculation by using the second sequence set, to obtain a prediction sequence containing the predicted masked texts and the predicted similarity of the similar semantic texts in the prediction sequence;
In the embodiment of the invention, the pre-constructed semantic recognition model may be a semantic recognition model built on BERT or a UniLM semantic recognition model. Compared with the BERT-based model, the UniLM semantic recognition model can handle not only natural language understanding but also natural language generation, and is a pre-training model that can both read and automatically generate text.
In the embodiment of the invention, because the text content of the second sequence set is composed of natural language, if the text content is directly analyzed, a large amount of computing resources are occupied, and the analysis efficiency is low, therefore, the text content can be converted into a text vector matrix, and further the text content expressed by the natural language is converted into a numerical form.
Preferably, before the pre-training of the masking text prediction and the text similarity calculation on the pre-constructed semantic recognition model, the method further includes: adding separators at adjacent positions of the similar semantic texts of each sequence; adding a separator at the end position of the text of each sequence; performing word vector conversion on each sequence to obtain a word vector set; and executing position coding operation on the word vector set to obtain a word vector set embedded with position information.
For example, suppose a sequence consists of the two similar semantic texts "bank card transaction" and "debit card application". Adding a start symbol at the beginning of the text of the sequence gives "[CLS] bank card transaction debit card application". Adding a separator between the two adjacent similar semantic texts gives "[CLS] bank card transaction [SEP] debit card application". Adding a separator at the end of the text of the sequence gives "[CLS] bank card transaction [SEP] debit card application [SEP]".
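A small Python sketch of the special-token layout described above; it assumes whitespace-joined strings rather than any particular tokenizer.

    def build_model_input(text_a, text_b):
        """Join two similar semantic texts with BERT-style special tokens:
        [CLS] text_a [SEP] text_b [SEP]."""
        return f"[CLS] {text_a} [SEP] {text_b} [SEP]"

    print(build_model_input("bank card transaction", "debit card application"))
    # [CLS] bank card transaction [SEP] debit card application [SEP]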
In the embodiment of the present invention, the word vector conversion refers to converting each character in the sequence set into a word vector; the word vector set records the word information of each character. Each character in the masked sequence set can be converted into a word vector through a preset word vector conversion model to obtain a word vector set, and the preset word vector conversion model can adopt classical word vector models such as Word2vec, GloVe and ELMo. Position information embedding can be performed using BERT-style position embedding.
It can be understood that the same word generally expresses different meanings when it appears in different positions in a text; therefore, the position information of each word in the text needs to be embedded into the word vector set to obtain a word vector set with embedded position information.
In an embodiment of the present invention, the preset position encoding formula includes:
PE(pos, 2i) = sin(pos / 10000^(2i/d))
PE(pos, 2i+1) = cos(pos / 10000^(2i/d))
where pos represents the position number of each word in the text, e.g., (1, 2, 3, 4, ..., n), i represents the dimension index within the word vector, e.g., (0, 1, 2, 3, ...), and d represents the dimension of the word vector.
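The following Python/NumPy sketch implements the standard sinusoidal position encoding consistent with the formula above; the dimension size and sequence length are illustrative values, not taken from the patent.

    import numpy as np

    def sinusoidal_position_encoding(seq_len, dim):
        """PE[pos, 2i] = sin(pos / 10000**(2i/dim)); PE[pos, 2i+1] = cos(...)."""
        positions = np.arange(seq_len)[:, None]             # (seq_len, 1)
        dims = np.arange(dim)[None, :]                       # (1, dim)
        angle_rates = 1.0 / np.power(10000, (2 * (dims // 2)) / dim)
        angles = positions * angle_rates
        pe = np.zeros((seq_len, dim))
        pe[:, 0::2] = np.sin(angles[:, 0::2])                # even indices: sine
        pe[:, 1::2] = np.cos(angles[:, 1::2])                # odd indices: cosine
        return pe

    # Embed position information into a (hypothetical) word vector set
    word_vectors = np.random.randn(12, 64)                   # 12 tokens, 64-dim vectors
    word_vectors_with_position = word_vectors + sinusoidal_position_encoding(12, 64)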
In detail, referring to fig. 3, the S2 includes:
s21, extracting text features of the second sequence set by using a pre-constructed semantic recognition model to obtain a text feature vector set;
s22, performing activation calculation on the text feature vector set by using a preset activation function to obtain a predicted occlusion text;
s23, combining the predicted occluded text and the unoccluded text in the second sequence set into a predicted sequence;
and S24, performing word vector conversion on each character in the prediction sequence, and calculating the prediction similarity of the similar semantic texts in the prediction sequence according to the word vector corresponding to each character.
In the embodiment of the present invention, for example, the sequence containing the masked text is "bank card [mask] [mask] debit card application", where [mask] [mask] is the masked text; assuming that the predicted masked text is "account opening", the prediction sequence is "bank card account opening debit card application".
In this embodiment of the present invention, extracting text features from the sequence set containing the masked text further includes: converting the word vector set embedded with position information into a position vector matrix; converting the text features in the position vector matrix into a text feature association matrix by using a multi-head attention mechanism in the pre-constructed semantic recognition model; connecting the position vector matrix and the text feature association matrix by using a residual connection layer in the pre-constructed semantic recognition model to obtain a text feature close-association matrix; and performing dimension reduction on the text feature close-association matrix by using a fully connected layer in the pre-constructed semantic recognition model to obtain a text feature vector matrix.
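A minimal PyTorch sketch of the feature-extraction block described above (multi-head attention, residual connection, then a fully connected dimension-reduction layer); the layer sizes and module layout are illustrative assumptions, not the patent's exact architecture.

    import torch
    import torch.nn as nn

    class FeatureExtractor(nn.Module):
        """Multi-head attention -> residual connection -> fully connected reduction."""
        def __init__(self, dim=64, num_heads=4, reduced_dim=32):
            super().__init__()
            self.attention = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.reduce = nn.Linear(dim, reduced_dim)

        def forward(self, position_vectors):
            # Association matrix produced by multi-head attention
            attended, _ = self.attention(position_vectors, position_vectors, position_vectors)
            # Residual connection with the position vector matrix
            close_association = position_vectors + attended
            # Dimension reduction through the fully connected layer
            return self.reduce(close_association)

    # Hypothetical usage: batch of 2 sequences, 12 tokens, 64-dim vectors
    features = FeatureExtractor()(torch.randn(2, 12, 64))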
In detail, the calculating the prediction similarity of the similar semantic texts in the prediction sequence comprises: respectively calculating a word vector mean value corresponding to each similar semantic text in the prediction sequence; and calculating the absolute difference value of the mean value of the word vectors among all similar semantic texts, and taking the absolute difference value as the prediction similarity of the prediction sequence.
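A short NumPy sketch of the similarity computation just described: the mean word vector is taken for each of the two texts in a prediction sequence, and the absolute difference of the means serves as the predicted similarity. Whether that difference is further reduced to a scalar is not specified here, so the final mean is an assumption.

    import numpy as np

    def prediction_similarity(word_vectors_a, word_vectors_b):
        """word_vectors_a/b: (num_chars, dim) word vectors of the two similar
        semantic texts in a prediction sequence."""
        mean_a = word_vectors_a.mean(axis=0)       # word vector mean of text A
        mean_b = word_vectors_b.mean(axis=0)       # word vector mean of text B
        abs_diff = np.abs(mean_a - mean_b)         # element-wise absolute difference
        return abs_diff.mean()                     # reduce to a scalar (assumption)

    sim = prediction_similarity(np.random.randn(5, 64), np.random.randn(6, 64))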
S3, calculating a first loss value between the predicted masked text and the corresponding original text in the first sequence set by using a preset first loss function, and calculating a second loss value between the predicted similarity and the original similarity of the similar semantic texts in the first sequence set by using a preset second loss function;
In this embodiment of the present invention, each sequence in the second sequence set is composed of two similar semantic texts. Therefore, the loss function includes two parts, namely the preset first loss function and the preset second loss function: the preset first loss function is used to calculate the loss value between the predicted masked text and the corresponding original text in the first sequence set, and the preset second loss function is used to calculate the loss value between the similarity of the two similar semantic texts in the prediction sequence and the similarity of the two similar semantic texts in the corresponding original sequence of the first sequence set.
The preset first loss function (formula image not reproduced) is defined in terms of the following quantities: num is the number of sequences in the second sequence set, pre is the predicted masked text, grt is the corresponding original text in the first sequence set, and i indexes the i-th sequence in the second sequence set.
The preset second loss function (formula images not reproduced) is defined in terms of the following quantities: V ∈ R^(b×d) is the vector matrix output in one pre-training pass, where b is the number of sequences in one pre-training pass and d is the vector dimension; V is normalized before the b×b similarity matrix is computed; y(x_i) is the predicted similarity of the two similar semantic texts in the i-th sequence of the second sequence set, and the corresponding reference value is the original similarity of the two similar semantic texts in the first sequence set.
S4, combining the first loss value and the second loss value into the output loss value of the pre-training, and judging whether the output loss value meets a preset condition;
In the embodiment of the present invention, the first loss value and the second loss value are computed on different objects and carry different weights; therefore, the two loss values need to be weighted before being combined.
In detail, referring to fig. 4, the S4 includes:
s41, performing weighting operation on the first loss value by using a preset first loss weight to obtain a weighted first loss value;
s42, performing weighting operation on the second loss value by using a preset second loss weight to obtain a weighted second loss value;
and S43, adding the weighted first loss value and the weighted second loss value to obtain the pre-trained output loss value.
In an embodiment of the present invention, the output loss value is calculated as:
LOSS = α·Loss1 + β·Loss2
where LOSS is the output loss value, Loss1 is the first loss value, Loss2 is the second loss value, and α and β are the preset first loss weight and the preset second loss weight, respectively, which can be adjusted according to actual conditions.
S5, if the output loss value does not meet the preset condition, adjusting parameters of the semantic recognition model, and returning to the step of pre-training the pre-constructed semantic recognition model on masked text prediction and text similarity calculation;
In this embodiment of the present invention, the preset condition may be a specified loss threshold: when the output loss value of the pre-training is greater than or equal to the loss threshold, the semantic recognition capability of the semantic recognition model still needs to be improved, so the parameters are adjusted and a new round of pre-training is performed.
S6, if the output loss value meets the preset condition, ending the pre-training to obtain a pre-trained semantic recognition model;
In the embodiment of the present invention, the pre-training is ended when the output loss value is smaller than the loss threshold.
S7, performing labeling operation on the text to be labeled by using the pre-trained semantic recognition model to obtain the label of the text to be labeled.
In the embodiment of the invention, the pre-trained semantic recognition model has the capability of recognizing and generating similar semantic texts, and the pre-trained semantic recognition model is utilized to perform semantic recognition on the text to be labeled so as to extract the text features of the text to be labeled, thereby realizing the mapping between the text features and the text labels.
The text to be labeled may be news text or medical text, such as the text of a doctor's prescription.
In detail, the performing, by using the pre-trained semantic recognition model, a labeling operation on a text to be labeled includes: performing text feature extraction on the text to be labeled by using the pre-trained semantic recognition model to obtain a text feature set; and calculating probability values between the text feature set and a plurality of preset text labels by using a pre-trained activation function, and selecting the labels corresponding to the probability values larger than a preset probability threshold value as the labels of the text to be labeled.
In the embodiment of the present invention, the preset tag is a classification tag for text information, for example, for multimedia news, the preset tag may be a tag for sports, automobiles, make-up, financing, education, medical treatment, and the like.
In the embodiment of the invention, the preset probability threshold value can be adjusted according to actual conditions.
In detail, the activation function includes, but is not limited to, the softmax activation function, the sigmoid activation function, and the ReLU activation function.
In one embodiment of the present invention, the relative probability value may be calculated using the activation function as follows:
p(a|x) = exp(w_a^T x) / Σ_{a'=1}^{A} exp(w_{a'}^T x)
where p(a|x) is the relative probability between the text feature x and the preset text label a, w_a is the weight vector of text label a, T is the transposition operator, exp is the exponential function, and A is the number of preset text labels.
In another embodiment of the present invention, the preset activation function may be replaced by a decision tree algorithm or a K-means clustering algorithm: the text feature set is classified or clustered by the decision tree algorithm or the clustering algorithm to realize the mapping between text features and text labels.
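For the K-means alternative mentioned above, a short scikit-learn sketch is given here; mapping cluster IDs to human-readable labels would still require a manual or heuristic step, which is an assumption not detailed in the patent.

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical text feature set: 100 texts, 64-dimensional features
    text_features = np.random.randn(100, 64)

    kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(text_features)
    cluster_ids = kmeans.labels_          # one cluster id per text

    # Each cluster id is then mapped to a preset label, e.g. {0: "sports", 1: "finance", ...}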
According to the method, a pre-constructed semantic recognition model is pre-trained on masked text prediction and text similarity calculation with a sequence set that is composed of similar semantic texts and contains masked texts, so that the pre-trained semantic recognition model can recognize and generate similar semantic texts; the pre-trained model is then used to label the text to be labeled. This resolves inconsistent labels among similar semantic texts and improves the accuracy of text labeling.
Fig. 5 is a functional block diagram of a text labeling apparatus according to an embodiment of the present invention.
The text labeling apparatus 100 of the present invention can be installed in an electronic device. According to the implemented functions, the text labeling apparatus 100 may include a pre-training corpus constructing module 101, a pre-training predicting module 102, a pre-training loss calculating module 103, a pre-training end determining module 104, and a pre-training model applying module 105. The module of the present invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device and that can perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the pre-training corpus constructing module 101 is configured to construct a first sequence set composed of similar semantic texts, and perform a random masking operation on the texts in the first sequence set to obtain a second sequence set including masked texts;
the pre-training prediction module 102 is configured to perform pre-training of masking text prediction and text similarity calculation on the pre-constructed semantic recognition model by using the second sequence set, so as to obtain a prediction sequence including a predicted masking text and prediction similarity of similar semantic texts in the prediction sequence;
the pre-training loss calculation module 103 is configured to calculate a first loss value between the predicted masked text and the original text corresponding to the first sequence set by using a preset first loss function, and calculate a second loss value between the predicted similarity and the original similarity of the similar semantic text in the first sequence set by using a preset second loss function;
the pre-training end determining module 104 is configured to combine the first loss value and the second loss value into the pre-training output loss value, and determine whether the output loss value meets a preset condition; if the output loss value does not meet the preset condition, adjusting parameters of the semantic recognition model, and returning to the pre-training step of performing occlusion text prediction and text similarity calculation on the pre-constructed semantic recognition model; if the output loss value meets the preset condition, quitting the pre-training to obtain a semantic recognition model completing the pre-training;
the pre-training model application module 105 is configured to perform a labeling operation on a text to be labeled by using the semantic recognition model subjected to pre-training, so as to obtain a label of the text to be labeled.
Fig. 6 is a schematic structural diagram of an electronic device for implementing a text labeling method according to an embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as a text tagging program, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as codes of a text labeling program, but also to temporarily store data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (e.g., text tagging programs, etc.) stored in the memory 11 and calling data stored in the memory 11.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 6 only shows an electronic device with components, and it will be understood by a person skilled in the art that the structure shown in fig. 6 does not constitute a limitation of the electronic device 1, and may comprise fewer or more components than shown, or a combination of certain components, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may further include a network interface, and optionally, the network interface may include a wired interface and/or a wireless interface (such as a WI-FI interface, a bluetooth interface, etc.), which are generally used for establishing a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface, which may be a Display (Display), an input unit (such as a Keyboard), and optionally a standard wired interface, a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or display unit, is suitable for displaying information processed in the electronic device 1 and for displaying a visualized user interface, among other things.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The text-tagging program stored in the memory 11 of the electronic device 1 is a combination of instructions that, when executed in the processor 10, enable:
constructing a first sequence set composed of similar semantic texts, and performing random masking operation on the texts in the first sequence set to obtain a second sequence set comprising masked texts;
pre-training shielding text prediction and text similarity calculation on the pre-constructed semantic recognition model by utilizing the second sequence set to obtain a prediction sequence containing predicted shielding texts and prediction similarity of similar semantic texts in the prediction sequence;
calculating a first loss value between the predicted shielding text and the corresponding original text in the first sequence set by using a preset first loss function, and calculating a second loss value between the predicted similarity and the original similarity of the similar semantic text in the first sequence set by using a preset second loss function;
combining the first loss value and the second loss value into the pre-trained output loss value, and judging whether the output loss value meets a preset condition;
if the output loss value does not meet the preset condition, adjusting parameters of the semantic recognition model, and returning to the pre-training step of performing occlusion text prediction and text similarity calculation on the pre-constructed semantic recognition model;
if the output loss value meets the preset condition, quitting the pre-training to obtain a semantic recognition model completing the pre-training;
and performing labeling operation on the text to be labeled by using the pre-trained semantic recognition model to obtain the label of the text to be labeled.
Specifically, the specific implementation method of the processor 10 for the instruction may refer to the description of the relevant steps in the embodiment corresponding to fig. 1, which is not described herein again.
Further, the integrated modules/units of the electronic device 1, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. The computer readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying said computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
constructing a first sequence set composed of similar semantic texts, and performing random masking operation on the texts in the first sequence set to obtain a second sequence set comprising masked texts;
pre-training shielding text prediction and text similarity calculation on the pre-constructed semantic recognition model by utilizing the second sequence set to obtain a prediction sequence containing predicted shielding texts and prediction similarity of similar semantic texts in the prediction sequence;
calculating a first loss value between the predicted shielding text and the corresponding original text in the first sequence set by using a preset first loss function, and calculating a second loss value between the predicted similarity and the original similarity of the similar semantic text in the first sequence set by using a preset second loss function;
combining the first loss value and the second loss value into the pre-trained output loss value, and judging whether the output loss value meets a preset condition;
if the output loss value does not meet the preset condition, adjusting parameters of the semantic recognition model, and returning to the pre-training step of performing occlusion text prediction and text similarity calculation on the pre-constructed semantic recognition model;
if the output loss value meets the preset condition, quitting the pre-training to obtain a semantic recognition model completing the pre-training;
and performing labeling operation on the text to be labeled by using the pre-trained semantic recognition model to obtain the label of the text to be labeled.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus, device and method can be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The block chain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism, an encryption algorithm and the like. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
The embodiment of the application can acquire and process related data based on an artificial intelligence technology. Among them, Artificial Intelligence (AI) is a theory, method, technique and application system that simulates, extends and expands human Intelligence using a digital computer or a machine controlled by a digital computer, senses the environment, acquires knowledge and uses the knowledge to obtain the best result.
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms second, etc. are used to denote names, but not any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A text labeling method is characterized by comprising the following steps:
constructing a first sequence set composed of similar semantic texts, and performing random masking operation on the texts in the first sequence set to obtain a second sequence set comprising masked texts;
pre-training shielding text prediction and text similarity calculation on the pre-constructed semantic recognition model by utilizing the second sequence set to obtain a prediction sequence containing predicted shielding texts and prediction similarity of similar semantic texts in the prediction sequence;
calculating a first loss value between the predicted shielding text and the corresponding original text in the first sequence set by using a preset first loss function, and calculating a second loss value between the predicted similarity and the original similarity of the similar semantic text in the first sequence set by using a preset second loss function;
combining the first loss value and the second loss value into the pre-trained output loss value, and judging whether the output loss value meets a preset condition;
if the output loss value does not meet the preset condition, adjusting parameters of the semantic recognition model, and returning to the pre-training step of performing occlusion text prediction and text similarity calculation on the pre-constructed semantic recognition model;
if the output loss value meets the preset condition, quitting the pre-training to obtain a semantic recognition model completing the pre-training;
and performing labeling operation on the text to be labeled by using the pre-trained semantic recognition model to obtain the label of the text to be labeled.
2. The method of text labeling of claim 1, wherein said constructing a first set of sequences comprised of similar semantic text comprises:
acquiring a similar semantic text set from a preset similar semantic text library;
and combining every two similar semantic texts in the similar semantic text sets into a sequence according to a preset combination mode until all texts in the similar semantic text sets are combined to obtain a first sequence set consisting of the similar semantic texts.
3. The method of claim 1, wherein said performing a random masking operation on text in said first set of sequences comprises:
randomly selecting a text with a preset shielding text proportion in the first sequence set to obtain a text to be shielded;
covering the text with a preset coverage ratio in the text to be shielded by using a preset mask to obtain a mask covered text;
replacing the text with a preset replacement ratio in the text to be shielded by using the randomly generated text to obtain a replacement text;
and summarizing the mask covering text and the replacing text to obtain a mask text.
4. The method for labeling texts according to claim 1, wherein the pre-training of masking text prediction and text similarity calculation for the pre-constructed semantic recognition model comprises:
performing text feature extraction on the second sequence set by using a pre-constructed semantic recognition model to obtain a text feature set;
performing activation calculation on the text feature set by using a preset activation function to obtain a predicted shielding text;
combining the predicted masked text with the unmasked text in the second sequence set into a prediction sequence;
and performing word vector conversion on each character in the prediction sequence, and calculating the prediction similarity of similar semantic texts in the prediction sequence according to the word vector corresponding to each character.
5. The method of claim 4, wherein said calculating the predicted similarity of similar semantic texts in said predicted sequence comprises:
respectively calculating a word vector mean value corresponding to each similar semantic text in the prediction sequence;
and calculating the absolute difference value of the mean value of the word vectors among all similar semantic texts, and taking the absolute difference value as the prediction similarity of the prediction sequence.
6. The method of text labeling of claim 1, wherein said combining the first loss value and the second loss value into the pre-trained loss value comprises:
performing weighting operation on the first loss value by using a preset first loss weight to obtain a weighted first loss value;
performing weighting operation on the second loss value by using a preset second loss weight to obtain a weighted second loss value;
and adding the weighted first loss value and the weighted second loss value to obtain the pre-trained output loss value.
7. The method of claim 1, wherein the labeling of the text to be labeled is performed using the pre-trained semantic recognition model, the method further comprising:
performing text feature extraction on the text to be labeled by using the pre-trained semantic recognition model to obtain a text feature set;
and calculating probability values between the text feature set and a plurality of preset text labels by using a pre-trained activation function, and selecting the labels corresponding to the probability values larger than a preset probability threshold value as the labels of the text to be labeled.
8. A text labeling apparatus, comprising:
a pre-training corpus construction module, configured to construct a first sequence set consisting of similar semantic texts and perform a random masking operation on the texts in the first sequence set to obtain a second sequence set comprising masked texts;
a pre-training prediction module, configured to perform pre-training of masked text prediction and text similarity calculation on a pre-constructed semantic recognition model by using the second sequence set, so as to obtain a prediction sequence containing the predicted masked texts and the prediction similarity of similar semantic texts in the prediction sequence;
a pre-training loss calculation module, configured to calculate a first loss value between the predicted masked texts and the corresponding original texts in the first sequence set by using a preset first loss function, and calculate a second loss value between the prediction similarity and the original similarity of the similar semantic texts in the first sequence set by using a preset second loss function;
a pre-training end judgment module, configured to combine the first loss value and the second loss value into a pre-training output loss value and judge whether the output loss value meets a preset condition; if the output loss value does not meet the preset condition, adjust the parameters of the semantic recognition model and return to the pre-training step of masked text prediction and text similarity calculation on the pre-constructed semantic recognition model; if the output loss value meets the preset condition, exit the pre-training to obtain a semantic recognition model that has completed pre-training;
and a pre-training model application module, configured to perform the labeling operation on the text to be labeled by using the semantic recognition model that has completed pre-training, so as to obtain the label of the text to be labeled.
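Taken together, these modules describe a pre-training loop that keeps adjusting the model parameters until the combined output loss meets the preset condition. The sketch below assumes a simple loss threshold and a PyTorch-style optimizer; the loss-computation callables, weights, and round limit are placeholders, not details given in the claims.

```python
def pretrain(model, optimizer, data_loader, first_loss_fn, second_loss_fn,
             first_weight=0.7, second_weight=0.3, loss_threshold=0.05, max_rounds=100):
    """Sketch of the pre-training end judgment: while the output loss fails the
    preset condition, back-propagate and adjust parameters; exit once it passes."""
    for _ in range(max_rounds):
        for batch in data_loader:
            output_loss = (first_weight * first_loss_fn(model, batch)
                           + second_weight * second_loss_fn(model, batch))
            if output_loss.item() < loss_threshold:   # preset condition satisfied
                return model                          # pre-training complete
            optimizer.zero_grad()
            output_loss.backward()                    # adjust model parameters
            optimizer.step()
    return model
```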
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the text labeling method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the text labeling method according to any one of claims 1 to 7.
CN202111098192.0A 2021-09-18 2021-09-18 Text labeling method, text labeling device, electronic equipment and storage medium Active CN113806540B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111098192.0A CN113806540B (en) 2021-09-18 2021-09-18 Text labeling method, text labeling device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111098192.0A CN113806540B (en) 2021-09-18 2021-09-18 Text labeling method, text labeling device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113806540A true CN113806540A (en) 2021-12-17
CN113806540B CN113806540B (en) 2023-08-08

Family

ID=78896083

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111098192.0A Active CN113806540B (en) 2021-09-18 2021-09-18 Text labeling method, text labeling device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113806540B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116340552A (en) * 2023-01-06 2023-06-27 北京达佳互联信息技术有限公司 Label ordering method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111128137A (en) * 2019-12-30 2020-05-08 广州市百果园信息技术有限公司 Acoustic model training method and device, computer equipment and storage medium
CN112446207A (en) * 2020-12-01 2021-03-05 平安科技(深圳)有限公司 Title generation method and device, electronic equipment and storage medium
CN113157927A (en) * 2021-05-27 2021-07-23 中国平安人寿保险股份有限公司 Text classification method and device, electronic equipment and readable storage medium
CN113378970A (en) * 2021-06-28 2021-09-10 平安普惠企业管理有限公司 Sentence similarity detection method and device, electronic equipment and storage medium


Also Published As

Publication number Publication date
CN113806540B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
CN112988963B (en) User intention prediction method, device, equipment and medium based on multi-flow nodes
CN113626606B (en) Information classification method, device, electronic equipment and readable storage medium
CN113704429A (en) Semi-supervised learning-based intention identification method, device, equipment and medium
CN113157927A (en) Text classification method and device, electronic equipment and readable storage medium
CN114840684A (en) Map construction method, device and equipment based on medical entity and storage medium
CN114880449A (en) Reply generation method and device of intelligent question answering, electronic equipment and storage medium
CN113656690A (en) Product recommendation method and device, electronic equipment and readable storage medium
CN113806540B (en) Text labeling method, text labeling device, electronic equipment and storage medium
CN116401606A (en) Fraud identification method, device, equipment and medium
CN115510188A (en) Text keyword association method, device, equipment and storage medium
CN114943306A (en) Intention classification method, device, equipment and storage medium
CN114595321A (en) Question marking method and device, electronic equipment and storage medium
CN115221323A (en) Cold start processing method, device, equipment and medium based on intention recognition model
CN114219367A (en) User scoring method, device, equipment and storage medium
CN114780688A (en) Text quality inspection method, device and equipment based on rule matching and storage medium
CN114385815A (en) News screening method, device, equipment and storage medium based on business requirements
CN114548114A (en) Text emotion recognition method, device, equipment and storage medium
CN113626605A (en) Information classification method and device, electronic equipment and readable storage medium
CN112712797A (en) Voice recognition method and device, electronic equipment and readable storage medium
CN114462411B (en) Named entity recognition method, device, equipment and storage medium
CN114723523B (en) Product recommendation method, device, equipment and medium based on user capability image
CN111680513B (en) Feature information identification method and device and computer readable storage medium
CN115221875B (en) Word weight generation method, device, electronic equipment and storage medium
CN117195898A (en) Entity relation extraction method and device, electronic equipment and storage medium
CN114970501A (en) Text-based entity relationship extraction method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant