CN109918663A - A kind of semantic matching method, device and storage medium - Google Patents


Info

Publication number: CN109918663A (granted as CN109918663B)
Authority: CN (China)
Application number: CN201910160971.5A
Other languages: Chinese (zh)
Other versions: CN109918663B
Inventor: 鲁亚楠
Assignee (current and original): Tencent Technology Shenzhen Co Ltd
Prior art keywords: word, semantic matching, feature, text, semantic
Legal status: Granted; Active
Events: application filed by Tencent Technology Shenzhen Co Ltd; priority to CN201910160971.5A; publication of CN109918663A; application granted; publication of CN109918663B


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

The embodiments of the invention disclose a semantic matching method, apparatus and storage medium, applied to the field of information processing technology. The semantic matching apparatus determines the word pair features of a text to be matched and a target text from the first word features of the text to be matched and the second word features of the target text. It then converts the word pair features, the first word features and the second word features into a semantic matching vector whose format is the same as that of the input data of a preset semantic classification model, and determines the similarity between the text to be matched and the target text from the semantic matching vector and the semantic classification model. When the semantic matching vector is determined in this process, the word pair features are obtained directly from the words of the two texts, which saves the time of multiplying feature vectors against each other, simplifies the matching procedure, simplifies the structure that implements semantic matching, and correspondingly simplifies the training of that structure.

Description

Semantic matching method, device and storage medium
Technical Field
The present invention relates to the field of information processing technologies, and in particular, to a semantic matching method, apparatus, and storage medium.
Background
Semantic matching technology can be applied to user interaction systems such as search engines and question-answering systems. A user inputs text or speech through the system; the system performs semantic matching on the input to determine the user's intention, and returns feedback to the user according to the matching result.
One existing semantic matching method is based mainly on keyword matching, assisted by word-level information such as word weights. It is fast and highly interpretable, but it cannot handle out-of-vocabulary words, and its accuracy is low.
Another semantic matching method is based mainly on a learned semantic matching model, which computes the semantic similarity of non-keyword hits well. However, such a model must multiply the word feature vectors of the texts against each other, so its structure is complex; it must be trained on a large amount of training data over a long training time, and both training and prediction are slow.
Disclosure of Invention
The embodiments of the invention provide a semantic matching method, apparatus and storage medium, which determine the similarity between a text to be matched and a target text according to the word pair features of the two texts and a preset semantic classification model.
A first aspect of an embodiment of the present invention provides a semantic matching method, including:
acquiring first word features of a text to be matched and second word features of a target text;
determining word pair characteristics of the text to be matched and the target text according to the first word characteristics and the second word characteristics;
determining a semantic matching vector according to the word pair characteristics, the first word characteristics and the second word characteristics, wherein the format of the semantic matching vector is the same as that of input data in a preset semantic classification model;
and determining the similarity between the text to be matched and the target text according to the semantic matching vector and the semantic classification model.
A second aspect of the embodiments of the present invention provides a semantic matching apparatus, including:
the characteristic acquisition unit is used for acquiring first word characteristics of a text to be matched and second word characteristics of a target text;
the co-occurrence determining unit is used for determining the word pair characteristics of the text to be matched and the target text according to the first word characteristics and the second word characteristics;
the vector determining unit is used for determining a semantic matching vector according to the word pair characteristics, the first word characteristics and the second word characteristics, and the format of the semantic matching vector is the same as that of input data in a preset semantic classification model;
and the similarity unit is used for determining the similarity between the text to be matched and the target text according to the semantic matching vector and the semantic classification model.
A third aspect of the embodiments of the present invention provides a storage medium, where the storage medium stores a plurality of instructions, and the instructions are adapted to be loaded by a processor and execute the semantic matching method according to the first aspect of the embodiments of the present invention.
A fourth aspect of the embodiments of the present invention provides a server, including a processor and a storage medium, where the processor is configured to execute instructions;
the storage medium is configured to store a plurality of instructions, which are to be loaded by the processor to execute the semantic matching method according to the first aspect of the embodiments of the present invention.
As can be seen, in the method of this embodiment, the semantic matching apparatus determines the word pair features of the text to be matched and the target text from the first word features of the text to be matched and the second word features of the target text. It then converts the word pair features, the first word features and the second word features into a semantic matching vector whose format is the same as that of the input data of a preset semantic classification model, and determines the similarity between the two texts from the semantic matching vector and the semantic classification model. When the semantic matching vector is determined in this process, the word pair features are obtained directly, without the multiplications between the word feature vectors of the two texts that would otherwise be needed. This saves the time of those multiplications, simplifies the matching procedure, simplifies the structure that implements semantic matching, and correspondingly simplifies the training of that structure.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a diagram illustrating a semantic matching method according to an embodiment of the present invention;
FIG. 2 is a flow chart of a semantic matching method provided by an embodiment of the invention;
FIG. 3 is a flow diagram of a method for training a semantic matching model according to one embodiment of the invention;
FIG. 4 is a diagram illustrating a semantic matching method according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating determining similarity between a text to be matched and a target text according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a semantic matching apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of another semantic matching apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
An embodiment of the invention provides a semantic matching method. Referring to FIG. 1, the semantic matching apparatus mainly performs semantic matching as follows:
acquiring first word features of a text to be matched and second word features of a target text; determining word pair characteristics of the text to be matched and the target text according to the first word characteristics and the second word characteristics; determining a semantic matching vector according to the word pair characteristics, the first word characteristics and the second word characteristics, wherein the format of the semantic matching vector is the same as that of input data in a preset semantic classification model; and determining the similarity between the text to be matched and the target text according to the semantic matching vector and the semantic classification model.
The method of this embodiment can be applied to user interaction systems such as search engines and question-answering systems; in that case the semantic matching apparatus is specifically the user interaction system.
When the semantic matching vector is determined in this process, the word pair features of the text to be matched and the target text are obtained directly, without the multiplications between the word feature vectors of the two texts that would otherwise be needed. This saves the time of those multiplications, simplifies the matching procedure, simplifies the structure that implements semantic matching, and correspondingly simplifies the training of that structure.
An embodiment of the present invention provides a semantic matching method, which is a method executed by a semantic matching apparatus, and a flowchart is shown in fig. 2, where the method includes:
step 101, obtaining a first word feature of a text to be matched and a second word feature of a target text.
It is understood that the user may operate the semantic matching apparatus so that it displays a user input interface, through which the user may input speech or text to initiate the semantic matching process.
If the user inputs speech through the user input interface, the semantic matching apparatus can recognize the speech as text and take the recognized text as the text to be matched; if the user inputs text, that text is taken directly as the text to be matched. The apparatus then performs steps 101 to 104 for each of a plurality of locally preset target texts.
Specifically, to obtain the first word features of the text to be matched, word segmentation may be performed on the text, with each segment taken as one feature; the combination of the segments of the text constitutes the first word features. The second word features of the target text are obtained in the same way; they can be computed in advance and stored in the semantic matching apparatus, then extracted locally when the process of this embodiment is initiated. A segment may be one word or a combination of words, as long as it expresses a single meaning; a person's name, for example, expresses one meaning and is therefore one segment.
For example, if the text to be matched is "open WeChat friend circle", the resulting first word features may include "open", "WeChat", and "friend circle".
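A minimal sketch of this feature-extraction step. It assumes the text has already been run through a word segmenter (real Chinese text would need a segmenter such as jieba); the function name and the example segments are illustrative, not taken from the patent.

```python
def extract_word_features(segments):
    # Each non-empty segment becomes one word feature, order preserved.
    return [s.strip() for s in segments if s.strip()]

# Illustrative: the segmented text "open WeChat friend circle"
first_word_features = extract_word_features(["open", "WeChat", "friend circle"])
```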
Step 102, determining word pair characteristics of the text to be matched and the target text according to the first word characteristics and the second word characteristics, wherein the word pair characteristics refer to characteristics formed by combining the word characteristics in the first word characteristics and the second word characteristics.
Specifically, if the first word feature obtained in step 101 includes n word features, the second word feature includes m word features, where n and m are both natural numbers greater than or equal to 1. When determining the word pair feature, the semantic matching device may obtain the first word pair feature and/or the second word pair feature, specifically:
combining each word feature in the n word features with the next word feature to form a first new word feature, and combining each word feature in the m word features with the next word feature to form a second new word feature; and then the first new word characteristic and the second new word characteristic are used for forming a first word pair characteristic.
Combining each word feature in the n word features with each word feature in the m word features to form a third new word feature; and forming a second word pair characteristic by using the third new word characteristic.
For example, suppose the first word features are q1, q2, q3, ..., qn and the second word features are t1, t2, t3, ..., tm. The first word pair features are then q1q2, q2q3, ..., qn-1qn, t1t2, t2t3, ..., tm-1tm, and the second word pair features are q1t1, q1t2, ..., qitj, ..., qntm, where qi is the i-th of the first word features and tj is the j-th of the second word features.
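The two kinds of word pair features above can be sketched as follows. This is an illustration, not the patented implementation: "combining" two word features is read here as simple string concatenation.

```python
from itertools import product

def first_word_pair_features(q, t):
    # Adjacent pairs within each text: q1q2, ..., qn-1qn plus t1t2, ..., tm-1tm.
    return [a + b for a, b in zip(q, q[1:])] + [a + b for a, b in zip(t, t[1:])]

def second_word_pair_features(q, t):
    # Cross-text pairs qitj for every i and j.
    return [a + b for a, b in product(q, t)]
```

For n first word features and m second word features these yield (n-1)+(m-1) and n*m pairs respectively.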
Step 103, determining a semantic matching vector according to the word pair characteristics, the first word characteristics and the second word characteristics, wherein the format of the semantic matching vector needs to be the same as the format of input data in a preset semantic classification model.
Specifically, the semantic matching apparatus determines the semantic matching vector according to a preset strategy, namely a strategy for deriving the vector from the word pair features, the first word features and the second word features. That the format of the resulting vector is the same as that of the input data of the semantic classification model chiefly means that its dimensionality meets the model's input dimensionality requirement.
Specifically, when determining the semantic matching vector, the semantic matching apparatus may first normalize the word pair features and the fourth new word features separately into normalized vectors, the fourth new word features being composed of the first word features and the second word features; it then computes the semantic matching vector from the normalized vectors according to a preset mathematical formula.
Normalization here means converting the word pair features and the fourth new word features into a range over which the semantic matching model can compute. Specifically, the word pair features and the fourth new word features may each be converted into dimension-reduced (embedding) space vectors, used as the normalized vectors; or each may be encoded, with the resulting code vectors used as the normalized vectors.
When computing the semantic matching vector according to the preset mathematical formula, the semantic matching apparatus may take as the semantic matching vector the average of the normalized vector of the word pair features and the normalized vector of the fourth new word features.
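This embed-and-average step can be sketched as below, with a plain dictionary standing in for a learned embedding layer; the function name, the dictionary lookup and the zero fallback for unknown features are illustrative assumptions.

```python
import numpy as np

def semantic_matching_vector(fourth_new_feats, pair_feats, embed, dim):
    # Normalize each feature set by embedding and averaging it, then
    # average the two resulting vectors into the semantic matching vector.
    def embed_avg(feats):
        vecs = [embed.get(f, np.zeros(dim)) for f in feats]
        return np.mean(vecs, axis=0) if vecs else np.zeros(dim)
    return (embed_avg(fourth_new_feats) + embed_avg(pair_feats)) / 2.0
```

The output has a fixed dimensionality `dim` regardless of text length, which is what lets it match the classifier's input format.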
Step 104, determining the similarity between the text to be matched and the target text according to the semantic matching vector and a preset semantic classification model.
Here, the preset semantic classification model is a machine learning model whose operation logic is preset in the semantic matching apparatus and can be obtained by training. The model determines, from the semantic matching vector, the similarity between the text to be matched and the target text, and thereby whether the two match.
Further, if the determined similarity is greater than a preset value, the text to be matched matches the target text, and the semantic matching apparatus can give feedback to the user according to the intention of the target text; if the similarity is less than or equal to the preset value, the two texts do not match, and steps 101 to 104 are executed for the text to be matched and another target text, until a target text matching the text to be matched is found.
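The loop over preset target texts can be sketched as follows. The word-overlap measure is a toy stand-in for the full steps 101 to 104 pipeline, not the patented classifier; names and the threshold are illustrative.

```python
def find_matching_target(text_feats, targets_feats, similarity, threshold):
    # Try each preset target in turn; return the index of the first one
    # whose similarity with the text exceeds the threshold, else None.
    for i, target_feats in enumerate(targets_feats):
        if similarity(text_feats, target_feats) > threshold:
            return i
    return None

def overlap_similarity(a, b):
    # Toy stand-in for the trained model: fraction of shared word features.
    return len(set(a) & set(b)) / max(len(set(a) | set(b)), 1)
```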
As can be seen, in the method of this embodiment, the semantic matching apparatus determines the word pair features of the text to be matched and the target text from the first word features of the text to be matched and the second word features of the target text. It then converts the word pair features, the first word features and the second word features into a semantic matching vector whose format is the same as that of the input data of a preset semantic classification model, and determines the similarity between the two texts from the semantic matching vector and the semantic classification model. When the semantic matching vector is determined in this process, the word pair features are obtained directly, without the multiplications between the word feature vectors of the two texts that would otherwise be needed. This saves the time of those multiplications, simplifies the matching procedure, simplifies the structure that implements semantic matching, and correspondingly simplifies the training of that structure.
It should be noted that the semantic matching method of the present embodiment may be implemented by a semantic matching model, and the semantic matching model is used for executing the above steps 101 to 104. Thus, in a specific embodiment, the semantic matching device may train the semantic matching model according to the following steps, and the flowchart is shown in fig. 3 and includes:
step 201, an initial semantic matching model is determined, the initial semantic matching model is used for determining semantic matching vectors of any two texts, and similarity between any two texts is calculated according to the determined semantic matching vectors.
It can be understood that, when determining the initial semantic matching model, the semantic matching apparatus may determine the multilayer structure of the model and the initial values of the fixed parameters in each layer. The model may specifically include a feature module and a semantic classification model: the feature module receives any two texts, determines their semantic matching vector according to steps 101 to 103 above, and passes it to the semantic classification model; the semantic classification model determines the similarity between the two texts from the semantic matching vector. The multilayer structure in the initial semantic matching model may be any algorithm structure such as a Convolutional Neural Network (CNN) or K Nearest Neighbors (KNN).
The fixed parameters are the parameters used in the computation of each layer of the initial semantic matching model whose values are not assigned per input, such as weight values.
Step 202, determining training samples, which include multiple groups of positive samples and multiple groups of negative samples. Each group of positive samples comprises two semantically matched first training texts and first annotation information indicating that they match; each group of negative samples comprises two semantically unmatched second training texts and second annotation information indicating that they do not match.
Step 203, determining, through the initial semantic matching model, the similarity between the two training texts in each group of positive and negative samples.
Specifically, when the similarity between the two training texts of any group of training samples is determined through the initial semantic matching model, the feature module determines the semantic matching vector of the two texts from their respective word features and word pair features, similarly to how the semantic matching vector of the text to be matched and the target text is obtained in steps 101 to 103, which is not repeated here. The semantic classification model then determines the similarity between the two training texts from their semantic matching vector.
Step 204, adjusting the fixed parameter values in the initial semantic matching model according to the similarities determined in step 203 and the first and second annotation information, to obtain the final semantic matching model.
Specifically, the semantic matching apparatus computes a loss function of the initial semantic matching model from the similarities determined in step 203 and the first and second annotation information; the loss function represents the error of the similarities the model computes for the training texts in each group of training samples.
Here, the loss function represents: the difference between the similarity between the first training texts in each group of positive samples as determined by the initial semantic matching model and their actual similarity (given by the first annotation information); and the difference between the similarity between the second training texts in each group of negative samples as determined by the model and their actual similarity (given by the second annotation information).
These errors are usually expressed with a cross-entropy loss function. Training the semantic matching model means reducing these errors as far as possible: through mathematical optimization such as back-propagation of derivatives and gradient descent, the values of the fixed parameters determined in step 201 are continually adjusted so as to minimize the computed value of the loss function.
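For similarities in [0, 1] against binary match labels, the cross-entropy loss mentioned here can be written as below; this is the standard binary formulation, not a formula copied from the patent.

```python
import math

def cross_entropy_loss(similarities, labels, eps=1e-12):
    # Binary cross-entropy: label 1 for positive samples, 0 for negative.
    total = 0.0
    for p, y in zip(similarities, labels):
        p = min(max(p, eps), 1.0 - eps)  # clamp away from log(0)
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total / len(similarities)
```

Training then adjusts the fixed parameters (by back-propagation and gradient descent) so that this value decreases.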
Therefore, after the loss function has been computed, the semantic matching apparatus adjusts the fixed parameter values in the initial semantic matching model according to it, to obtain the final semantic matching model. Specifically, if the computed value of the loss function is large, for example larger than a preset value, a fixed parameter value needs to be changed, for example by reducing some weight value, so that the loss computed with the adjusted parameter values decreases.
It should be noted that in steps 203 to 204 above, after the similarities between the training texts of each group of positive and negative samples have been computed with the initial semantic matching model, the fixed parameter values are adjusted once according to the computed similarities. In practice, steps 203 to 204 must be executed in a loop until the adjustment of the fixed parameter values meets a stop condition.
Therefore, after executing steps 201 to 204 of the above embodiment, the semantic matching apparatus must also judge whether the current adjustment of the fixed parameter values meets a preset stop condition. If so, the process ends; if not, it returns to steps 203 to 204 with the model whose fixed parameter values have been adjusted.
The preset stop condition includes, but is not limited to, either of the following: the difference between the currently adjusted fixed parameter values and the previously adjusted values is smaller than a threshold, i.e. the adjusted values have converged; or the number of adjustments has reached a preset count.
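The two stop conditions can be sketched as one check; the tolerance and maximum adjustment count below are illustrative defaults, not values from the patent.

```python
def should_stop(prev_params, curr_params, n_adjustments,
                tol=1e-4, max_adjustments=1000):
    # Stop when every fixed parameter changed by less than tol since the
    # previous adjustment (convergence), or the preset count is reached.
    converged = all(abs(c - p) < tol
                    for c, p in zip(curr_params, prev_params))
    return converged or n_adjustments >= max_adjustments
```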
The following application example illustrates the semantic matching method of the invention. Here the method is applied to a search engine system, and the semantic matching apparatus is specifically the search engine backend. Referring to the schematic diagram in FIG. 4, the steps are:
step 301, a user inputs a text to be matched at a terminal of a search engine by operating the terminal of the search engine, and initiates a search request for searching the text to be matched to a background of the search engine.
Step 302, the search engine backend receives the search request sent by the terminal and obtains the first word features of the text to be matched in the request, specifically: q1, q2, q3, ..., qn.
Step 303, the search engine backend selects one target text from a plurality of preset target texts and obtains its second word features, specifically: t1, t2, t3, ..., tm.
Step 304, the search engine backend determines the first word pair feature WC from the first and second word features, specifically: q1q2, q2q3, ..., qn-1qn, t1t2, t2t3, ..., tm-1tm; and determines the second word pair feature Pair, specifically: q1t1, q1t2, ..., qitj, ..., qntm, where qi is the i-th of the first word features and tj is the j-th of the second word features.
Step 305, the search engine backend determines the semantic matching vector from the first word pair feature WC, the second word pair feature Pair, the first word features and the second word features.
Specifically, the search engine backend may combine the first and second word features into a new word feature W, specifically: q1, q2, q3, ..., qn, t1, t2, t3, ..., tm.
Then, the search engine background maps the new word feature W, the first word pair feature WC and the second word pair feature Pair into dimension-reduction space vectors respectively, denoted here as vW, vWC and vPair. The vector vW is the semantic vector used for representing the text to be matched and the target text; the vector vWC is the semantic vector used for representing the context of the text to be matched and the target text; and the vector vPair is used for representing the semantic matching factors in the text to be matched and the target text.

Finally, the search engine background calculates the average vector of vW, vWC and vPair, which is the semantic matching vector.
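The patent leaves the dimension-reduction mapping unspecified; as one hedged illustration, feature hashing can stand in for the learned projection, after which step 305's averaging is a plain element-wise mean of the three vectors.

```python
import hashlib

DIM = 8  # illustrative dimensionality of the reduction space

def embed(features, dim=DIM):
    """Map a bag of features to one dim-dimensional vector.
    Signed feature hashing is only a stand-in for the projection
    the patent leaves unspecified."""
    vec = [0.0] * dim
    for f in features:
        h = int(hashlib.md5(repr(f).encode()).hexdigest(), 16)
        sign = 1.0 if (h >> 16) % 2 == 0 else -1.0
        vec[h % dim] += sign
    n = max(len(features), 1)
    return [x / n for x in vec]

def semantic_matching_vector(w, wc, pair):
    """Step 305: average the three reduced-space vectors."""
    v_w, v_wc, v_pair = embed(w), embed(wc), embed(pair)
    return [(a + b + c) / 3.0 for a, b, c in zip(v_w, v_wc, v_pair)]
```

The averaging step is what keeps the output in a fixed format matching the classification model's input, regardless of how long the two texts are.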
Step 306, the search engine background calls a preset semantic classification model, and the semantic classification model directly determines the similarity between the text to be matched and the target text according to the semantic matching vector. If the similarity is greater than a preset value, the search engine background may perform certain feedback to the terminal of the search engine according to the target text; if the similarity is less than or equal to the preset value, the search engine background reselects another target text and returns to execute step 303 for the other target text and the text to be matched.
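The select-score-retry loop of steps 303 to 306 can be sketched as follows; score_fn is a hypothetical stand-in for the semantic classification model, and the Jaccard overlap used below is only a toy scorer for illustration.

```python
def match_first(query, targets, score_fn, threshold=0.5):
    """Steps 303-306: try each preset target text in turn; return
    the first whose similarity exceeds the preset value, else None."""
    for target in targets:
        if score_fn(query, target) > threshold:
            return target  # step 306: feed back to the terminal
    return None            # no target text matched well enough

def jaccard(a, b):
    """Toy similarity: word-set overlap, NOT the patent's model."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

hit = match_first(
    ["delete", "WeChat", "video"],
    [["open", "music"], ["delete", "circle of friends", "video"]],
    jaccard,
    threshold=0.4,
)
```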
For example, as shown in fig. 5, the first word feature obtained in step 302 is q1, q2, q3, i.e., "delete", "WeChat" and "video"; the second word feature obtained in step 303 is t1, t2, t3, i.e., "delete", "circle of friends" and "video". After the new word feature W, the first word pair feature WC and the second word pair feature Pair are obtained, these features can be mapped to dimension-reduction space vectors respectively, and the dimension-reduction space vectors are averaged to obtain an average vector, namely the semantic matching vector. Finally, the semantic matching vector is input into the semantic classification model to obtain the similarity between the text to be matched and the target text.
An embodiment of the present invention further provides a semantic matching device, a schematic structural diagram of which is shown in fig. 6, and the semantic matching device specifically includes:
the feature obtaining unit 10 is configured to obtain a first word feature of the text to be matched and a second word feature of the target text.
The word pair determining unit 11 is configured to determine word pair features of the text to be matched and the target text according to the first word feature and the second word feature acquired by the feature acquiring unit 10.
Specifically, if the first word feature includes n word features, the second word feature includes m word features, and n and m are natural numbers greater than or equal to 1, the word pair determining unit 11 is specifically configured to combine each word feature of the n word features with a next word feature to form a first new word feature, and combine each word feature of the m word features with a next word feature to form a second new word feature; and forming a first word pair characteristic by using the first new word characteristic and the second new word characteristic.
Further, the word pair determining unit 11 is further configured to combine each word feature in the n word features with each word feature in the m word features to form a third new word feature; and forming a second word pair characteristic by using the third new word characteristic.
The vector determining unit 12 is configured to determine a semantic matching vector according to the word pair features determined by the word pair determining unit 11, the first word feature and the second word feature, where the format of the semantic matching vector is the same as the format of input data in a preset semantic classification model.
The vector determining unit 12 is specifically configured to perform normalization processing on the word pair features and fourth new word features respectively to form a normalized vector, where the fourth new word features are composed of the first word features and the second word features; and calculating the semantic matching vector according to the normalized vector and a preset mathematical calculation formula.
When the word pair features and the fourth new word features are normalized to form normalized vectors, the vector determination unit 12 may convert the word pair features and the fourth new word features into dimension reduction space vectors; or, respectively encoding the word pair characteristics and the fourth new word characteristics to obtain encoded vectors.
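For the second option above (encoding rather than dimension reduction), a minimal sketch is a vocabulary-indexed count vector; the vocabulary and its ordering are assumptions for illustration, not part of the patent.

```python
def encode(features, vocab):
    """Encode a bag of features as a count vector over a fixed
    vocabulary (an illustrative stand-in for the unspecified coding)."""
    index = {f: i for i, f in enumerate(vocab)}
    vec = [0.0] * len(vocab)
    for f in features:
        if f in index:
            vec[index[f]] += 1.0
    return vec

vocab = ["delete", "WeChat", "video", "circle of friends"]  # hypothetical
coded = encode(["delete", "WeChat", "video"], vocab)
# coded == [1.0, 1.0, 1.0, 0.0]
```

Either route produces a fixed-length vector, which is what lets the word pair features and the fourth new word features be averaged together downstream.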
When calculating the semantic matching vector according to the normalized vector and a preset mathematical calculation formula, the vector determining unit 12 takes the average vector of the normalized vectors of the word pair features and of the fourth new word feature as the semantic matching vector.
The similarity unit 13 is configured to determine the similarity between the text to be matched and the target text according to the semantic matching vector and the semantic classification model.
As can be seen, in the semantic matching apparatus of this embodiment, the word pair determining unit 11 determines the word pair features of the text to be matched and the target text from the first word feature of the text to be matched and the second word feature of the target text; then, the vector determining unit 12 converts the word pair features, the first word feature and the second word feature into a semantic matching vector whose format is the same as the format of input data in a preset semantic classification model, so that the similarity unit 13 can determine the similarity between the text to be matched and the target text according to the semantic matching vector and the semantic classification model. In this process, when the semantic matching vector is determined, the word pair features of the two texts are obtained directly, without multiplication between the word feature vectors respectively corresponding to the text to be matched and the target text. This saves the time of multiplication calculation between vectors and simplifies the matching of the text to be matched against the target text; the structure for realizing the semantic matching is therefore simplified, and the training process of that structure is correspondingly simplified.
Referring to fig. 7, in a specific embodiment, the semantic matching device may further include, in addition to the structure shown in fig. 6, the following:
an initial model determining unit 14, configured to determine an initial semantic matching model, where the initial semantic matching model includes a feature module and a semantic classification model, the feature module is configured to determine semantic matching vectors of two texts, and the semantic classification model is configured to determine similarity of the two texts according to the semantic matching vectors determined by the feature module.
A training determination unit 15, configured to determine training samples, the training samples including multiple groups of positive samples and multiple groups of negative samples, where each group of positive samples includes two first training texts whose semantics match and first labeling information indicating that the two first training texts match, and each group of negative samples includes two second training texts whose semantics do not match and second labeling information indicating that the two second training texts do not match.
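A hypothetical shape for such a training set (the texts and labels below are invented for illustration, not taken from the patent): each positive sample carries two matched texts with label 1, each negative sample two unmatched texts with label 0.

```python
# Each sample: (first text, second text, label); label 1 marks a
# semantic match (positive sample), label 0 a mismatch (negative sample).
positive_samples = [
    (["delete", "WeChat", "video"],
     ["delete", "circle of friends", "video"], 1),
]
negative_samples = [
    (["delete", "WeChat", "video"],
     ["open", "music", "player"], 0),
]
training_samples = positive_samples + negative_samples
```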
A similarity determining unit 16, configured to determine, through the initial semantic matching model determined by the initial model determining unit 14, similarities between two training texts in each set of positive samples and negative samples respectively;
Specifically, the similarity determining unit 16 is configured to determine, by the feature module, the semantic matching vectors of the two training texts in a certain group of training samples according to the word features and word pair features respectively corresponding to the two training texts, and to determine, by the semantic classification model, the similarity between the two training texts from their semantic matching vectors.
The adjusting unit 17 is configured to adjust a fixed parameter value in the initial semantic matching model according to the similarity determined by the initial semantic matching model in the similarity determining unit 16, together with the first labeling information and the second labeling information, so as to obtain a final semantic matching model. In this way, the similarity unit 13 determines the similarity between the text to be matched and the target text according to the semantic classification model in the final semantic matching model adjusted by the adjusting unit 17.
The adjusting unit 17 is further configured to stop the adjustment of the fixed parameter value if the adjustment of the fixed parameter value satisfies any one of the following stop conditions: the adjustment times of the fixed parameter values are equal to preset times, and the difference value between the currently adjusted fixed parameter value and the fixed parameter value adjusted last time is smaller than a threshold value.
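The two stop conditions of the adjusting unit can be sketched as a loop over scalar parameter updates; update_fn is a placeholder for whatever gradient-style adjustment the model actually uses, and all names here are assumptions.

```python
def adjust(update_fn, theta0, max_steps=100, eps=1e-4):
    """Stop when the adjustment count reaches the preset number of
    times, or when the change between consecutive parameter values
    drops below the threshold eps."""
    theta = theta0
    for step in range(1, max_steps + 1):
        new_theta = update_fn(theta)
        if abs(new_theta - theta) < eps:
            return new_theta, step  # second stop condition hit
        theta = new_theta
    return theta, max_steps         # first stop condition hit

# Toy update that halves the distance to an optimum of 1.0
final, steps = adjust(lambda th: th + 0.5 * (1.0 - th), 0.0)
```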
The present invention further provides a server, which is schematically shown in fig. 8. The server may vary considerably in configuration and performance, and may include one or more central processing units (CPUs) 20 (e.g., one or more processors), a memory 21, and one or more storage media 22 (e.g., one or more mass storage devices) for storing application programs 221 or data 222. The memory 21 and the storage medium 22 may provide transient or persistent storage. The program stored on the storage medium 22 may include one or more modules (not shown), each of which may include a series of instruction operations for the server. Still further, the central processor 20 may be configured to communicate with the storage medium 22 to execute, on the server, the series of instruction operations in the storage medium 22.
Specifically, the application program 221 stored in the storage medium 22 includes a semantic matching application program, and the program may include the feature obtaining unit 10, the word pair determining unit 11, the vector determining unit 12, the similarity unit 13, the initial model determining unit 14, the training determining unit 15, the similarity determining unit 16, and the adjusting unit 17 in the above semantic matching apparatus, which will not be described herein again. Still further, the central processor 20 may be configured to communicate with the storage medium 22 to perform a series of operations on the server corresponding to the semantically matched application stored in the storage medium 22.
The server may also include one or more power supplies 23, one or more wireless network interfaces 24, and/or one or more operating systems 223, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The steps performed by the semantic matching means in the above method embodiments may be based on the structure of the server shown in fig. 8.
The embodiment of the invention also provides a storage medium, wherein the storage medium stores a plurality of instructions, and the instructions are suitable for being loaded by a processor and executing the semantic matching method executed by the semantic matching device.
The embodiment of the invention also provides a server, which includes a processor and a storage medium, where the processor is configured to implement instructions, and the storage medium is configured to store a plurality of instructions to be loaded by the processor to execute the semantic matching method executed by the semantic matching device.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: read Only Memory (ROM), Random Access Memory (RAM), magnetic or optical disks, and the like.
The semantic matching method, the semantic matching device, and the storage medium provided by the embodiments of the present invention are described in detail above. A specific example is applied herein to explain the principles and embodiments of the present invention, and the description of the embodiments is only intended to help in understanding the method and its core idea. Meanwhile, for a person skilled in the art, there may be variations in the specific embodiments and the application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (12)

1. A semantic matching method, comprising:
acquiring first word features of a text to be matched and second word features of a target text;
determining word pair characteristics of the text to be matched and the target text according to the first word characteristics and the second word characteristics;
determining a semantic matching vector according to the word pair characteristics, the first word characteristics and the second word characteristics, wherein the format of the semantic matching vector is the same as that of input data in a preset semantic classification model;
and determining the similarity between the text to be matched and the target text according to the semantic matching vector and the semantic classification model.
2. The method according to claim 1, wherein the first word feature includes n word features, the second word feature includes m word features, n and m are natural numbers greater than or equal to 1, and determining the word pair feature of the text to be matched and the target text according to the first word feature and the second word feature specifically includes:
combining each word feature in the n word features with a next word feature to form a first new word feature, and combining each word feature in the m word features with a next word feature to form a second new word feature;
and forming a first word pair characteristic by using the first new word characteristic and the second new word characteristic.
3. The method according to claim 1, wherein the first word feature includes n word features, the second word feature includes m word features, and determining the word pair feature of the text to be matched and the target text according to the first word feature and the second word feature specifically includes:
combining each word feature in the n word features with each word feature in the m word features to form a third new word feature; and forming a second word pair characteristic by using the third new word characteristic.
4. The method of claim 1, wherein determining a semantic matching vector based on the word pair features, the first word features, and the second word features comprises:
respectively carrying out normalization processing on the word pair features and fourth new word features to form a normalized vector, wherein the fourth new word features consist of the first word features and the second word features;
and calculating the semantic matching vector according to the normalized vector and a preset mathematical calculation formula.
5. The method according to claim 4, wherein the normalizing the word pair feature and the fourth new word feature to form a normalized vector comprises:
respectively converting the word pair characteristics and the fourth new word characteristics into dimension reduction space vectors; or,
and respectively coding the word pair characteristics and the fourth new word characteristics to obtain a coded vector.
6. The method according to claim 4, wherein calculating the semantic matching vector according to the normalized vector and a preset mathematical formula comprises:
and respectively taking the normalized vector of the word pair characteristics and the average vector of the normalized vectors of the fourth new word characteristics as the semantic matching vectors.
7. The method of any of claims 1 to 6, further comprising:
determining an initial semantic matching model, wherein the initial semantic matching model comprises a feature module and a semantic classification model, the feature module is used for determining semantic matching vectors of two texts, and the semantic classification model is used for determining the similarity of the two texts according to the semantic matching vectors determined by the feature module;
determining training samples, the training samples comprising: multiple groups of positive samples and multiple groups of negative samples, wherein each group of positive samples comprises two first training texts whose semantics match and first labeling information indicating that the two first training texts match, and each group of negative samples comprises two second training texts whose semantics do not match and second labeling information indicating that the two second training texts do not match;
respectively determining the similarity between two training texts in each group of positive samples and negative samples through the initial semantic matching model;
and adjusting a fixed parameter value in the initial semantic matching model according to the similarity determined by the initial semantic matching model and the first labeling information and the second labeling information to obtain a final semantic matching model.
8. The method of claim 7, wherein determining a similarity between two training texts in a set of training samples by the initial semantic matching model comprises:
the feature module determines semantic matching vectors of the two training texts according to the word features and word pair features respectively corresponding to the two training texts in the group of training samples; and the semantic classification model determines the similarity between the two training texts according to the semantic matching vectors of the two training texts.
9. The method of claim 7, wherein the adjustment of the fixed parameter value is stopped if the adjustment of the fixed parameter value satisfies any of the following stop conditions:
the adjustment times of the fixed parameter values are equal to preset times, and the difference value between the currently adjusted fixed parameter value and the fixed parameter value adjusted last time is smaller than a threshold value.
10. A semantic matching apparatus, comprising:
the characteristic acquisition unit is used for acquiring first word characteristics of a text to be matched and second word characteristics of a target text;
the word pair determining unit is used for determining the word pair features of the text to be matched and the target text according to the first word features and the second word features;
the vector determining unit is used for determining a semantic matching vector according to the word pair characteristics, the first word characteristics and the second word characteristics, and the format of the semantic matching vector is the same as that of input data in a preset semantic classification model;
and the similarity unit is used for determining the similarity between the text to be matched and the target text according to the semantic matching vector and the semantic classification model.
11. A storage medium storing a plurality of instructions adapted to be loaded by a processor and to perform the semantic matching method according to any one of claims 1 to 9.
12. A server comprising a processor and a storage medium, the processor configured to implement instructions;
the storage medium is configured to store a plurality of instructions for loading by a processor and executing the semantic matching method according to any one of claims 1 to 9.
CN201910160971.5A 2019-03-04 2019-03-04 Semantic matching method, device and storage medium Active CN109918663B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910160971.5A CN109918663B (en) 2019-03-04 2019-03-04 Semantic matching method, device and storage medium


Publications (2)

Publication Number Publication Date
CN109918663A true CN109918663A (en) 2019-06-21
CN109918663B CN109918663B (en) 2021-01-08

Family

ID=66962991

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910160971.5A Active CN109918663B (en) 2019-03-04 2019-03-04 Semantic matching method, device and storage medium

Country Status (1)

Country Link
CN (1) CN109918663B (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101251841A (en) * 2007-05-17 2008-08-27 华东师范大学 Method for establishing and searching feature matrix of Web document based on semantics
CN102332012A (en) * 2011-09-13 2012-01-25 南方报业传媒集团 Chinese text sorting method based on correlation study between sorts
CN104375989A (en) * 2014-12-01 2015-02-25 国家电网公司 Natural language text keyword association network construction system
CN105138523A (en) * 2014-05-30 2015-12-09 富士通株式会社 Method and device for determining semantic keywords in text
CN105389588A (en) * 2015-11-04 2016-03-09 上海交通大学 Multi-semantic-codebook-based image feature representation method
KR20180065184A (en) * 2016-12-07 2018-06-18 동국대학교 산학협력단 Method for measuring semantic fitness between word-color, and apparatus thereof
CN108846126A (en) * 2018-06-29 2018-11-20 北京百度网讯科技有限公司 Generation, question and answer mode polymerization, device and the equipment of related question polymerization model


Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020258506A1 (en) * 2019-06-27 2020-12-30 平安科技(深圳)有限公司 Text information matching degree detection method and apparatus, computer device and storage medium
CN110287910A (en) * 2019-06-28 2019-09-27 北京百度网讯科技有限公司 For obtaining the method and device of information
CN110390107A (en) * 2019-07-26 2019-10-29 腾讯科技(深圳)有限公司 Hereafter relationship detection method, device and computer equipment based on artificial intelligence
CN110929526A (en) * 2019-10-28 2020-03-27 深圳绿米联创科技有限公司 Sample generation method and device and electronic equipment
CN110929526B (en) * 2019-10-28 2024-06-04 深圳绿米联创科技有限公司 Sample generation method and device and electronic equipment
CN111241298A (en) * 2020-01-08 2020-06-05 腾讯科技(深圳)有限公司 Information processing method, apparatus and computer readable storage medium
CN111241298B (en) * 2020-01-08 2023-10-10 腾讯科技(深圳)有限公司 Information processing method, apparatus, and computer-readable storage medium
CN113536803A (en) * 2020-04-13 2021-10-22 京东方科技集团股份有限公司 Text information processing device and method, computer equipment and readable storage medium
CN111581952A (en) * 2020-05-20 2020-08-25 长沙理工大学 Large-scale replaceable word bank construction method for natural language information hiding
CN111581952B (en) * 2020-05-20 2023-10-03 长沙理工大学 Large-scale replaceable word library construction method for natural language information hiding
CN112241626B (en) * 2020-10-14 2023-07-07 网易(杭州)网络有限公司 Semantic matching and semantic similarity model training method and device
CN112241626A (en) * 2020-10-14 2021-01-19 网易(杭州)网络有限公司 Semantic matching and semantic similarity model training method and device
CN112528677B (en) * 2020-12-22 2022-03-11 北京百度网讯科技有限公司 Training method and device of semantic vector extraction model and electronic equipment
CN112528677A (en) * 2020-12-22 2021-03-19 北京百度网讯科技有限公司 Training method and device of semantic vector extraction model and electronic equipment
CN113033216B (en) * 2021-03-03 2024-05-28 东软集团股份有限公司 Text preprocessing method and device, storage medium and electronic equipment
CN113033216A (en) * 2021-03-03 2021-06-25 东软集团股份有限公司 Text preprocessing method and device, storage medium and electronic equipment
CN113011155B (en) * 2021-03-16 2023-09-05 北京百度网讯科技有限公司 Method, apparatus, device and storage medium for text matching
CN113011155A (en) * 2021-03-16 2021-06-22 北京百度网讯科技有限公司 Method, apparatus, device, storage medium and program product for text matching
US11989962B2 (en) 2021-03-16 2024-05-21 Beijing Baidu Netcom Science Technology Co., Ltd. Method, apparatus, device, storage medium and program product of performing text matching
JP2023007367A (en) * 2021-06-30 2023-01-18 ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッド Method of training semantic representation model, apparatus, device, and storage medium
JP7358698B2 (en) 2021-06-30 2023-10-11 ベイジン バイドゥ ネットコム サイエンス テクノロジー カンパニー リミテッド Training method, apparatus, device and storage medium for word meaning representation model
CN114594923A (en) * 2022-02-16 2022-06-07 北京梧桐车联科技有限责任公司 Control method, device and equipment of vehicle-mounted terminal and storage medium
CN115631746A (en) * 2022-12-20 2023-01-20 深圳元象信息科技有限公司 Hot word recognition method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN109918663B (en) 2021-01-08


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant