CN111832305A - User intention identification method, device, server and medium - Google Patents

User intention identification method, device, server and medium Download PDF

Info

Publication number
CN111832305A
CN111832305A
Authority
CN
China
Prior art keywords
sentence
sentences
user
nodes
candidate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010632031.4A
Other languages
Chinese (zh)
Other versions
CN111832305B (en)
Inventor
申众
张又亮
张崇宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Xiaopeng Internet of Vehicle Technology Co Ltd
Original Assignee
Guangzhou Xiaopeng Internet of Vehicle Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Xiaopeng Internet of Vehicle Technology Co Ltd filed Critical Guangzhou Xiaopeng Internet of Vehicle Technology Co Ltd
Priority to CN202010632031.4A priority Critical patent/CN111832305B/en
Publication of CN111832305A publication Critical patent/CN111832305A/en
Application granted granted Critical
Publication of CN111832305B publication Critical patent/CN111832305B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/289Phrasal analysis, e.g. finite state techniques or chunking
    • G06F40/295Named entity recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • G06F40/211Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/30Semantic analysis
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/22Procedures used during a speech recognition process, e.g. man-machine dialogue
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The embodiment of the invention provides a method, a device, a server and a medium for identifying user intentions, wherein the method comprises the following steps: acquiring a user query sentence and acquiring a text graph; selecting candidate reference sentences from the reference sentences recorded in the sentence nodes; determining a weight score of the candidate reference sentence according to the weight value of the target reference keyword recorded in the edge to the candidate reference sentence; determining the characteristic vector of the user query statement, and determining the similarity between the candidate reference statement and the user query statement by adopting the characteristic vector of the user query statement and the characteristic vector of the candidate reference statement recorded in the sentence node; selecting a target reference sentence from the candidate reference sentences according to the weight scores of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentences; and taking the user intention of the target reference sentence as the user intention of the user query sentence. The user intention is deduced through the sentence nodes and the word nodes of the text graph, so that the recognition accuracy can be improved.

Description

User intention identification method, device, server and medium
Technical Field
The present invention relates to the field of natural language processing, and in particular, to a user intention identifying method, a user intention identifying apparatus, a server, and a computer-readable storage medium.
Background
With the development of intelligent automobiles, users are more and more accustomed to using vehicle-mounted voice systems in daily car usage scenarios. The vehicle-mounted voice system can perform voice intention recognition on the voice of the user to obtain the user intention, and performs corresponding feedback according to the user intention.
Traditional voice intention recognition is usually performed in a data-driven manner through text classification. This approach depends on labelled, regular data, places high requirements on data quality, and requires the recognized text to be as regular as possible and to accord with people's natural habits of expression. It needs support from high-quality labelled data and a continuous supply of new data, and the cost of extending it to new intentions is high.
In an actual scene, voice intention recognition often faces problems such as speech recognition errors and sentence-break errors caused by mixed-in noise; if noise data is mixed in, false recalls occur, so that the user intention cannot be recognized accurately.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide a user intention identifying method, a user intention identifying apparatus, a server, and a computer-readable storage medium that overcome or at least partially solve the above problems.
In order to solve the above problem, an embodiment of the present invention discloses a method for identifying a user intention, including:
acquiring a user query statement and acquiring a text graph; the text graph comprises sentence nodes, word nodes and edges connecting the sentence nodes and the word nodes; the sentence node records a reference sentence marked with user intention and a characteristic vector of the reference sentence; the word nodes record reference keywords extracted from the reference sentences; the edge records a weight value representing the importance degree of the reference keyword to the reference sentence;
selecting candidate reference sentences from the reference sentences recorded in the sentence nodes; the candidate reference sentences contain target reference keywords that match query keywords extracted from the user query sentences;
determining a weight score of the candidate reference sentence according to a weight value of the target reference keyword recorded in the edge for the candidate reference sentence;
determining a feature vector of the user query statement, and determining the similarity between the candidate reference statement and the user query statement by using the feature vector of the user query statement and the feature vector of the candidate reference statement recorded in the sentence node;
selecting a target reference sentence from the candidate reference sentences according to the weight scores of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentences;
and taking the user intention of the target reference sentence as the user intention of the user query sentence.
Optionally, the text graph is generated by:
acquiring a reference sentence marked with a user intention, and extracting a reference keyword from the reference sentence;
inputting the reference sentence into a pre-training model, and obtaining a feature vector output by the pre-training model;
adopting the reference sentence to construct a corresponding sentence node, and recording the reference sentence, the user intention of the reference sentence and the feature vector in the sentence node;
calculating a weight value representing the importance degree of the reference keyword to the reference sentence;
and constructing corresponding word nodes by adopting the reference keywords, establishing edges between the word nodes and the corresponding sentence nodes, and setting the weight values of the edges as the weight values representing the importance degrees of the reference keywords to the reference sentences.
Optionally, the selecting a candidate reference sentence from the reference sentences recorded in the sentence nodes includes:
determining target word nodes recorded with target reference keywords matched with the query keywords from the word nodes of the text graph;
and selecting candidate reference sentences from the reference sentences recorded by the sentence nodes corresponding to the target word nodes.
Optionally, the determining the feature vector of the user query statement includes:
and inputting the candidate reference sentences into a pre-training model, and obtaining the feature vectors output by the pre-training model. Optionally, the selecting a target reference sentence from the candidate reference sentences according to the weight scores of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentence includes:
calculating a first score by adopting the weight score of the candidate reference sentence and the similarity between the candidate reference sentence and the user query sentence;
sorting by using the first score;
and selecting a target reference statement according to the sorting result.
Optionally, the taking the user intention of the target reference sentence as the user intention of the user query sentence includes:
calculating a weight value representing the importance degree of the query keyword to the user query statement;
calculating a second score by adopting the weighted value representing the importance degree of the query keyword to the user query statement and the target reference statement;
judging whether the second score is larger than a preset score threshold value or not;
and if the second score is larger than the preset score threshold value, taking the user intention of the target reference sentence as the user intention of the user query sentence.
The embodiment of the invention also discloses a user intention identification device, which comprises:
the acquisition module is used for acquiring a user query sentence and acquiring a text graph; the text graph comprises sentence nodes, word nodes and edges connecting the sentence nodes and the word nodes; the sentence node records a reference sentence marked with user intention and a characteristic vector of the reference sentence; the word nodes record reference keywords extracted from the reference sentences; the edge records a weight value representing the importance degree of the reference keyword to the reference sentence;
a candidate reference sentence selection module for selecting a candidate reference sentence from the reference sentences recorded in the sentence nodes, the candidate reference sentence including a target reference keyword matched with a query keyword extracted from the user query sentence;
a weight score determining module, configured to determine a weight score of the candidate reference sentence according to a weight value of the target reference keyword recorded in the edge for the candidate reference sentence;
a similarity determination module, configured to determine a feature vector of the user query statement, and determine a similarity between the candidate reference statement and the user query statement by using the feature vector of the user query statement and the feature vector of the candidate reference statement recorded in the sentence node;
the target reference sentence selection module is used for selecting a target reference sentence from the candidate reference sentences according to the weight scores of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentences;
a user intention determining module, configured to use the user intention of the target reference sentence as the user intention of the user query sentence.
Optionally, the text graph is generated by the following modules:
the system comprises a reference information acquisition module, a keyword extraction module and a keyword extraction module, wherein the reference information acquisition module is used for acquiring a reference sentence marked with a user intention and extracting a reference keyword from the reference sentence;
the characteristic vector obtaining module is used for inputting the reference sentence into a pre-training model and obtaining a characteristic vector output by the pre-training model;
the first construction module is used for constructing corresponding sentence nodes by adopting the reference sentences, and recording the reference sentences, the user intentions of the reference sentences and the characteristic vectors in the sentence nodes;
the weighted value calculating module is used for calculating weighted values representing the importance degree of the reference keywords to the reference sentences;
and the second construction module is used for constructing corresponding word nodes by adopting the reference keywords, establishing edges between the word nodes and the corresponding sentence nodes, and setting the weight values of the edges as the weight values representing the importance degrees of the reference keywords to the reference sentences.
Optionally, the candidate reference sentence selecting module includes:
the target word node determining submodule is used for determining a target word node recorded with a target reference keyword matched with the query keyword from the word nodes of the text graph;
and the candidate reference sentence selection submodule is used for selecting candidate reference sentences from the reference sentences recorded by the sentence nodes corresponding to the target word nodes.
Optionally, the similarity determining module includes:
and the feature vector obtaining submodule is used for inputting the candidate reference sentences into a pre-training model and obtaining feature vectors output by the pre-training model.
Optionally, the target reference sentence selecting module includes:
a first score calculating sub-module, configured to calculate a first score using the weight score of the candidate reference sentence and a similarity between the candidate reference sentence and the user query sentence;
a sorting submodule for sorting by using the first score;
and the target reference sentence selecting submodule is used for selecting the target reference sentences according to the sorting result.
Optionally, the user intent determination module comprises:
the weight value calculation sub-module is used for calculating a weight value representing the importance degree of the query keyword to the user query statement;
a second score calculating sub-module, configured to calculate a second score using the weight value representing the importance degree of the query keyword to the user query statement and the target reference statement;
the second score judging submodule is used for judging whether the second score is larger than a preset score threshold value or not;
and the user intention determining submodule is used for taking the user intention of the target reference sentence as the user intention of the user query sentence if the second score is larger than the preset score threshold.
The embodiment of the invention also discloses a server, which comprises: a processor, a memory and a computer program stored on the memory and capable of running on the processor, which computer program, when executed by the processor, carries out the steps of any of the user intent recognition methods.
The embodiment of the invention also discloses a computer readable storage medium, wherein a computer program is stored on the computer readable storage medium, and the computer program is used for realizing any step of the user intention identification method when being executed by a processor.
The embodiment of the invention has the following advantages:
according to the embodiment of the invention, the reference sentence, the reference keyword extracted from the reference sentence, the user intention corresponding to the reference sentence, the feature vector of the reference sentence and the weight value representing the importance degree of the reference keyword to the reference sentence are recorded in the form of the text image. Selecting candidate reference sentences from the reference sentences recorded in the sentence nodes; determining a weight score of the candidate reference sentence according to the weight value of the target reference keyword recorded in the edge to the candidate reference sentence; determining the similarity between the candidate reference statement and the user query statement by adopting the feature vector of the user query statement and the feature vector of the candidate reference statement recorded in the sentence node; then, selecting a target reference sentence from the candidate reference sentences according to the weight scores of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentences; and finally, taking the user intention of the target reference sentence as the user intention of the user query sentence. By using the characteristics of two dimensions of sentence nodes and word nodes in the text graph, when the target reference sentence is selected from the candidate reference sentences, the accuracy of selecting the target reference sentence can be improved, so that the identification accuracy of the user intention is improved. The cost for constructing the text graph is not high, the reference sentence marked with the user intention can be used, and the method has certain generalization reasoning capability on new data. As long as a new user query sentence can extract a certain entity, the recommendation intention understanding can be carried out according to the existing relation in the text graph. Corresponding sentence nodes and word nodes can be directly added into the text graph when a new intention is expanded, and the expansion is easy.
Drawings
FIG. 1 is a flow chart of the steps of one embodiment of a method for identifying user intent of the present invention;
FIG. 2 is a flowchart of the steps for generating a text diagram in an embodiment of the present invention;
FIG. 3 is a schematic illustration of a text diagram in an embodiment of the invention;
fig. 4 is a block diagram of an embodiment of a user intention recognition apparatus according to the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
For irregular user query sentences obtained through speech recognition, an intention-understanding scheme designed as a classification model is difficult to bring online in a data-driven manner. The embodiment of the invention provides a method for understanding the intention of a user query sentence based on words and on regular sentences marked with user intentions.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a method for identifying a user intention according to the present invention is shown, where the method specifically includes the following steps:
step 101, acquiring a user query statement and acquiring a text graph; the text graph comprises sentence nodes, word nodes and edges connecting the sentence nodes and the word nodes; the sentence node records a reference sentence marked with user intention and a characteristic vector of the reference sentence; the word nodes record reference keywords extracted from the reference sentences; the edge records a weight value representing the importance degree of the reference keyword to the reference sentence.
The user speech may be recognized by automatic speech recognition (ASR) to obtain the user query statement. The user query statement is text data; because the automatic speech recognition may be subject to interference, noise data may exist in the user query statement.
In the embodiment of the invention, the pre-collected reference sentences which accord with the natural expression habits and rules of human beings can be adopted for intention identification to obtain the user intention. And generating a text graph by using the reference sentence marked with the user intention. The text graph is graph structure data and comprises sentence nodes, word nodes and edges connecting the sentence nodes and the corresponding word nodes.
FIG. 2 is a flowchart illustrating steps of generating a text diagram according to an embodiment of the present invention; in an embodiment of the present invention, the step of generating the text diagram may include:
step 201, obtaining a reference sentence marked with a user intention, and extracting a reference keyword from the reference sentence.
For example, the reference sentences include "turn on the air conditioner", "turn off the air conditioner" and "turn on the front row air conditioner". The reference keywords can be extracted by keyword extraction, entity extraction, rule-template extraction and the like, for example "on", "air conditioner", "off" and "front row". The reference keywords may also include other words that are important for identifying sentence features, such as question words like "do", which were often treated as stop words in the past but are used in the embodiment of the present invention for question recognition, to distinguish specific user intentions. Likewise, adverbs expressing negation, such as "don't" and "no need", may be used as reference keywords to distinguish specific user intentions.
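As a concrete illustration of this extraction step, the following is a minimal sketch in Python; the stop-word list, the kept question/negation cues and the whitespace tokenizer are assumptions for illustration only (a real system could instead use keyword extraction, entity extraction or rule templates as described above, and a proper segmenter for Chinese).

```python
# Minimal sketch of reference-keyword extraction (illustrative only).
STOP_WORDS = {"the", "a", "please", "to"}      # assumed stop-word list
KEEP_ALWAYS = {"do", "don't", "no need"}       # question/negation cues kept on purpose

def extract_keywords(sentence: str) -> list[str]:
    """Split a sentence into tokens and keep the ones useful for intent matching."""
    tokens = sentence.lower().split()          # a real system would use a segmenter for Chinese
    return [t for t in tokens if t in KEEP_ALWAYS or t not in STOP_WORDS]

print(extract_keywords("Please turn on the front row air conditioner"))
# -> ['turn', 'on', 'front', 'row', 'air', 'conditioner']
```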
Step 202, inputting the reference sentence into a pre-training model, and obtaining a feature vector output by the pre-training model.
The feature vector of the sentence is usually used for calculating the similarity of the two sentences, and in the embodiment of the invention, the feature vector of the reference sentence and the feature vector of the user query sentence can be used for calculating the similarity of the two sentences, and the similarity between the sentences is used as a basis for selecting the target reference sentence. Wherein the feature vectors of the reference sentences can be obtained by pre-training the model.
In one example, a pre-trained BERT (Bidirectional Encoder Representations from Transformers) model may be employed to obtain the feature vectors of sentences.
The BERT model is pre-trained on two tasks: masking a few words in a sentence and then predicting the masked words, and judging whether two sentences are in a contextual relationship. The two training tasks are performed simultaneously. After training on a large amount of corpus data, the pre-trained BERT model can extract the features of words well in different sentence contexts.
A sentence can be fed into the BERT model, and the hidden vectors output after passing through the 12-layer encoder can be used to approximately represent the sentence feature vector.
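The following sketch shows one way to obtain such an approximate sentence feature vector, assuming the Hugging Face transformers library and the bert-base-chinese checkpoint; the patent does not name a specific toolkit, and mean pooling over the last hidden layer is one choice among several (the [CLS] vector is another).

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = BertModel.from_pretrained("bert-base-chinese")
model.eval()

def sentence_vector(sentence: str) -> torch.Tensor:
    """Approximate a sentence feature vector from the last encoder layer's hidden states."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Mean-pool the last hidden layer; the [CLS] vector could be used instead.
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)

vec = sentence_vector("打开空调")   # "turn on the air conditioner"
print(vec.shape)                    # torch.Size([768])
```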
Step 203, constructing corresponding sentence nodes by using the reference sentences, and recording the reference sentences, the user intentions of the reference sentences and the feature vectors in the sentence nodes.
Step 204, calculating a weight value representing the importance degree of the reference keyword to the reference sentence.
A reference keyword is important to a reference sentence to the extent that it carries information about the sentence: a reference keyword that appears frequently in the set of reference sentences may be important, whereas a reference keyword that occurs rarely in the set of reference sentences (for example, only once in the corpus) carries little information and may even be "noise". The larger the weight value representing the importance degree of the reference keyword to the reference sentence, the greater the contribution of the reference keyword to the meaning representation of the reference sentence.
In one example, the weight value representing the importance degree of the reference keyword to the reference sentence may be represented by term frequency-inverse document frequency (TF-IDF), or may be calculated by another, more elaborate algorithm so that the relation from the reference keyword to the reference sentence is more accurate.
When the weight value is expressed by TF-IDF, TF may be the ratio of the number of times the reference keyword appears in the reference sentence to the total number of words in the reference sentence, and IDF may be the logarithm of the ratio of the total number of reference sentences to the number of reference sentences containing the reference keyword plus one.
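A small sketch of this TF-IDF weight, with an illustrative toy corpus; the tokenisation and example sentences are assumptions.

```python
import math

def tfidf(keyword: str, sentence_tokens: list[str], corpus: list[list[str]]) -> float:
    # TF: occurrences of the keyword in the sentence / total words in the sentence.
    tf = sentence_tokens.count(keyword) / len(sentence_tokens)
    # IDF: log(total reference sentences / (sentences containing the keyword + 1)).
    containing = sum(1 for s in corpus if keyword in s)
    idf = math.log(len(corpus) / (containing + 1))
    return tf * idf

corpus = [["turn", "on", "air", "conditioner"],
          ["turn", "off", "air", "conditioner"],
          ["turn", "on", "front", "row", "air", "conditioner"]]
print(tfidf("front", corpus[2], corpus))   # "front" is rare in the corpus, so it gets a higher weight
```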
Step 205, building a corresponding word node by using the reference keyword, building an edge between the word node and the corresponding sentence node, and setting a weight value of the edge as a weight value representing the importance degree of the reference keyword to the reference sentence.
Fig. 3 is a schematic diagram of a text diagram in an embodiment of the invention. In the text graph, a corresponding sentence node is constructed by adopting a reference sentence marked with a user intention; as shown in fig. 3, the text diagram includes: sentence nodes of "turn on air conditioner", "turn off air conditioner", "turn on front row air conditioner". The word nodes are constructed by using the reference keywords extracted from the reference sentences, and as shown in fig. 3, the text diagram includes word nodes of "open", "close", "air conditioner", "front row", "do". And establishing an edge between the word node and a sentence node containing the reference keyword of the word node, wherein the edge records a weight value representing the importance degree of the reference keyword to the reference sentence.
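For illustration, the text graph of Fig. 3 could be represented as follows, here using the networkx library; the intent labels, keyword lists and edge weights are illustrative placeholders rather than values taken from the patent.

```python
import networkx as nx

# Reference sentences labelled with an (assumed) intent and their reference keywords.
reference_sentences = {
    "turn on the air conditioner": ("intent.ac_on", ["turn on", "air conditioner"]),
    "turn off the air conditioner": ("intent.ac_off", ["turn off", "air conditioner"]),
    "turn on the front row air conditioner": ("intent.front_ac_on",
                                              ["turn on", "front row", "air conditioner"]),
}
illustrative_weight = {"turn on": 0.12, "turn off": 0.20, "air conditioner": 0.05, "front row": 0.25}

graph = nx.Graph()
for sentence, (intent, keywords) in reference_sentences.items():
    # Sentence node: records the reference sentence, its labelled intent and,
    # in a full system, its BERT feature vector.
    graph.add_node(("sent", sentence), intent=intent, vector=None)
    for kw in keywords:
        graph.add_node(("word", kw))   # word node for the reference keyword
        # Edge weight: importance of the keyword to this reference sentence (e.g. TF-IDF).
        graph.add_edge(("word", kw), ("sent", sentence), weight=illustrative_weight[kw])
```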
102, selecting candidate reference sentences from the reference sentences recorded in the sentence nodes; the candidate reference sentences contain target reference keywords that match query keywords extracted from the user query sentence.
The query keywords can be extracted from the user query sentence, likewise by keyword extraction, entity extraction, rule-template extraction and the like.
Then, from the reference keywords, a target reference keyword matching the query keyword is searched. The target reference keywords matching the query keyword may be the same words or may be words generalized to have the same meaning.
And finally, selecting candidate reference sentences containing the target reference keywords from the set of reference sentences.
In an embodiment of the present invention, the step of selecting a candidate reference sentence from the reference sentences recorded in the sentence nodes may include: determining target word nodes recorded with target reference keywords matched with the query keywords from the word nodes of the text graph; and selecting candidate reference sentences from the reference sentences recorded by the sentence nodes corresponding to the target word nodes.
Specifically, the user query sentence may contain one or more query keywords, and the text graph may contain one or more target reference keywords matched with the query keywords. The sentence nodes corresponding to a target word node are the sentence nodes connected to that target word node by edges.
In the embodiment of the present invention, the candidate reference sentences may be selected according to the weight values of the target reference keywords in the reference sentences and/or the number of the target reference keywords contained in the reference sentences.
Step 103, determining a weight score of the candidate reference sentence according to the weight value of the target reference keyword recorded in the edge to the candidate reference sentence.
The recorded weight value of the target reference keyword for the candidate reference sentence is extracted from the edge between the target reference keyword and the candidate reference sentence.
In the embodiment of the invention, the weight value of the importance degree of the reference keyword to the reference statement is used as a basis for selecting the target reference statement.
The weight score of the candidate reference sentence may be calculated according to the weight value of the target reference keyword included in the candidate reference sentence. For example, the weight score of the candidate reference sentence may be a sum of weight values of the target reference keywords included in the candidate reference sentence.
The higher the weight score, the more closely the candidate reference sentence matches the query keywords extracted from the user query sentence.
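A sketch of steps 102 and 103 together: matching query keywords to word nodes, collecting the connected reference sentences as candidates, and summing the matched edge weights into a weight score. It assumes a graph structured like the Fig. 3 sketch above and uses exact keyword matching in place of the generalized matching mentioned earlier.

```python
from collections import defaultdict

def candidate_weight_scores(graph, query_keywords):
    """Return {sentence node: weight score} for reference sentences sharing a keyword with the query."""
    scores = defaultdict(float)
    for kw in query_keywords:
        word_node = ("word", kw)
        if word_node not in graph:
            continue                                   # no matching target word node
        for sent_node in graph.neighbors(word_node):   # sentence nodes connected by an edge
            scores[sent_node] += graph[word_node][sent_node]["weight"]
    return dict(scores)

# Using the `graph` object from the text-graph sketch above.
weight_scores = candidate_weight_scores(graph, ["turn on", "air conditioner"])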
And 104, determining the feature vector of the user query statement, and determining the similarity between the candidate reference statement and the user query statement by adopting the feature vector of the user query statement and the feature vector of the candidate reference statement recorded in the sentence node.
Specifically, the candidate reference sentences may be input into a pre-training model, and feature vectors output by the pre-training model may be obtained. In one example, the pre-trained model may be a Bert model.
The similarity between sentences is represented by the similarity between their feature vectors, and the cosine similarity function Cosine can be used to calculate the similarity between two feature vectors.
For example, suppose there are three sentences A, B and C. Cosine(feature vector of A, feature vector of B) > 0.9 does not by itself establish that A and B are similar; however, if Cosine(feature vector of A, feature vector of B) > Cosine(feature vector of A, feature vector of C), then with a high probability A and B can be considered more similar than A and C.
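A minimal cosine-similarity sketch; the vectors shown are random placeholders standing in for the BERT vector of the user query statement and the vector stored in a candidate's sentence node.

```python
import torch
import torch.nn.functional as F

def cosine(u: torch.Tensor, v: torch.Tensor) -> float:
    """Cosine similarity between two feature vectors."""
    return float(F.cosine_similarity(u, v, dim=0))

# Illustrative vectors; in the method they would be the query-sentence vector and a candidate's stored vector.
a, b = torch.randn(768), torch.randn(768)
print(cosine(a, b))
```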
And 105, selecting a target reference sentence from the candidate reference sentences according to the weight scores of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentences.
The weight score reflects, from the dimension of words, how well the candidate reference sentence matches the query keywords extracted from the user query sentence; the similarity reflects, from the dimension of sentences, how well the candidate reference sentence matches the user query sentence. The target reference sentence is selected from the candidate reference sentences according to the features of these two dimensions, words and sentences.
In the prior art, the target sentence is usually matched using features of a single dimension only; when the user query sentence is irregular, the target sentence is easily mismatched.
For example, for the user query statement "call air conditioner to twenty-five degrees", the query keywords extracted from it may include "call", "air conditioner", "on" and "twenty-five degrees"; according to these query keywords, "turn the air conditioner on to twenty-five degrees" or "turn the copilot air conditioner on to twenty-five degrees" may be recalled.
If ranking is done only according to the dimension of words, the score of "turn the air conditioner on to twenty-five degrees" will be very high, because the sentence is short and completely matches the query keywords in the user query sentence, while "turn the copilot air conditioner on to twenty-five degrees" may rank behind because the sentence is longer. That is, the target reference sentence may be "turn the air conditioner on to twenty-five degrees".
However, "call" may actually be the result of "copilot" being misrecognized by ASR. If ranking is done according to both the word dimension and the sentence dimension, "turn the copilot air conditioner on to twenty-five degrees" is likely to rank above "turn the air conditioner on to twenty-five degrees"; that is, the target reference sentence may be "turn the copilot air conditioner on to twenty-five degrees".
In an embodiment of the present invention, the step of selecting a target reference sentence from the candidate reference sentences according to the weight scores of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentence may include: calculating a first score by adopting the weight score of the candidate reference sentence and the similarity between the candidate reference sentence and the user query sentence; sorting by using the first score; and selecting a target reference statement according to the sorting result.
In one example, the first score of a candidate reference sentence may be a weighted sum of its weight score and its similarity. The candidate reference sentences may be sorted by the first score from large to small, and the top-ranked candidate reference sentence may be selected as the target reference sentence.
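A sketch of this ranking step; the weighting coefficients alpha and beta and the numeric scores are assumptions, since the embodiment only states that the first score combines the weight score and the similarity.

```python
def rank_candidates(weight_scores, similarities, alpha=0.5, beta=0.5):
    """First score = weighted sum of weight score and similarity; sort from large to small."""
    first_scores = {s: alpha * weight_scores[s] + beta * similarities[s] for s in weight_scores}
    return sorted(first_scores.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative numbers only.
weight_scores = {"turn the air conditioner on to twenty-five degrees": 0.42,
                 "turn the copilot air conditioner on to twenty-five degrees": 0.35}
similarities  = {"turn the air conditioner on to twenty-five degrees": 0.78,
                 "turn the copilot air conditioner on to twenty-five degrees": 0.88}
target_sentence, first_score = rank_candidates(weight_scores, similarities)[0]
```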
And 106, taking the user intention of the target reference sentence as the user intention of the user query sentence.
The user intention marked on the target reference sentence is used as the user intention of the user query sentence.
In an embodiment of the present invention, the step of using the user intention of the target reference sentence as the user intention of the user query sentence may include the following sub-steps:
a substep S11 of calculating a weight value representing the degree of importance of the query keyword to the user query sentence;
the weight values representing the degree of importance of the query keyword to the user query sentence are calculated in the same manner as the weight values representing the degree of importance of the reference keyword to the reference sentence. For example, the method of TF-IDF is used to represent the weight of a word to sentence.
The weighted value of the importance degree of the keyword to the user query statement can be represented by TF/IDF, wherein TF can be the ratio of the number of times of occurrence of the query keyword in the reference statement to the total number of words of the reference statement, and IDF can be the ratio of the total number of all reference statements in the text image to the number value of the reference statements containing the query keyword after one is added, and the value is logarithmic.
Substep S12, calculating a second score by using the weighted value representing the importance degree of the query keyword to the user query sentence and the target reference sentence;
Specifically, the second score is calculated using the weight values representing the importance degree of the query keywords to the user query statement and the first score of the target reference statement.
In one example, the second score is the sum, over the query keywords, of the product of each query keyword's weight value and the first score of the target reference sentence. For example, if the user query statement is "turn on the air conditioner", the second score of the user query statement may be: the product of the weight value of "turn on" and the first score of the target reference sentence, plus the product of the weight value of "air conditioner" and the first score of the target reference sentence.
A substep S13, determining whether the second score is greater than a preset score threshold;
and a substep S14, if the second score is greater than the preset score threshold, taking the user intention of the target reference sentence as the user intention of the user query sentence.
If the second score is less than or equal to the preset score threshold, it may be considered that the target reference sentence does not match the user query sentence well enough, that is, there is no reference sentence that sufficiently matches the user query sentence; in this case, no user intention may be output for the user query sentence.
The user intent for the user query statement is further output only if the second score is greater than a preset score threshold.
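Putting sub-steps S11 to S14 together, a sketch of the second-score check might look as follows; the threshold value and the keyword weights are illustrative assumptions.

```python
SCORE_THRESHOLD = 0.5   # preset score threshold (assumed value)

def second_score(query_keyword_weights: dict, target_first_score: float) -> float:
    """Sum over query keywords of (keyword weight for the query) x (first score of the target sentence)."""
    return sum(w * target_first_score for w in query_keyword_weights.values())

def resolve_intent(query_keyword_weights: dict, target_first_score: float, target_intent: str):
    if second_score(query_keyword_weights, target_first_score) > SCORE_THRESHOLD:
        return target_intent      # user intention of the target reference sentence
    return None                   # no reference sentence matches the query well enough

# Example: query "turn on the air conditioner" with assumed TF-IDF weights for its keywords.
weights = {"turn on": 0.4, "air conditioner": 0.3}
print(resolve_intent(weights, target_first_score=0.9, target_intent="intent.ac_on"))
```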
According to the embodiment of the invention, the reference sentence, the reference keyword extracted from the reference sentence, the user intention corresponding to the reference sentence, the feature vector of the reference sentence and the weight value representing the importance degree of the reference keyword to the reference sentence are recorded in the form of a text graph. Candidate reference sentences are selected from the reference sentences recorded in the sentence nodes; the weight score of each candidate reference sentence is determined according to the weight value, recorded in the edge, of the target reference keyword for the candidate reference sentence; the similarity between the candidate reference sentence and the user query sentence is determined by using the feature vector of the user query sentence and the feature vector of the candidate reference sentence recorded in the sentence node; then a target reference sentence is selected from the candidate reference sentences according to the weight scores of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentence; finally, the user intention of the target reference sentence is taken as the user intention of the user query sentence. By using features of the two dimensions of sentence nodes and word nodes in the text graph when selecting the target reference sentence from the candidate reference sentences, the accuracy of selecting the target reference sentence can be improved, and thus the accuracy of recognizing the user intention is improved. The cost of constructing the text graph is low, since the reference sentences marked with user intentions can be reused, and the method has a certain capability of generalized reasoning on new data: as long as some entity can be extracted from a new user query sentence, an intention can be inferred and recommended according to the existing relations in the text graph. When a new intention is to be supported, the corresponding sentence nodes and word nodes can be added directly to the text graph, so the method is easy to extend.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
Referring to fig. 4, a block diagram of a user intention identifying apparatus according to an embodiment of the present invention is shown, and may specifically include the following modules:
an obtaining module 401, configured to obtain a user query statement and obtain a text graph; the text graph comprises sentence nodes, word nodes and edges connecting the sentence nodes and the word nodes; the sentence node records a reference sentence marked with user intention and a characteristic vector of the reference sentence; the word nodes record reference keywords extracted from the reference sentences; the edge records a weight value representing the importance degree of the reference keyword to the reference sentence;
a candidate reference sentence selecting module 402, configured to select a candidate reference sentence from the reference sentences recorded in the sentence nodes, where the candidate reference sentence includes a target reference keyword matched with a query keyword extracted from the user query sentence;
a weight score determining module 403, configured to determine a weight score of the candidate reference sentence according to a weight value of the target reference keyword recorded in the edge for the candidate reference sentence;
a similarity determining module 404, configured to determine a feature vector of the user query statement, and determine a similarity between the candidate reference statement and the user query statement by using the feature vector of the user query statement and the feature vector of the candidate reference statement recorded in the sentence node;
a target reference sentence selecting module 405, configured to select a target reference sentence from the candidate reference sentences according to the weight scores of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentence;
a user intent determination module 406, configured to use the user intent of the target reference sentence as the user intent of the user query sentence.
In an embodiment of the present invention, the text graph is generated by the following modules:
the reference information acquisition module is used for acquiring a reference sentence marked with a user intention and extracting a reference keyword from the reference sentence;
the characteristic vector obtaining module is used for inputting the reference sentence into a pre-training model and obtaining a characteristic vector output by the pre-training model;
the first construction module is used for constructing corresponding sentence nodes by adopting the reference sentences, and recording the reference sentences, the user intentions of the reference sentences and the characteristic vectors in the sentence nodes;
the weighted value calculating module is used for calculating weighted values representing the importance degree of the reference keywords to the reference sentences;
and the second construction module is used for constructing corresponding word nodes by adopting the reference keywords, establishing edges between the word nodes and the corresponding sentence nodes, and setting the weight values of the edges as the weight values representing the importance degrees of the reference keywords to the reference sentences.
In an embodiment of the present invention, the candidate reference sentence selecting module 402 may include:
the target word node determining submodule is used for determining a target word node recorded with a target reference keyword matched with the query keyword from the word nodes of the text graph;
and the candidate reference sentence selection submodule is used for selecting candidate reference sentences from the reference sentences recorded by the sentence nodes corresponding to the target word nodes.
In an embodiment of the present invention, the similarity determining module 404 may include:
and the feature vector obtaining submodule is used for inputting the candidate reference sentences into a pre-training model and obtaining feature vectors output by the pre-training model.
In an embodiment of the present invention, the target reference sentence selecting module 405 may include the following sub-modules:
a first score calculating sub-module, configured to calculate a first score using the weight score of the candidate reference sentence and a similarity between the candidate reference sentence and the user query sentence;
a sorting submodule for sorting by using the first score;
and the target reference sentence selecting submodule is used for selecting the target reference sentences according to the sorting result.
In one embodiment of the invention, the user intent determination module 406 may include the following sub-modules:
the weight value calculation sub-module is used for calculating a weight value representing the importance degree of the query keyword to the user query statement;
a second score calculating sub-module, configured to calculate a second score using the weight value representing the importance degree of the query keyword to the user query statement and the target reference statement;
the second score judging submodule is used for judging whether the second score is larger than a preset score threshold value or not;
and the user intention determining submodule is used for taking the user intention of the target reference sentence as the user intention of the user query sentence if the second score is larger than the preset score threshold.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
An embodiment of the present invention further provides a server, including:
the user intention identification method comprises a processor, a memory and a computer program which is stored on the memory and can run on the processor, wherein when the computer program is executed by the processor, each process of the user intention identification method embodiment is realized, the same technical effect can be achieved, and in order to avoid repetition, the description is omitted here.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when being executed by a processor, the computer program implements the processes of the above-mentioned embodiment of the method for identifying a user intention, and can achieve the same technical effects, and in order to avoid repetition, the details are not repeated here.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or terminal. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or terminal that comprises the element.
The above detailed description is provided for a user intention identification method, a user intention identification device, a server and a computer readable storage medium, and the specific examples are applied herein to explain the principles and embodiments of the present invention, and the descriptions of the above embodiments are only used to help understand the method and the core ideas of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A user intention recognition method, comprising:
acquiring a user query statement and acquiring a text graph; the text graph comprises sentence nodes, word nodes and edges connecting the sentence nodes and the word nodes; the sentence node records a reference sentence marked with user intention and a characteristic vector of the reference sentence; the word nodes record reference keywords extracted from the reference sentences; the edge records a weight value representing the importance degree of the reference keyword to the reference sentence;
selecting candidate reference sentences from the reference sentences recorded in the sentence nodes; the candidate reference sentences contain target reference keywords that match query keywords extracted from the user query sentences;
determining a weight score of the candidate reference sentence according to a weight value of the target reference keyword recorded in the edge for the candidate reference sentence;
determining a feature vector of the user query statement, and determining the similarity between the candidate reference statement and the user query statement by using the feature vector of the user query statement and the feature vector of the candidate reference statement recorded in the sentence node;
selecting a target reference sentence from the candidate reference sentences according to the weight scores of the candidate reference sentences and the similarity between the candidate reference sentences and the user query sentences;
and taking the user intention of the target reference sentence as the user intention of the user query sentence.
2. The method of claim 1, wherein the text graph is generated by:
acquiring a reference sentence marked with a user intention, and extracting a reference keyword from the reference sentence;
inputting the reference sentence into a pre-training model, and obtaining a feature vector output by the pre-training model;
adopting the reference sentence to construct a corresponding sentence node, and recording the reference sentence, the user intention of the reference sentence and the feature vector in the sentence node;
calculating a weight value representing the importance degree of the reference keyword to the reference sentence;
and constructing corresponding word nodes by adopting the reference keywords, establishing edges between the word nodes and the corresponding sentence nodes, and setting the weight values of the edges as the weight values representing the importance degrees of the reference keywords to the reference sentences.
3. The method of claim 1, wherein the selecting candidate reference sentences from the reference sentences recorded in the sentence nodes comprises:
determining target word nodes recorded with target reference keywords matched with the query keywords from the word nodes of the text graph;
and selecting candidate reference sentences from the reference sentences recorded by the sentence nodes corresponding to the target word nodes.
4. The method of claim 1, wherein the determining the feature vector for the user query statement comprises:
and inputting the candidate reference sentences into a pre-training model, and obtaining the feature vectors output by the pre-training model.
5. The method of claim 1, wherein the selecting a target reference sentence from the candidate reference sentences according to the weight scores of the candidate reference sentences and the similarity of the candidate reference sentences and the user query sentence comprises:
calculating a first score by adopting the weight score of the candidate reference sentence and the similarity between the candidate reference sentence and the user query sentence;
sorting by using the first score;
and selecting a target reference statement according to the sorting result.
6. The method of claim 1, wherein the taking the user intention of the target reference sentence as the user intention of the user query sentence comprises:
calculating a weight value representing the importance degree of the query keyword to the user query sentence;
calculating a second score using the weight value representing the importance degree of the query keyword to the user query sentence and the target reference sentence;
determining whether the second score is greater than a preset score threshold;
and if the second score is greater than the preset score threshold, taking the user intention of the target reference sentence as the user intention of the user query sentence.
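Claim 6 likewise leaves the form of the second score open; one reading is the share of the query-keyword weight that the target reference sentence actually covers, compared against the preset threshold. A sketch under that assumption, reusing the TextGraph structure from the earlier sketch; the threshold value is a placeholder:

```python
def accept_intent(query_weights, target_keywords, graph, target_idx, threshold=0.6):
    """query_weights: {query keyword: its importance weight in the user query sentence};
    target_keywords: the reference keywords of the target reference sentence.
    Returns the target sentence's intent if the second score clears the threshold, else None."""
    total = sum(query_weights.values()) or 1.0
    covered = sum(w for kw, w in query_weights.items() if kw in target_keywords)
    second_score = covered / total            # share of query-keyword weight covered by the target
    if second_score > threshold:
        return graph.sentence_nodes[target_idx].intent
    return None                               # below threshold: do not reuse the reference intent
```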
7. A user intention recognition apparatus, characterized by comprising:
an acquisition module, configured to acquire a user query sentence and acquire a text graph; wherein the text graph comprises sentence nodes, word nodes and edges connecting the sentence nodes and the word nodes; the sentence nodes record reference sentences marked with user intentions and feature vectors of the reference sentences; the word nodes record reference keywords extracted from the reference sentences; and the edges record weight values representing the importance degrees of the reference keywords to the reference sentences;
a candidate reference sentence selection module, configured to select a candidate reference sentence from the reference sentences recorded in the sentence nodes, the candidate reference sentence comprising a target reference keyword matched with a query keyword extracted from the user query sentence;
a weight score determining module, configured to determine a weight score of the candidate reference sentence according to the weight value of the target reference keyword recorded in the edge for the candidate reference sentence;
a similarity determining module, configured to determine a feature vector of the user query sentence, and determine the similarity between the candidate reference sentence and the user query sentence by using the feature vector of the user query sentence and the feature vector of the candidate reference sentence recorded in the sentence node;
a target reference sentence selection module, configured to select a target reference sentence from the candidate reference sentences according to the weight scores of the candidate reference sentences and the similarities between the candidate reference sentences and the user query sentence;
and a user intention determining module, configured to take the user intention of the target reference sentence as the user intention of the user query sentence.
8. The apparatus of claim 7, wherein the text graph is generated by:
a reference information acquisition module, configured to acquire a reference sentence marked with a user intention and extract a reference keyword from the reference sentence;
a feature vector obtaining module, configured to input the reference sentence into a pre-training model and obtain a feature vector output by the pre-training model;
a first construction module, configured to construct a corresponding sentence node for the reference sentence, and record the reference sentence, the user intention of the reference sentence and the feature vector in the sentence node;
a weight value calculating module, configured to calculate a weight value representing the importance degree of the reference keyword to the reference sentence;
and a second construction module, configured to construct corresponding word nodes from the reference keywords, establish edges between the word nodes and the corresponding sentence nodes, and set the weight values of the edges to the weight values representing the importance degrees of the reference keywords to the reference sentences.
9. A server, comprising: a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the user intention recognition method according to any one of claims 1 to 6.
10. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the user intention recognition method according to any one of claims 1 to 6.
CN202010632031.4A 2020-07-03 2020-07-03 User intention recognition method, device, server and medium Active CN111832305B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010632031.4A CN111832305B (en) 2020-07-03 2020-07-03 User intention recognition method, device, server and medium

Publications (2)

Publication Number Publication Date
CN111832305A true CN111832305A (en) 2020-10-27
CN111832305B CN111832305B (en) 2023-08-25

Family

ID=72900128

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010632031.4A Active CN111832305B (en) 2020-07-03 2020-07-03 User intention recognition method, device, server and medium

Country Status (1)

Country Link
CN (1) CN111832305B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102375839A (en) * 2010-08-17 2012-03-14 富士通株式会社 Method and device for acquiring target data set from candidate data set, and translation machine
CN106649423A (en) * 2016-06-23 2017-05-10 新乡学院 Retrieval model calculation method based on content relevance
KR20180101955A (en) * 2017-03-06 2018-09-14 주식회사 수브이 Document scoring method and document searching system
CN110110199A (en) * 2018-01-09 2019-08-09 北京京东尚科信息技术有限公司 Information output method and device
CN109284357A (en) * 2018-08-29 2019-01-29 腾讯科技(深圳)有限公司 Interactive method, device, electronic equipment and computer-readable medium
CN109492222A (en) * 2018-10-31 2019-03-19 平安科技(深圳)有限公司 Intension recognizing method, device and computer equipment based on conceptional tree
CN109063221A (en) * 2018-11-02 2018-12-21 北京百度网讯科技有限公司 Query intention recognition methods and device based on mixed strategy
CN109657232A (en) * 2018-11-16 2019-04-19 北京九狐时代智能科技有限公司 A kind of intension recognizing method
CN109815492A (en) * 2019-01-04 2019-05-28 平安科技(深圳)有限公司 A kind of intension recognizing method based on identification model, identification equipment and medium
CN109800306A (en) * 2019-01-10 2019-05-24 深圳Tcl新技术有限公司 It is intended to analysis method, device, display terminal and computer readable storage medium
CN110674259A (en) * 2019-09-27 2020-01-10 北京百度网讯科技有限公司 Intention understanding method and device
CN110737768A (en) * 2019-10-16 2020-01-31 信雅达系统工程股份有限公司 Text abstract automatic generation method and device based on deep learning and storage medium
CN110837556A (en) * 2019-10-30 2020-02-25 深圳价值在线信息科技股份有限公司 Abstract generation method and device, terminal equipment and storage medium
CN111209480A (en) * 2020-01-09 2020-05-29 上海风秩科技有限公司 Method and device for determining pushed text, computer equipment and medium
CN111259144A (en) * 2020-01-16 2020-06-09 中国平安人寿保险股份有限公司 Multi-model fusion text matching method, device, equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
GUO Zhehong: "Design and Implementation of a Hot Topic Detection and Tracking System for Online Public Opinion", China Master's Theses Full-text Database, Information Science and Technology Series, no. 1, pages 138-1484 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113284498A (en) * 2021-05-20 2021-08-20 中国工商银行股份有限公司 Client intention identification method and device
CN113157893A (en) * 2021-05-25 2021-07-23 网易(杭州)网络有限公司 Method, medium, apparatus, and computing device for intent recognition in multiple rounds of conversations
CN113157893B (en) * 2021-05-25 2023-12-15 网易(杭州)网络有限公司 Method, medium, apparatus and computing device for intent recognition in multiple rounds of conversations
CN113822019A (en) * 2021-09-22 2021-12-21 科大讯飞股份有限公司 Text normalization method, related equipment and readable storage medium
CN116244413A (en) * 2022-12-27 2023-06-09 北京百度网讯科技有限公司 New intention determining method, apparatus and storage medium
CN116244413B (en) * 2022-12-27 2023-11-21 北京百度网讯科技有限公司 New intention determining method, apparatus and storage medium

Also Published As

Publication number Publication date
CN111832305B (en) 2023-08-25

Similar Documents

Publication Publication Date Title
CN110196901B (en) Method and device for constructing dialog system, computer equipment and storage medium
CN108829822B (en) Media content recommendation method and device, storage medium and electronic device
CN107329949B (en) Semantic matching method and system
CN111832305B (en) User intention recognition method, device, server and medium
CN108846077B (en) Semantic matching method, device, medium and electronic equipment for question and answer text
CN110110062B (en) Machine intelligent question and answer method and device and electronic equipment
CN109165291B (en) Text matching method and electronic equipment
CN111708873A (en) Intelligent question answering method and device, computer equipment and storage medium
CN111539197A (en) Text matching method and device, computer system and readable storage medium
CN117149989B (en) Training method for large language model, text processing method and device
CN111081220A (en) Vehicle-mounted voice interaction method, full-duplex dialogue system, server and storage medium
CN114169869B (en) Attention mechanism-based post recommendation method and device
CN114818729A (en) Method, device and medium for training semantic recognition model and searching sentence
CN113342958A (en) Question-answer matching method, text matching model training method and related equipment
CN113627194B (en) Information extraction method and device, and communication message classification method and device
CN117520523B (en) Data processing method, device, equipment and storage medium
CN114722176A (en) Intelligent question answering method, device, medium and electronic equipment
CN112667791A (en) Latent event prediction method, device, equipment and storage medium
CN112417174A (en) Data processing method and device
CN115730058A (en) Reasoning question-answering method based on knowledge fusion
CN116150306A (en) Training method of question-answering robot, question-answering method and device
CN113673237A (en) Model training method, intent recognition method, device, electronic equipment and storage medium
CN111191465B (en) Question-answer matching method, device, equipment and storage medium
CN112989001A (en) Question and answer processing method, device, medium and electronic equipment
Elbarougy et al. Continuous audiovisual emotion recognition using feature selection and LSTM

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: Room 46, room 406, No.1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou City, Guangdong Province 510000
Applicant after: Guangzhou Xiaopeng Automatic Driving Technology Co.,Ltd.
Address before: Room 46, room 406, No.1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou City, Guangdong Province 510000
Applicant before: Guangzhou Xiaopeng Internet of vehicles Technology Co.,Ltd.
TA01 Transfer of patent application right
Effective date of registration: 20201218
Address after: Room 1608, 14th floor, 67 North Fourth Ring West Road, Haidian District, Beijing
Applicant after: Beijing Xiaopeng Automobile Co.,Ltd.
Address before: Room 46, room 406, No.1, Yichuang street, Zhongxin knowledge city, Huangpu District, Guangzhou City, Guangdong Province 510000
Applicant before: Guangzhou Xiaopeng Automatic Driving Technology Co.,Ltd.
GR01 Patent grant