CN113157892A - User intention processing method and device, computer equipment and storage medium - Google Patents

User intention processing method and device, computer equipment and storage medium Download PDF

Info

Publication number
CN113157892A
Authority
CN
China
Prior art keywords
vector
intention
statement
intent
factor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110567377.5A
Other languages
Chinese (zh)
Inventor
孙泽烨
李炫�
陈思姣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Life Insurance Company of China Ltd
Original Assignee
Ping An Life Insurance Company of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Life Insurance Company of China Ltd filed Critical Ping An Life Insurance Company of China Ltd
Priority to CN202110567377.5A priority Critical patent/CN113157892A/en
Publication of CN113157892A publication Critical patent/CN113157892A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/3329: Natural language query formulation or dialogue systems
    • G06F 16/322: Indexing structures; Trees
    • G06F 16/3344: Query execution using natural language analysis
    • G06F 16/367: Creation of semantic tools; Ontology
    • G06F 40/194: Text processing; Calculation of difference between files
    • G06F 40/279: Natural language analysis; Recognition of textual entities
    • G06F 40/30: Semantic analysis
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/045: Neural networks; Combinations of networks
    • G06N 3/08: Neural networks; Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Animal Behavior & Ethology (AREA)
  • Human Computer Interaction (AREA)
  • Machine Translation (AREA)

Abstract

The application discloses a user intention processing method and apparatus, computer equipment, and a storage medium. The method comprises the following steps: acquiring a target sentence to be identified; encoding the target sentence with a preset encoder to generate a first sentence vector of the target sentence; performing word segmentation on the target sentence according to a preset word-segmentation rule and, from the segmentation result and the first sentence vector, generating at least one to-be-confirmed intention vector representing the intention of the target sentence; inputting the first sentence vector and each first intention factor vector into a preset first attention model to generate a second sentence vector and a plurality of second intention factor vectors; generating an intention score for each intention vector from the second sentence vector and the intention vector; and determining at least one real intention of the target sentence from the intention vectors according to the intention scores. The calculated intention score thus represents the real intention of the user more objectively.

Description

User intention processing method and device, computer equipment and storage medium
Technical Field
The embodiment of the invention relates to the field of natural language, in particular to a user intention processing method, a user intention processing device, computer equipment and a storage medium.
Background
Natural Language Processing (NLP) is an important direction in the fields of computer science and artificial intelligence. It studies theories and methods for effective communication between humans and computers in natural language, and is a science integrating linguistics, computer science, and mathematics. User intention processing is mainly applied to machine translation, public-opinion monitoring, automatic summarization, viewpoint extraction, text classification, question answering, text semantic comparison, speech recognition, Chinese OCR, and the like.
The inventor found through research that, in the prior art, user intention processing often determines the corresponding reply information by keyword comparison. Such a reply method cannot understand the true intention of a sentence in natural language, so the understanding accuracy of natural language is low.
Disclosure of Invention
The embodiment of the invention provides a user intention processing method and device, computer equipment and a storage medium, which can understand the real intention of a user sentence.
In order to solve the above technical problem, the embodiment of the present invention adopts a technical solution that: provided is a user intention processing method including:
acquiring a target sentence to be identified;
coding the target statement according to a preset coder to generate a first statement vector of the target statement;
performing word segmentation processing on the target statement according to a preset word segmentation rule, and generating at least one intention vector to be confirmed, which represents the intention of the target statement, according to a word segmentation result and a first statement vector, wherein the intention vector is formed by splicing a plurality of first intention factor vectors;
inputting the first statement vector and each first intention factor vector into a preset first attention model to generate a second statement vector and a plurality of second intention factor vectors;
generating an intent score for the intent vector from the second statement vector and the intent vector;
determining at least one real intent of the target sentence from at least one intent vector according to the intent score.
Optionally, the inputting the first sentence vector and each first intention factor vector into a preset first attention model to generate a second sentence vector and a plurality of second intention factor vectors includes:
calculating a first attention distribution between each of the first intent factor vectors and the first sentence vector;
normalizing each first attention distribution to generate a first parameter value;
multiplying the first parameter value and a corresponding first intention factor vector to generate a corresponding second intention factor vector;
and splicing the second intention factor vectors to generate the second statement vector.
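The four steps above can be sketched in NumPy as follows. The dot-product form of the attention distribution and the softmax normalization are illustrative assumptions, since the claim does not fix a particular attention or normalization function:

```python
import numpy as np

def query_factor_attention(sentence_vec, factor_vecs):
    """Sketch of the claimed steps: attention distribution between each
    first intention factor vector and the first sentence vector,
    normalization to parameter values, re-weighting, and splicing."""
    # 1. attention distribution; here a dot product (an assumption)
    scores = np.array([f @ sentence_vec for f in factor_vecs])
    # 2. normalization (softmax) to obtain the first parameter values
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # 3. multiply each first intention factor vector by its parameter value
    second_factor_vecs = [w * f for w, f in zip(weights, factor_vecs)]
    # 4. splice the second intention factor vectors into the second sentence vector
    second_sentence_vec = np.concatenate(second_factor_vecs)
    return second_sentence_vec, second_factor_vecs
```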
Optionally, the generating an intent score for the intent vector from the second statement vector and the intent vector comprises:
calculating a second attention distribution between each of the intent vectors and the second statement vector;
normalizing each second attention distribution to generate a second parameter value;
multiplying the second parameter value by the corresponding intention vector to generate a concentration vector corresponding to each intention vector;
and splicing the concentration vectors to generate a third statement vector, and generating statement scores of the intention vectors through the third statement vector.
Optionally, after the splicing the concentration vectors to generate a third statement vector and generating a statement score of each intention vector through the third statement vector, the method includes:
converting each second intention factor vector and each intention vector into a corresponding vector matrix;
multiplying each second intention factor vector by a vector matrix of a corresponding intention vector to generate a factor vector matrix, and generating a factor score of the factor vector matrix;
adding the statement score and the factor score to generate the intent score.
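The three steps above can be sketched as follows. The outer product as the vector-to-matrix conversion and the fully connected layer reduced to a single weight vector `fc_weights` are illustrative assumptions, since the claim does not specify either:

```python
import numpy as np

def intent_score(sentence_score, second_factor_vecs, intent_vec, fc_weights):
    """Sketch of the claimed factor score: convert vectors to matrices
    (here via outer products, an assumption), multiply, score the result
    through a fully connected layer, and add the sentence score."""
    factor_score = 0.0
    for f in second_factor_vecs:
        m = np.outer(f, intent_vec)                    # factor vector matrix
        factor_score += float(fc_weights @ m.ravel())  # fully connected scoring
    return sentence_score + factor_score
```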
Optionally, the determining at least one real intent of the target sentence from at least one intent vector according to the intent score comprises:
sorting the at least one intention vector in descending order by taking the intention score as the sorting condition;
and determining at least one real intention of the target statement from the at least one intention vector according to a preset screening rule.
Optionally, after determining the true intention of the target sentence from at least one intention vector according to the intention score, the method includes:
searching a semantic relation tree corresponding to the real intention in a preset sample database by taking the real intention as a retrieval condition, wherein the semantic relation tree is a semantic expression topology sample constructed by taking the user intention in the historical data as a root node;
extracting statement parameters in the target statement according to the semantic relation tree;
and inputting the sentence parameters into the semantic relation tree to generate a map query sentence of the target sentence.
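This lookup-and-fill step can be sketched as below. The `SEMANTIC_TREES` table, its slot names, and the Cypher-like query syntax are all hypothetical, since the disclosure does not fix a particular graph query language or tree encoding:

```python
# Hypothetical semantic relation trees keyed by recognized real intent.
# Each tree lists the sentence parameters it needs and a query template.
SEMANTIC_TREES = {
    "premium_inquiry": {
        "params": ["product", "term"],
        "template": "MATCH (p:Product {{name: '{product}'}})-[:HAS_TERM]->"
                    "(t:Term {{years: '{term}'}}) RETURN t.premium",
    },
}

def build_graph_query(intent, extracted_params):
    """Look up the tree for the intent, keep only the sentence parameters
    the tree defines, and fill them into the query template."""
    tree = SEMANTIC_TREES[intent]
    params = {k: extracted_params[k] for k in tree["params"]}
    return tree["template"].format(**params)
```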
Optionally, after the inputting the statement parameter into the semantic relation tree to generate the graph query statement of the target statement, the method includes:
searching reply information of the map query statement in a preset reply database according to the map query statement;
and sending the reply information to the user terminal sending the target statement.
In order to solve the above technical problem, an embodiment of the present invention further provides a user intention processing apparatus, including:
the acquisition module is used for acquiring a target sentence to be identified;
the processing module is used for coding the target statement according to a preset coder to generate a first statement vector of the target statement;
the word segmentation module is used for carrying out word segmentation processing on the target statement according to a preset word segmentation rule and generating at least one to-be-confirmed intention vector representing the intention of the target statement according to a word segmentation result and a first statement vector, wherein the intention vector is formed by splicing a plurality of first intention factor vectors;
the attention module is used for inputting the first statement vector and each first intention factor vector into a preset first attention model and generating a second statement vector and a plurality of second intention factor vectors;
a scoring module for generating an intent score for the intent vector from the second statement vector and the intent vector;
an intent module to determine at least one real intent of the target sentence from at least one intent vector according to the intent score.
Optionally, the user intention processing apparatus further comprises:
a first calculation sub-module for calculating a first attention distribution between each of the first intention factor vectors and the first sentence vector;
the first processing submodule is used for carrying out normalization processing on each first attention distribution to generate a first parameter value;
the first generation submodule is used for multiplying the first parameter value and a corresponding first intention factor vector to generate a corresponding second intention factor vector;
and the first splicing submodule is used for splicing the second intention factor vectors to generate the second statement vector.
Optionally, the user intention processing apparatus further comprises:
a second calculation sub-module for calculating a second attention distribution between each of the intention vectors and the second sentence vector;
the second processing submodule is used for carrying out normalization processing on each second attention distribution to generate a second parameter value;
a second generation submodule, configured to multiply the second parameter value and the corresponding intention vector to generate a concentration vector corresponding to each intention vector;
and the second splicing submodule is used for splicing the concentration vectors to generate a third statement vector and generating statement scores of the intention vectors through the third statement vector.
Optionally, the user intention processing apparatus further comprises:
the first conversion sub-module is used for converting each second intention factor vector and each intention vector into a corresponding vector matrix;
the third calculation submodule is used for multiplying each second intention factor vector by a vector matrix of a corresponding intention vector to generate a factor vector matrix and generating a factor score of the factor vector matrix;
a first intent submodule to add the statement score and the factor score to generate the intent score.
Optionally, the user intention processing apparatus further comprises:
the first sequencing submodule is used for carrying out descending sequencing on the at least one real intention by taking the intention score as a sequencing condition;
and the first screening submodule is used for determining at least one real intention of the target statement from the at least one intention vector according to a preset screening rule.
Optionally, the user intention processing apparatus further comprises:
the first retrieval submodule is used for searching a semantic relation tree corresponding to the real intention in a preset sample database by taking the real intention as a retrieval condition, wherein the semantic relation tree is a semantic expression topology sample constructed by taking a user intention in historical data as a root node;
the first extraction submodule is used for extracting statement parameters in the target statement according to the semantic relation tree;
and the third generation submodule is used for inputting the statement parameters into the semantic relation tree to generate a map query statement of the target statement.
Optionally, the user intention processing apparatus further comprises:
the first query submodule is used for searching reply information of the map query statement in a preset reply database according to the map query statement;
and the first reply submodule is used for sending the reply information to the user terminal which sends the target statement.
In order to solve the above technical problem, an embodiment of the present invention further provides a computer device, including a memory and a processor, where the memory stores computer-readable instructions, and the computer-readable instructions, when executed by the processor, cause the processor to perform the steps of the user intention processing method.
In order to solve the above technical problem, an embodiment of the present invention further provides a storage medium storing computer-readable instructions, which, when executed by one or more processors, cause the one or more processors to perform the steps of the user intention processing method.
The embodiments of the invention have the following beneficial effects. The target sentence is encoded to generate a first statement vector, and is analyzed and split into at least one intention vector, each intention vector being formed by splicing a plurality of first intention factor vectors. The degree of association between each first intention factor vector and the first statement vector is calculated through an attention model, and a second statement vector and second intention factor vectors are generated by updating according to that degree of association. An intention score for each intention vector is then generated from the second intention factor vectors, the second statement vector, and the intention vector, from which the real intention of the target sentence is obtained. Because the relevance between each first intention factor vector and the first statement vector is determined, and the vectors are then updated according to that relevance, the updated second intention factor vectors and second statement vector draw the scoring toward the important information or intention representation. The calculated intention score therefore represents the real intention of the user more objectively, improving the accuracy of user intention processing.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart illustrating a user intent processing method according to an embodiment of the present application;
FIG. 2 is a flowchart illustrating an embodiment of updating a first intent factor vector and a first statement vector;
FIG. 3 is a flowchart illustrating the generation of a sentence score according to an embodiment of the present application;
FIG. 4 is a schematic flow chart of intent scoring in accordance with one embodiment of the present application;
FIG. 5 is a schematic flow chart illustrating screening for true intent according to an embodiment of the present application;
FIG. 6 is a diagram illustrating a flowchart of a graph query statement according to an embodiment of the present application;
FIG. 7 is a diagram illustrating an example of a semantic relationship tree according to an embodiment of the present application;
FIG. 8 is a flow chart illustrating a process of sending a reply message according to an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating a basic structure of a user intention processing apparatus according to an embodiment of the present application;
fig. 10 is a block diagram of a basic structure of a computer device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, a "terminal" includes both devices that are wireless signal receivers, devices that have only wireless signal receivers without transmit capability, and devices that have receive and transmit hardware, devices that have receive and transmit hardware capable of performing two-way communication over a two-way communication link, as will be understood by those skilled in the art. Such a device may include: a cellular or other communication device having a single line display or a multi-line display or a cellular or other communication device without a multi-line display; PCS (Personal Communications Service), which may combine voice, data processing, facsimile and/or data communication capabilities; a PDA (Personal Digital Assistant), which may include a radio frequency receiver, a pager, internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (Global Positioning System) receiver; a conventional laptop and/or palmtop computer or other device having and/or including a radio frequency receiver. As used herein, a "terminal" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. The "terminal" used herein may also be a communication terminal, a web-enabled terminal, a music/video playing terminal, such as a PDA, an MID (Mobile Internet Device) and/or a Mobile phone with music/video playing function, and may also be a smart tv, a set-top box, etc.
Referring to fig. 1, fig. 1 is a basic flow chart illustrating a user intention processing method according to the present embodiment. As shown in fig. 1, a user intention processing method includes:
s110, acquiring a target sentence to be identified;
the target sentence in the present embodiment means: in the process of man-machine interaction, the words or the words and the sentences converted from the voice information sent by the user. For example, when the user interacts with the target sentence through voice, the voice information of the user is converted into corresponding text information, and the text information is the target sentence. The target sentence can be composed of one sentence of the user, and can also be a whole section or whole text information stated by the user.
In some embodiments, the target sentence can also be various text information, and the text information is generated in a non-human-computer interaction process and can be texts of various literary works, laws, medicine works and the like.
S120, encoding the target statement according to a preset encoder to generate a first statement vector of the target statement;
and inputting the target statement into a preset encoder for compiling to generate a first statement vector capable of representing the complete meaning of the target statement.
The encoder in this embodiment is a double-layer convolutional neural network, which performs two-time convolutional extraction on the target statement to generate a first statement vector. The composition of the encoder is not limited in this regard and in some implementations the encoder can be a single layer convolutional neural network, a three layer convolutional neural network model, a four layer convolutional neural network model, or more layers of convolutional neural network models.
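A minimal sketch of such a double-layer convolutional encoder over token embeddings follows. The 1-D convolution form, ReLU activations, kernel widths, and the final max pooling that yields a fixed-size first statement vector are assumptions not fixed by this embodiment:

```python
import numpy as np

def conv1d(x, kernels):
    """Valid 1-D convolution of a (seq_len, dim) matrix with kernels of
    shape (filters, width, dim), followed by ReLU."""
    k = kernels.shape[1]  # kernel width
    out = np.stack([
        np.tensordot(x[i:i + k], kernels, axes=([0, 1], [1, 2]))
        for i in range(x.shape[0] - k + 1)
    ])
    return np.maximum(out, 0.0)

def encode(token_embeddings, kernels1, kernels2):
    """Two stacked convolutional extractions, then max pooling over
    positions to produce the first statement vector."""
    h = conv1d(token_embeddings, kernels1)
    h = conv1d(h, kernels2)
    return h.max(axis=0)
```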
S130, performing word segmentation processing on the target statement according to a preset word segmentation rule, and generating at least one to-be-confirmed intention vector representing the intention of the target statement according to a word segmentation result and a first statement vector, wherein the intention vector is formed by splicing a plurality of first intention factor vectors;
The purpose of performing word segmentation on the target sentence is to divide it into a plurality of keywords or key phrases, each consisting of at least one word among Domain (subject), predicate, and target (object). The subject, predicate, or object constituting a keyword or key phrase is defined as a first intention factor of the target sentence. Since the first intention factors are commonly used words in natural language, the vector of each first intention factor can be obtained by lookup in a pre-established vector library.
In some embodiments, when performing segmentation, the granularity is reduced further, so that each segmentation result is directly a Domain (subject), predicate (predicate), or target (object); the intention vectors are then generated by combining and splicing these factors with one another.
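A toy sketch of this finer-grained variant follows. The `VECTOR_LIB` vocabulary and the two-dimensional factor vectors are illustrative assumptions standing in for the pre-established vector library:

```python
import numpy as np
from itertools import product

# Hypothetical pre-built vector library for common first intention factors
VECTOR_LIB = {
    "insurance": np.array([1.0, 0.0]),  # Domain (subject)
    "buy":       np.array([0.0, 1.0]),  # predicate
    "policy":    np.array([1.0, 1.0]),  # target (object)
}

def intent_vectors(domains, predicates, targets):
    """Combine and splice factor vectors into candidate intention vectors,
    one per (domain, predicate, target) combination."""
    return [
        np.concatenate([VECTOR_LIB[d], VECTOR_LIB[p], VECTOR_LIB[t]])
        for d, p, t in product(domains, predicates, targets)
    ]
```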
S140, inputting the first statement vector and each first intention factor vector into a preset first attention model to generate a second statement vector and a plurality of second intention factor vectors;
After the first statement vector and the first intention factor vectors are generated, in order to identify the degree of association between the first intention factor vectors and the first statement vector, an attention model calculates the degree of association between each first intention factor vector and the other first intention factor vectors within the first statement vector, as well as between each intention factor vector and the first statement vector. The calculation can be performed from the positions of the intention factors, or the degree of association between different intention factor vectors can be calculated through a pre-constructed association database between Chinese characters. A factor vector is then constructed for each intention factor according to the calculated degrees of association; the factor vector generated by the first attention model is the second intention factor vector. After the second intention factor vectors are generated, they are spliced in sequence, according to the arrangement of the factors in the target statement, to generate the updated second statement vector.
Here, the first attention model is a query-factor attention module. The first attention model updates the first factor vectors according to the relevance between the factor vectors, and then updates the first statement vector from the second factor vectors by vector-matrix splicing to generate the second statement vector. Because the updated second intention factor vectors and second statement vector encode the association relations among the intention factors, the word vectors of important information in the statement become more prominent, which improves the accuracy of later intention understanding.
S150, generating an intention score of the intention vector according to the second statement vector and the intention vector;
after the second intention factor vector and the second term vector are obtained by updating, the intention score of each intention vector is calculated according to the plurality of second intention factor vectors, the plurality of second term vectors and the plurality of intention vectors.
A second attention distribution is calculated between each intention vector and the second statement vector, obtained by calculating the vector distance between them.
The second attention distribution between the intention vector and the second statement vector can be calculated using: the Euclidean distance, Manhattan distance, Chebyshev distance, Minkowski distance, standardized Euclidean distance, Mahalanobis distance, cosine of the included angle, Hamming distance, or Jaccard distance.
Concentration vectors of the intention vectors are then calculated from the second attention distribution as follows: the second attention distribution is multiplied by the intention vectors, and the result of each multiplication is the concentration vector corresponding to that intention vector. After the concentration vectors are obtained, they are spliced in sequence, according to the positions in the target sentence of the intention vectors to which they correspond, to generate a third statement vector; score calculation is performed on the third statement vector through a fully connected layer to obtain the statement score.
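This second attention pass can be sketched as below, using the cosine of the included angle as the distance measure (one of the options listed above); the softmax normalization and the fully connected layer reduced to a single weight vector are illustrative assumptions:

```python
import numpy as np

def statement_score(second_sentence_vec, intent_vecs, fc_weights):
    """Cosine-based second attention distribution, normalization,
    concentration vectors, splicing into the third statement vector,
    and fully connected scoring."""
    sims = np.array([
        v @ second_sentence_vec
        / (np.linalg.norm(v) * np.linalg.norm(second_sentence_vec))
        for v in intent_vecs
    ])
    weights = np.exp(sims - sims.max())
    weights /= weights.sum()
    # concentration vector = parameter value * intention vector
    concentration = [w * v for w, v in zip(weights, intent_vecs)]
    third_statement_vec = np.concatenate(concentration)  # in sentence order
    return float(fc_weights @ third_statement_vec)       # fully connected layer
```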
And after the sentence score is obtained through calculation, continuously calculating factor scores between each second intention factor and each corresponding intention vector. The specific calculation method is as follows: and converting each second intention factor to generate a vector matrix, converting the intention vector to generate a corresponding vector matrix, multiplying the second intention factor by the vector matrix corresponding to the intention vector to obtain a factor vector matrix, and performing score calculation on the factor vector matrix through a full connection layer to obtain a factor score.
The calculated statement score and factor score are added to generate the intention score of each intention vector. The calculation of the intention score is not limited to this; in some embodiments, the intention score can be obtained by weighted calculation, for example, by counting, from historical data, the weight values of each intention vector in different sentence patterns and the weight values of each factor vector within the intention vectors, and then computing the intention score as the corresponding weighted sum.
And S160, determining at least one real intention of the target statement from at least one intention vector according to the intention score.
After the intention score of each intention vector is calculated, the intention vectors are screened against a set screening threshold: the intention vectors whose intention scores are greater than or equal to the screening threshold are selected and determined as the real intentions of the target statement. The number of real intentions of the target statement is not limited to one and can be set according to the actual requirements of different application scenarios.
In some embodiments, the real intentions are screened by extracting a predetermined number of them: the intention vectors are arranged in descending order of the calculated intention scores, and the intentions corresponding to the first n intention vectors in the ordered list are extracted as the real intentions of the target statement.
In some embodiments, in order to recognize, as far as possible, the subtler meanings carried in the user's natural language, the number of real intentions to screen is varied dynamically before the user's real intentions are identified. First, emotion recognition is performed on the voice information corresponding to the target statement; the emotion recognition can be performed by a neural network model trained to a convergence state. When the user's emotion is recognized as stable and expressing only one emotion, the number of real intentions is determined to be 1; when two emotions are recognized, the number is determined to be 2; when three emotions are recognized, the number is determined to be 3; and so on, the number of real intentions is determined to equal the number of recognized emotions. The correspondence between the user's emotion and the number of real intentions to screen is not limited to this and can be set arbitrarily according to the actual needs of the specific application scenario.
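The emotion-to-intention-count correspondence described above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: `recognized_emotions` stands in for the output of an emotion-recognition model trained to convergence, and the one-intent-per-emotion mapping follows the text.

```python
# Illustrative sketch: map the number of distinct emotions recognized in the
# user's speech to the number of real intentions to screen (assumption: the
# emotion labels themselves come from an upstream recognition model).

def intent_count_from_emotions(recognized_emotions):
    """Return how many real intentions to keep: one per distinct emotion."""
    distinct = set(recognized_emotions)
    # At least one intention is always kept, even for a stable single emotion.
    return max(1, len(distinct))
```

As noted above, the mapping can be set arbitrarily per application scenario; a 1:1 correspondence is only the simplest choice.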
In the above embodiment, the target statement is encoded to generate a first statement vector; the target statement is analyzed and split into at least one intention vector, each intention vector being formed by splicing a plurality of first intention factor vectors; the association degree between each first intention factor vector and the first statement vector is calculated through the attention model; and a second statement vector and second intention factor vectors are generated by updating according to the association degree. The intention score of each intention vector is then generated from the second intention factor vectors, the second statement vector, and the intention vector, from which the real intention of the target statement is obtained. Because the association between each first intention factor vector and the first statement vector can be determined by calculating their degree of association, and the first statement vector and the first intention factor vectors are then updated accordingly, the updated second intention factor vectors and second statement vector bias the scoring toward important information or intention representations, so that the calculated intention score represents the user's real intention more objectively, improving the accuracy of user intention processing.
In some embodiments, the first intention factor vector and the first statement vector need to be updated by the similarity between the first intention factor vector and the first statement vector. Referring to fig. 2, fig. 2 is a schematic flow chart illustrating updating of the first intention factor vector and the first statement vector according to the present embodiment.
As shown in fig. 2, S140 includes:
s141, calculating a first attention distribution between each first intention factor vector and the first statement vector;
To calculate the first attention distribution between the first intention factor vector and the first statement vector, the similarity between the intention factor vector and the first statement vector can be calculated from the positions of the intention factors, or the similarity between different intention factor vectors can be calculated through a pre-built association database between Chinese characters. By either method, a first attention distribution between the first intention factor vector and the first statement vector is obtained.
S142, normalizing each first attention distribution to generate a first parameter value;
After the first attention distribution is calculated, its numerical span is large because the distance lengths between vectors differ, so the calculated first attention distribution needs to be normalized. For the normalization, a linear-function (min-max) normalization method or a 0-mean (z-score) normalization method can be adopted.
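The two normalization options mentioned above can be sketched in minimal form. Plain Python lists stand in for the real attention distributions; this is an illustration, not the patent's implementation.

```python
# Minimal sketches of linear-function (min-max) normalization and
# 0-mean (z-score) normalization for a list of attention values.

def min_max_normalize(values):
    lo, hi = min(values), max(values)
    if hi == lo:                      # degenerate case: all values equal
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def z_score_normalize(values):
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5 or 1.0           # avoid division by zero
    return [(v - mean) / std for v in values]
```

Min-max maps the distribution into [0, 1]; z-score centers it at 0 with unit variance. Either removes the large numerical span caused by differing vector distances.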
S143, multiplying the first parameter value and the corresponding first intention factor vector to generate a corresponding second intention factor vector;
The first parameter value of each calculated first attention distribution is multiplied by the corresponding first intention factor vector. This step uses the calculated similarity as a weight and solidifies it into the second intention factor vector through the multiplication, which highlights the weight of important characters in the target statement and reduces the weight of ordinary characters, making the hierarchy of the feature vectors corresponding to the target statement clearer and improving the accuracy of intention recognition.
And a vector obtained by multiplying the first parameter value and the corresponding first intention factor vector is defined as a second intention factor vector.
And S144, splicing the second intention factor vectors to generate the second statement vector.
After the second intention factor vectors are generated, they are spliced in sequence according to the arrangement of the vector factors in the target statement to generate the updated second statement vector.
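Steps S141–S144 can be sketched end to end as follows. Dot-product similarity stands in for the first attention distribution (the patent allows other similarity measures), and the min-max normalization is one of the two options named above; all names and shapes are illustrative assumptions.

```python
# Illustrative sketch of S141-S144: re-weight each first intention factor
# vector by its normalized similarity to the first statement vector, then
# concatenate the re-weighted (second) factor vectors in order.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def update_statement(first_factors, first_statement):
    # S141: first attention distribution (one score per factor vector)
    scores = [dot(f, first_statement) for f in first_factors]
    # S142: min-max normalization into first parameter values
    lo, hi = min(scores), max(scores)
    params = [1.0 if hi == lo else (s - lo) / (hi - lo) for s in scores]
    # S143: second intention factor vectors = parameter * factor vector
    second_factors = [[p * x for x in f] for p, f in zip(params, first_factors)]
    # S144: splice (concatenate) in original order -> second statement vector
    second_statement = [x for f in second_factors for x in f]
    return second_factors, second_statement
```

The multiplication in S143 is what "solidifies" the similarity into the factor vectors: factors similar to the whole statement keep their magnitude, while dissimilar ones are attenuated.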
In some embodiments, a third statement vector needs to be generated through the similarity between the intention vectors and the second statement vector, and the statement score of each intention vector is generated through the third statement vector. Referring to fig. 3, fig. 3 is a schematic flow chart illustrating the generation of statement scores according to the present embodiment.
As shown in fig. 3, S150 includes:
s151, calculating a second attention distribution between each intention vector and the second statement vector;
A second attention distribution between each intention vector and the second statement vector is calculated, where the second attention distribution is obtained by calculating the vector distance between each intention vector and the second statement vector.
The vector distance used for the second attention distribution may be any of the following, calculated between the intention vector and the second statement vector: the Euclidean distance, the Manhattan distance, the Chebyshev distance, the Minkowski distance, the normalized Euclidean distance, the Mahalanobis distance, the cosine similarity, the Hamming distance, or the Jaccard distance.
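A few of the listed distance measures, in minimal form; which one a concrete embodiment uses is a design choice, and these helper names are illustrative.

```python
# Minimal implementations of several vector distances named above.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def chebyshev(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)
```

Note that cosine similarity grows with similarity while the distances shrink, so a sign or inversion convention must be fixed before the values are used as an attention distribution.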
S152, normalizing each second attention distribution to generate a second parameter value;
After the second attention distribution is calculated, its numerical span is large because the distance lengths between vectors differ, so the calculated second attention distribution needs to be normalized. For the normalization, a linear-function (min-max) normalization method or a 0-mean (z-score) normalization method can be adopted.
S153, multiplying the second parameter value and the corresponding intention vector to generate a concentration vector corresponding to each intention vector;
The second parameter value of each calculated second attention distribution is multiplied by the corresponding intention vector. This step uses the calculated similarity as a weight and solidifies it into the updated intention vector through the multiplication, thereby expressing the relative attention degree of each intention vector.
And defining a vector obtained by multiplying the second parameter value and the corresponding intention vector as a concentration vector.
And S154, splicing the concentration vectors to generate a third statement vector, and generating statement scores of the intention vectors through the third statement vector.
After the concentration vectors are obtained, they are spliced in sequence according to the arrangement relation among their corresponding intention vectors to generate a third statement vector.
After the third statement vector is generated, the feature distance between each intention vector and the third statement vector is calculated, the calculated feature distance is mapped into a 1 × 1 fully connected layer and flattened, and finally the flattened features are classified and scored through a machine-learning scoring mechanism to obtain the statement score of each intention vector.
By scoring each intention vector within the third statement vector, the importance of each intention vector to the expression of the entire target statement is measured, and the statement score of each intention vector is obtained at the macroscopic level.
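The splicing and fully-connected scoring steps above can be sketched as follows. The "fully connected layer" is modeled as a single weight vector plus bias; the weights are illustrative placeholders, not learned parameters from the patent.

```python
# Hedged sketch of S154: splice concentration vectors into a third statement
# vector, then score it with one fully connected output unit.

def splice(concentration_vectors):
    """Splice concentration vectors in order into a third statement vector."""
    return [x for v in concentration_vectors for x in v]

def fc_score(features, weights, bias=0.0):
    """One fully connected output unit: weighted sum + bias."""
    return sum(w * x for w, x in zip(weights, features)) + bias
```

In a trained system the weights and bias would come from the machine-learning scoring mechanism; here they are supplied by hand purely to show the data flow.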
In some embodiments, factor scores within the intention vectors may need to be calculated, and the intention score of each intention vector summed from the factor score and the statement score. Referring to fig. 4, fig. 4 is a schematic flow chart of intention scoring according to the present embodiment.
As shown in fig. 4, S154 includes:
s155, converting each second intention factor vector and each intention vector into a corresponding vector matrix;
Each second intention factor vector is converted into a corresponding vector matrix: each second intention factor vector is composed of the characters corresponding to its intention factor, and the vectors corresponding to each character in the second intention factor vector are collected to form the vector matrix of that second intention factor vector in the target statement.
Each intention vector is composed of one or more second intention factor vectors, and the second intention factor vectors composing the intention vectors are arranged in sequence to generate a corresponding vector matrix.
The vector matrix of each intention vector is thus obtained by converting each second intention factor vector within the intention vector into its corresponding vector matrix.
S156, multiplying each second intention factor vector by a vector matrix of a corresponding intention vector to generate a factor vector matrix, and generating a factor score of the factor vector matrix;
Each second intention factor vector is multiplied by the vector matrix of its corresponding intention vector to generate a factor vector matrix. The factor vector matrix is mapped into a 1 × 1 fully connected layer and flattened, and finally score calculation is performed on the flattened one-dimensional features through a machine-learning scoring mechanism to obtain the factor score of each intention vector.
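The matrix multiplication and scoring in S156 can be sketched as follows, treating the second intention factor vector as a 1 × n row and the intention vector's matrix as n × m. Shapes and weights are assumptions for illustration only.

```python
# Illustrative sketch of S156: multiply a factor vector (row) with its
# intention vector's matrix, then score the flattened result with one
# fully connected output unit.

def row_times_matrix(row, matrix):
    """(1 x n) * (n x m) -> list of m values."""
    cols = len(matrix[0])
    return [sum(row[i] * matrix[i][j] for i in range(len(row)))
            for j in range(cols)]

def factor_score(factor_row, intent_matrix, weights, bias=0.0):
    flat = row_times_matrix(factor_row, intent_matrix)
    return sum(w * x for w, x in zip(weights, flat)) + bias
```

As with the statement score, the weights would be learned in practice; the point here is the microscopic per-factor data flow.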
And S157, adding the sentence score and the factor score to generate the intention score.
The statement score and factor score calculated for each intention vector are added to obtain its intention score, and the real intention of the target statement is screened according to the intention scores.
The calculation of the intention score is not limited to this; depending on the specific application scenario, the intention score can in some embodiments be calculated in a weighted manner. For example, the text length of the target statement is counted during calculation: the longer the text, the larger the weight of the statement score and the smaller the weight of the factor score; conversely, the shorter the text, the smaller the weight of the statement score and the larger the weight of the factor score. In one embodiment, the text character length of the target statement is compared with a preset length threshold: when the ratio is greater than 1, the weight of the statement score is greater than 0.5; when the ratio is less than 1, the weight of the factor score is greater than 0.5. The specific mapping between the weights and the length ratio can be set according to the needs of the actual scenario.
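One possible realization of the length-weighted variant is sketched below. The linear mapping from length ratio to weight, and the clipping bounds, are assumptions; the text only requires that a ratio above 1 push the statement-score weight above 0.5 and a ratio below 1 push it below.

```python
# Sketch of a length-weighted intention score: the statement-score weight
# grows with the ratio of the statement's character length to a preset
# threshold (mapping and clipping bounds are illustrative assumptions).

def intent_score(sentence_score, factor_score, text_len, length_threshold):
    ratio = text_len / length_threshold
    # ratio > 1 -> statement weight above 0.5; ratio < 1 -> below 0.5
    w_sentence = min(max(ratio / 2.0, 0.1), 0.9)
    w_factor = 1.0 - w_sentence
    return w_sentence * sentence_score + w_factor * factor_score
```

With a ratio of exactly 1, both weights are 0.5 and the variant reduces to the simple (unweighted) sum up to scale.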
The factor score measures each intention vector from the microscopic level; combining the statement score at the macroscopic level with the factor score at the microscopic level yields the comprehensive score of the intention vector: the intention score. Because the intention score blends the microscopic and macroscopic scores of each intention vector, it is more objective, allowing the real intention of the target statement to be screened out.
In some embodiments, after the intention score of each intention vector is obtained, a target vector aiming at the real intention of the target sentence needs to be screened out according to the intention score. Referring to fig. 5, fig. 5 is a schematic flow chart illustrating the real intent screening according to the present embodiment.
As shown in fig. 5, S160 includes:
S161, sorting the at least one intention vector in descending order with the intention scores as the sorting condition;
After the intention score of each intention vector is calculated, the recognized intention vectors are sorted in descending order of their intention scores.
S162, determining at least one real intention of the target statement from the at least one intention vector according to a preset screening rule.
The real intentions of the target statement are screened from the sorted list according to a preset screening rule. Specifically, the screening rule is: select the top three intention vectors in the sorted list and determine the intention characters corresponding to them as the real intentions of the target statement. The number of real intentions screened by the rule can be set according to specific requirements, including (without limitation): top 1, top 2, top 4, or more.
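Steps S161–S162 can be sketched as a sort-and-slice, with `top_n = 3` matching the "top three" rule above; the pair representation of scored intentions is an assumption.

```python
# Minimal sketch of S161-S162: sort intention vectors by intention score
# descending and keep the top n as the real intentions.

def screen_real_intents(scored_intents, top_n=3):
    """scored_intents: list of (intent_label, intent_score) pairs."""
    ranked = sorted(scored_intents, key=lambda p: p[1], reverse=True)
    return [label for label, _ in ranked[:top_n]]
```

The threshold-based screening described earlier would replace the slice with a filter on the score.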
In some embodiments, after the real intention of the target statement is determined, a map query statement needs to be generated from the real intention representation. Referring to fig. 6, fig. 6 is a schematic flow chart of map query statement generation according to the present embodiment.
As shown in fig. 6, after S160 the method further includes:
s171, with the real intention as a retrieval condition, searching a semantic relation tree corresponding to the real intention in a preset sample database, wherein the semantic relation tree is a semantic expression topology sample constructed by taking a user intention as a root node in historical data;
After the real intention of the target statement is obtained by screening, the intention characters representing the real intention are used as retrieval conditions to search a preset sample database for the semantic relation tree corresponding to the real intention.
The semantic relation tree is a data structure pre-built in the sample database for identifying various types of user intentions. Specifically, the various user intentions acquired from historical data are used as root nodes, and each user intention is split into four word categories: predicate, operator, type, and attribute. The splitting of the user intention is not limited to these four categories; in practical applications it can be split according to the actual requirements of the application scenario, for example by reducing the number of categories or adding a subject category.
In the splitting process, the user intention is split into a tree topology. Referring to fig. 7, fig. 7 is a schematic diagram illustrating an example of a semantic relation tree according to the present embodiment. As shown in fig. 7, the user intention is the root node, and the predicate, operator, type, and attribute are children of the root node. The predicate records the primary action of the user intention, for example: insurance application, insurance reimbursement, claims settlement, or purchase information. Operators identify the logical relationships between predicates in an intention representation, for example: juxtaposition, order, condition, greater than, equal to, belonging to, or containing. Type refers to the conditions under which the predicate in the user intention is executed, for example: amount, age, product, occupation, or disease. Attributes refer to qualifiers of the user intention, such as: subject, object, age, occupation, weight, and payment period.
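The fig. 7 structure can be sketched as a small nested mapping. The child values mirror the insurance-domain examples given in the text; the dictionary shape itself is an illustrative assumption, not the patent's storage format.

```python
# Hedged sketch of the semantic relation tree of fig. 7: the user intention
# as root, with predicate / operator / type / attribute child categories.

semantic_tree = {
    "root": "user_intention",
    "children": {
        "predicate": ["insurance application", "claims settlement"],
        "operator": ["condition", "greater than"],
        "type": ["amount", "age", "product"],
        "attribute": ["subject", "object", "payment period"],
    },
}

def categories(tree):
    """Return the child categories of the root, sorted for stable output."""
    return sorted(tree["children"].keys())
```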
The semantic relation tree is constructed through a Biaffine model: the Biaffine model segments the various user intentions acquired from historical data into words, a neural network then directly predicts the probability that a dependency relationship exists between each pair of words, and finally the words are classified according to the dependency probabilities within the user intention to form the semantic relation tree.
S172, extracting statement parameters in the target statement according to the semantic relation tree;
After the semantic relation tree of the real intention is matched, the statement parameters corresponding to the words at the leaf nodes of the semantic relation tree are acquired. For example, the target statement is: "I want to buy health insurance for two years." Here "buy health insurance" is identified as the user's real intention, so "health insurance" becomes a leaf node of the semantic relation tree corresponding to that real intention; the leaf node contains only the noun "health insurance" and no attached quantifier, so "two years" is extracted as the statement parameter matched with "health insurance" in the target statement. When there are multiple user intentions, the statement parameters are extracted by taking the parameter value corresponding to each leaf node respectively.
S173, inputting the sentence parameters into the semantic relation tree to generate a map query sentence of the target sentence.
The extracted statement parameters are respectively input into the corresponding leaf nodes of the semantic relation tree to generate the map query statement of the target statement. The map query statement links the leaf nodes of the semantic relation tree in topological form, producing a cascaded map query statement.
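The parameter filling and leaf linking of S173 can be sketched as follows. The `->` query syntax is a made-up placeholder, not a real graph-query dialect; leaf names and parameters follow the "health insurance" example above.

```python
# Illustrative sketch of S173: fill statement parameters into the leaf nodes
# of the matched semantic relation tree and link the filled leaves in
# topological order into one cascaded query string.

def build_graph_query(leaf_nodes, statement_params):
    filled = []
    for leaf in leaf_nodes:
        param = statement_params.get(leaf, "")
        filled.append(f"{leaf}={param}" if param else leaf)
    # Link the leaves in order to form the cascaded query.
    return " -> ".join(filled)
```

A real embodiment would emit whatever query language its graph store expects; the sketch only shows the leaf-by-leaf cascade.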
In some embodiments, when the reply information is retrieved, the reply information needs to be transmitted back to the user terminal. Referring to fig. 8, fig. 8 is a schematic diagram illustrating a sending process of the reply message according to the embodiment.
S181, searching a preset reply database for reply information of the map query statement according to the map query statement;
When searching, answer queries are performed in sequence according to the link order of the leaf nodes, continuously narrowing the search range of matched answers; when a retrieved answer satisfies the matching requirements of all leaf nodes, it is returned as the reply information. For example, when the semantic relation tree has two leaf nodes, the two leaf nodes form a cascaded map query statement: the first-level leaf node retrieves, say, 20 groups of adapted answers, and the second-level leaf node then performs its adaptation within those 20 recalled groups. This retrieval mode avoids the drawback of traditional multi-intention recognition, in which a global retrieval match is required for every user intention; it not only improves the accuracy of the reply information but also improves retrieval efficiency and saves computing resources.
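The cascaded narrowing described above can be sketched as a chain of filters, each leaf node restricting the candidates recalled by the previous level. The answer strings and matching predicates are illustrative stand-ins for a real reply database.

```python
# Sketch of cascaded retrieval: each leaf-node filter is applied only to the
# candidates recalled by the previous level, so later levels search a
# progressively smaller set instead of the whole database.

def cascaded_retrieve(answers, leaf_filters):
    """leaf_filters: list of predicates applied level by level."""
    candidates = list(answers)
    for level_filter in leaf_filters:
        candidates = [a for a in candidates if level_filter(a)]
        if not candidates:
            break
    return candidates
```

The early `break` reflects that once a level recalls nothing, deeper levels cannot match either, which is part of the computing-resource saving the text claims.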
And S182, sending the reply information to the user terminal sending the target statement.
After the reply information is retrieved, it is sent to the user terminal that sent the target statement. In some embodiments, after a user terminal obtains the reply information for a target statement, the server stores the target statement and that user terminal in association. When the server later receives the same target statement, it performs terminal adaptation and sends inquiry information to the user terminal that previously obtained the reply information; the inquiry information prompts that user whether to become an answerer for the question, and in some cases paid consultation can be used to obtain the prior user's permission. After permission from the prior user terminal is obtained, a point-to-point communication channel is established between the terminal that repeated the question and that user terminal, enabling point-to-point communication between them, and the user terminal then sends the corresponding reply information to the repeating terminal. A communication bridge between existing users and consulting users is thereby established, so that new users obtain the most suitable guidance and suggestions while customer-service resources are saved.
Referring to fig. 9, fig. 9 is a schematic diagram of a basic structure of the user intention processing device according to the embodiment.
As shown in fig. 9, a user intention processing apparatus includes: an acquisition module 1100, a processing module 1200, a word segmentation module 1300, an attention module 1400, a scoring module 1500, and an intention module 1600. The acquisition module is used for acquiring a target statement to be recognized; the processing module is used for encoding the target statement according to a preset encoder to generate a first statement vector of the target statement; the word segmentation module is used for performing word segmentation on the target statement according to a preset word segmentation rule and generating, from the word segmentation result and the first statement vector, at least one to-be-confirmed intention vector representing the intention of the target statement, where each intention vector is formed by splicing a plurality of first intention factor vectors; the attention module is used for inputting the first statement vector and each first intention factor vector into a preset first attention model to generate a second statement vector and a plurality of second intention factor vectors; the scoring module is used for generating the intention score of each intention vector according to the second statement vector and the intention vector; the intention module is used for determining at least one real intention of the target statement from the at least one intention vector according to the intention scores.
The user intention processing device encodes the target statement to generate a first statement vector, analyzes the target statement and splits it into at least one intention vector, each intention vector being formed by splicing a plurality of first intention factor vectors, calculates the association degree between each first intention factor vector and the first statement vector through the attention model, and generates a second statement vector and second intention factor vectors by updating according to the association degree. The intention score of each intention vector is generated from the second intention factor vectors, the second statement vector, and the intention vector, from which the real intention of the target statement is obtained. Because the association between each first intention factor vector and the first statement vector can be determined by calculating their degree of association, and the first statement vector and the first intention factor vectors are then updated accordingly, the updated second intention factor vectors and second statement vector bias the scoring toward important information or intention representations, so that the calculated intention score represents the user's real intention more objectively, improving the accuracy of user intention processing.
Optionally, the user intention processing apparatus further comprises: the device comprises a first calculation submodule, a first processing submodule, a first generation submodule and a first splicing submodule. Wherein the first calculation submodule is used for calculating a first attention distribution between each first intention factor vector and the first statement vector; the first processing submodule is used for carrying out normalization processing on each first attention distribution to generate a first parameter value; the first generation submodule is used for multiplying the first parameter value and a corresponding first intention factor vector to generate a corresponding second intention factor vector; the first splicing submodule is used for splicing the second intention factor vectors to generate the second statement vector.
Optionally, the user intention processing apparatus further comprises: the second calculation submodule, the second processing submodule, the second generation submodule and the second splicing submodule. Wherein the second calculation submodule is used for calculating a second attention distribution between each intention vector and the second statement vector; the second processing submodule is used for carrying out normalization processing on each second attention distribution to generate a second parameter value; the second generation submodule is used for multiplying the second parameter value and the corresponding intention vector to generate a concentration vector corresponding to each intention vector; and the second splicing submodule is used for splicing the concentration vectors to generate a third statement vector and generating statement scores of the intention vectors through the third statement vector.
Optionally, the user intention processing apparatus further comprises: a first converter sub-module, a third calculation sub-module, and a first intent sub-module. The first conversion sub-module is used for converting each second intention factor vector and each intention vector into a corresponding vector matrix; the third calculation submodule is used for multiplying each second intention factor vector by a vector matrix of a corresponding intention vector to generate a factor vector matrix and generating a factor score of the factor vector matrix; a first intent submodule is to add the statement score and the factor score to generate the intent score.
Optionally, the user intention processing apparatus further comprises: a first ordering submodule and a first screening submodule. The first ranking submodule is used for performing descending ranking on the at least one real intention by taking the intention score as a ranking condition; the first screening submodule is used for determining at least one real intention of the target statement from the at least one intention vector according to a preset screening rule.
Optionally, the user intention processing apparatus further comprises: the device comprises a first retrieval submodule, a first extraction submodule and a third generation submodule. The first retrieval submodule is used for searching a semantic relation tree corresponding to the real intention in a preset sample database by taking the real intention as a retrieval condition, wherein the semantic relation tree is a semantic expression topology sample constructed by taking a user intention in historical data as a root node; the first extraction submodule is used for extracting statement parameters in the target statement according to the semantic relation tree; and the third generation submodule is used for inputting the statement parameters into the semantic relation tree to generate a map query statement of the target statement.
Optionally, the user intention processing apparatus further comprises: a first query submodule and a first reply submodule. The first query submodule is used for searching reply information of the map query statement in a preset reply database according to the map query statement; and the first reply submodule is used for sending the reply information to the user terminal which sends the target statement.
In order to solve the above technical problem, an embodiment of the present invention further provides a computer device. Referring to fig. 10, fig. 10 is a block diagram of a basic structure of a computer device according to the present embodiment.
As shown in fig. 10, the internal structure of the computer device is illustrated schematically. The computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected by a system bus. The non-volatile storage medium of the computer device stores an operating system, a database, and computer-readable instructions; the database may store a sequence of control information, and the computer-readable instructions, when executed by the processor, may cause the processor to implement a user intention processing method. The processor of the computer device provides calculation and control capability and supports the operation of the whole computer device. The memory of the computer device may store computer-readable instructions that, when executed by the processor, may cause the processor to perform a user intention processing method. The network interface of the computer device is used for connecting and communicating with the terminal. Those skilled in the art will appreciate that the architecture shown in fig. 10 is merely a block diagram of some of the structures associated with the disclosed aspects and does not limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In this embodiment, the processor is configured to execute the specific functions of the acquiring module 1100, the processing module 1200, the word segmentation module 1300, the attention module 1400, the scoring module 1500, and the intention module 1600 in fig. 9, and the memory stores the program codes and various data required to execute these modules. The network interface is used for data transmission to and from a user terminal or a server. The memory in this embodiment stores the program codes and data required to execute all the submodules of the user intention processing apparatus, and the server can call these program codes and data to execute the functions of all the submodules.
The computer device encodes a target statement to generate a first statement vector, parses the target statement, and divides it into at least one intention vector, where each intention vector is formed by splicing a plurality of first intention factor vectors. It then calculates, through an attention model, the degree of association between each first intention factor vector and the first statement vector, and updates them according to that degree of association to generate a second statement vector and second intention factor vectors. An intention score for each intention vector is generated from the second intention factor vectors, the second statement vector, and the intention vector, from which the real intention of the target statement is obtained. Because the degree of association between each first intention factor vector and the first statement vector can be determined by this attention calculation, and the first statement vector and the first intention factor vectors are then updated accordingly, the updated second intention factor vectors and second statement vector bias the scoring toward the important information or intention representations. The calculated intention scores therefore characterize the real intention of the user more objectively, improving the accuracy of user intention processing.
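The relevance-weighted update described above can be sketched roughly as follows. This is an illustrative reconstruction, not the patented implementation: the dot-product form of the attention distribution and the softmax normalization are assumptions, as are all function and variable names.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax used to normalize attention scores."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def first_attention_update(sentence_vec, factor_vecs):
    """Illustrative first attention model: score each first intention
    factor vector against the first statement vector, normalize the
    scores into parameter values, reweight each factor vector, and
    splice the reweighted factors into a second statement vector."""
    # Attention distribution between each factor vector and the statement vector
    scores = np.array([f @ sentence_vec for f in factor_vecs])
    weights = softmax(scores)  # first parameter values, summing to 1
    # Reweight each factor vector by its parameter value
    second_factors = [w * f for w, f in zip(weights, factor_vecs)]
    # Splice (concatenate) the updated factors into the second statement vector
    second_sentence = np.concatenate(second_factors)
    return second_sentence, second_factors

# Toy usage: a 4-dim statement vector and three intention factor vectors
rng = np.random.default_rng(0)
s = rng.normal(size=4)
factors = [rng.normal(size=4) for _ in range(3)]
s2, f2 = first_attention_update(s, factors)
```

The key property is that factors most associated with the statement vector receive the largest weights, so the spliced second statement vector emphasizes the information most relevant to the intention.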
The present invention also provides a storage medium storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of any of the above-described embodiments of the user intent processing method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments can be implemented by a computer program instructing related hardware; the computer program can be stored in a computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
It should be understood that, although the steps in the flowcharts of the figures are shown in an order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not strictly limited to the order shown and may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be performed at different times, and are not necessarily performed in sequence but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.

Claims (10)

1. A user intention processing method, comprising:
acquiring a target sentence to be identified;
coding the target statement according to a preset coder to generate a first statement vector of the target statement;
performing word segmentation processing on the target statement according to a preset word segmentation rule, and generating, according to a word segmentation result and the first statement vector, at least one to-be-confirmed intention vector representing the intention of the target statement, wherein each intention vector is formed by splicing a plurality of first intention factor vectors;
inputting the first statement vector and each first intention factor vector into a preset first attention model to generate a second statement vector and a plurality of second intention factor vectors;
generating an intent score for the intent vector from the second statement vector and the intent vector;
determining at least one real intent of the target sentence from at least one intent vector according to the intent score.
2. The method of claim 1, wherein the inputting the first statement vector and each of the first intention factor vectors into the preset first attention model to generate the second statement vector and the plurality of second intention factor vectors comprises:
calculating a first attention distribution between each of the first intent factor vectors and the first sentence vector;
normalizing each first attention distribution to generate a first parameter value;
multiplying each first parameter value by the corresponding first intention factor vector to generate the corresponding second intention factor vector;
and splicing the second intention factor vectors to generate the second statement vector.
3. The method of claim 1, wherein generating the intent score for the intent vector from the second statement vector and the intent vector comprises:
calculating a second attention distribution between each of the intent vectors and the second statement vector;
normalizing each second attention distribution to generate a second parameter value;
multiplying the second parameter value by the corresponding intention vector to generate a concentration vector corresponding to each intention vector;
and splicing the concentration vectors to generate a third statement vector, and generating statement scores of the intention vectors through the third statement vector.
4. The method of claim 3, wherein after the splicing the concentration vectors to generate the third statement vector and generating the statement score of each intention vector, the method further comprises:
converting each second intention factor vector and each intention vector into a corresponding vector matrix;
multiplying each second intention factor vector by a vector matrix of a corresponding intention vector to generate a factor vector matrix, and generating a factor score of the factor vector matrix;
adding the statement score and the factor score to generate the intent score.
5. The method of claim 1, wherein determining at least one real intent of the target sentence from at least one intent vector according to the intent score comprises:
sorting the at least one intention vector in descending order by taking the intention score as the sorting condition;
and determining at least one real intention of the target statement from the at least one intention vector according to a preset screening rule.
6. The method of claim 1, wherein after the determining the at least one real intention of the target statement from the at least one intention vector according to the intention score, the method further comprises:
searching a semantic relation tree corresponding to the real intention in a preset sample database by taking the real intention as a retrieval condition, wherein the semantic relation tree is a semantic expression topology sample constructed by taking the user intention in the historical data as a root node;
extracting statement parameters in the target statement according to the semantic relation tree;
and inputting the statement parameters into the semantic relation tree to generate a graph query statement of the target statement.
7. The method of claim 6, wherein after the inputting the statement parameters into the semantic relation tree to generate the graph query statement of the target statement, the method further comprises:
searching a preset reply database for reply information of the graph query statement according to the graph query statement;
and sending the reply information to the user terminal that sent the target statement.
8. A user intention processing apparatus, characterized by comprising:
the acquisition module is used for acquiring a target sentence to be identified;
the processing module is used for coding the target statement according to a preset coder to generate a first statement vector of the target statement;
the word segmentation module is used for performing word segmentation processing on the target statement according to a preset word segmentation rule and generating, according to a word segmentation result and the first statement vector, at least one to-be-confirmed intention vector representing the intention of the target statement, wherein each intention vector is formed by splicing a plurality of first intention factor vectors;
the attention module is used for inputting the first statement vector and each first intention factor vector into a preset first attention model and generating a second statement vector and a plurality of second intention factor vectors;
a scoring module for generating an intent score for the intent vector from the second statement vector and the intent vector;
an intent module to determine at least one real intent of the target sentence from at least one intent vector according to the intent score.
9. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by the processor, cause the processor to perform the steps of the user intent processing method of any of claims 1 to 7.
10. A storage medium storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of the user intent processing method of any of claims 1 to 7.
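The scoring and screening steps of claims 3 and 5 could be realized roughly as follows. This is only an illustrative sketch: using the normalized second parameter values as stand-in intention scores, and a fixed top-k cutoff as the preset screening rule, are both assumptions not stated in the claims, and all names are hypothetical.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax used to normalize attention scores."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def score_and_select(second_sentence_vec, intent_vecs, k=1):
    """Illustrative second attention step: compute an attention
    distribution between each intention vector and the second statement
    vector, normalize it into second parameter values (used here as the
    intention scores), sort the intention vectors in descending order of
    score, and keep the top-k indices as the real intentions."""
    scores = np.array([v @ second_sentence_vec for v in intent_vecs])  # second attention distribution
    weights = softmax(scores)                                          # second parameter values
    order = np.argsort(weights)[::-1]                                  # descending by intention score
    return order[:k], weights

# Toy usage: a second statement vector and two candidate intention vectors
s2 = np.array([1.0, 0.0, 1.0, 0.0])
intents = [np.array([1.0, 0.0, 1.0, 0.0]),   # aligned with s2 -> higher score
           np.array([0.0, 1.0, 0.0, 1.0])]   # orthogonal to s2 -> lower score
top, w = score_and_select(s2, intents, k=1)
```

The screening rule of claim 5 is left abstract in the patent; a top-k cutoff is only one natural choice, and a score threshold would fit the claim language equally well.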
CN202110567377.5A 2021-05-24 2021-05-24 User intention processing method and device, computer equipment and storage medium Pending CN113157892A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110567377.5A CN113157892A (en) 2021-05-24 2021-05-24 User intention processing method and device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113157892A true CN113157892A (en) 2021-07-23

Family

ID=76877820

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110567377.5A Pending CN113157892A (en) 2021-05-24 2021-05-24 User intention processing method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113157892A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113722457A (en) * 2021-08-11 2021-11-30 北京零秒科技有限公司 Intention recognition method and device, storage medium, and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109388793A (en) * 2017-08-03 2019-02-26 阿里巴巴集团控股有限公司 Entity mask method, intension recognizing method and corresponding intrument, computer storage medium
CN109815492A (en) * 2019-01-04 2019-05-28 平安科技(深圳)有限公司 A kind of intension recognizing method based on identification model, identification equipment and medium
CN111125331A (en) * 2019-12-20 2020-05-08 京东方科技集团股份有限公司 Semantic recognition method and device, electronic equipment and computer-readable storage medium
WO2020140373A1 (en) * 2019-01-04 2020-07-09 平安科技(深圳)有限公司 Intention recognition method, recognition device and computer-readable storage medium
KR20210038860A (en) * 2020-06-29 2021-04-08 베이징 바이두 넷컴 사이언스 테크놀로지 컴퍼니 리미티드 Intent recommendation method, apparatus, device and storage medium
CN112699679A (en) * 2021-03-25 2021-04-23 北京沃丰时代数据科技有限公司 Emotion recognition method and device, electronic equipment and storage medium



Similar Documents

Publication Publication Date Title
CN111753060B (en) Information retrieval method, apparatus, device and computer readable storage medium
CN112131350B (en) Text label determining method, device, terminal and readable storage medium
CN108932342A (en) A kind of method of semantic matches, the learning method of model and server
CN112800170A (en) Question matching method and device and question reply method and device
CN113282711B (en) Internet of vehicles text matching method and device, electronic equipment and storage medium
CN113159187B (en) Classification model training method and device and target text determining method and device
CN113282729B (en) Knowledge graph-based question and answer method and device
CN115827819A (en) Intelligent question and answer processing method and device, electronic equipment and storage medium
Liu et al. Open intent discovery through unsupervised semantic clustering and dependency parsing
CN114997288A (en) Design resource association method
CN116975271A (en) Text relevance determining method, device, computer equipment and storage medium
CN114298055B (en) Retrieval method and device based on multilevel semantic matching, computer equipment and storage medium
CN114676346A (en) News event processing method and device, computer equipment and storage medium
Zhen et al. The research of convolutional neural network based on integrated classification in question classification
CN111507108B (en) Alias generation method and device, electronic equipment and computer readable storage medium
CN113157892A (en) User intention processing method and device, computer equipment and storage medium
Rogushina et al. Use of ontologies for metadata records analysis in big data
CN115186085A (en) Reply content processing method and interaction method of media content interaction content
CN115203206A (en) Data content searching method and device, computer equipment and readable storage medium
US11983205B2 (en) Semantic phrasal similarity
CN115577080A (en) Question reply matching method, system, server and storage medium
CN114595370A (en) Model training and sorting method and device, electronic equipment and storage medium
CN115129863A (en) Intention recognition method, device, equipment, storage medium and computer program product
CN114942981A (en) Question-answer query method and device, electronic equipment and computer readable storage medium
CN110633363B (en) Text entity recommendation method based on NLP and fuzzy multi-criterion decision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination