CN112380870A - User intention analysis method and device, electronic equipment and computer storage medium - Google Patents

User intention analysis method and device, electronic equipment and computer storage medium

Info

Publication number
CN112380870A
CN112380870A (application CN202011302192.3A)
Authority
CN
China
Prior art keywords
text
intention
user
layer
semantic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011302192.3A
Other languages
Chinese (zh)
Inventor
李志韬
王健宗
程宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN202011302192.3A priority Critical patent/CN112380870A/en
Publication of CN112380870A publication Critical patent/CN112380870A/en
Priority to PCT/CN2021/082893 priority patent/WO2021208696A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 Handling natural language data
    • G06F 40/30 Semantic analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Machine Translation (AREA)

Abstract

The invention relates to data analysis technology and discloses a user intention analysis method, which comprises the following steps: acquiring an input text of a user, and converting the input text into a semantic vector; performing intention prediction on the semantic vector to obtain a predicted intention label of the input text; performing feature extraction on the predicted intention label by using a feature extraction network, and generating a plurality of target intentions according to the extracted features; calculating the priority of each target intention in the plurality of target intentions; and selecting a preset number of target intentions from the plurality of target intentions as user intentions according to the priority. In addition, the invention also relates to blockchain technology, and the input text can be stored in a node of the blockchain. The invention also provides a user intention analysis device, an electronic device and a computer-readable storage medium. The invention can improve the accuracy of identifying the user intention.

Description

User intention analysis method and device, electronic equipment and computer storage medium
Technical Field
The present invention relates to the field of data analysis technologies, and in particular, to a method and an apparatus for analyzing user intention, an electronic device, and a computer-readable storage medium.
Background
With the widespread use of intelligent customer service, more and more companies and enterprises use intelligent robots to automatically answer users' questions. In the automatic answering process, how to accurately identify the user's intention from the user's question has become an increasingly important concern.
Most existing methods for identifying the user intention in the automatic answering process calculate the similarity between the user's question and a preset standard question based on a similarity algorithm, and identify the user intention of the question according to that similarity. However, because the language expression habits of different users differ, different users express the same intention very differently, which results in low recognition accuracy when the conventional similarity-based methods are used to recognize the user's intention.
Disclosure of Invention
The invention provides a user intention analysis method, a user intention analysis device, an electronic device and a computer-readable storage medium, and mainly aims to solve the problem that the accuracy of identifying the user intention is not high.
In order to achieve the above object, the present invention provides a method for analyzing user intention, comprising:
acquiring an input text of a user, and converting the input text into a semantic vector;
performing intention prediction on the semantic vector to obtain a prediction intention label of the input text;
performing feature extraction on the predicted intention label by using a feature extraction network, and generating a plurality of target intents according to the extracted features;
calculating the priority of each target intention in the plurality of target intentions;
and selecting a preset number of target intentions from the plurality of target intentions as user intentions according to the priority.
Optionally, the converting the input text into a semantic vector includes:
constructing a text vectorization model;
acquiring a historical text, and carrying out preset entity marking on the historical text to obtain a training text;
performing iterative training on the text vectorization model by using the training text until the text vectorization model converges to obtain a trained text vectorization model;
and converting the input text by using the trained text vectorization model to obtain a semantic vector of the input text.
Optionally, the obtaining a training text by performing a preset entity tag on the historical text includes:
constructing a label set comprising a non-preset entity character label, a preset entity starting character label and a preset entity middle character label according to a preset entity;
and marking each character in the historical text by using the label in the label set to obtain a training text.
Optionally, the iteratively training the text vectorization model by using the training text until the text vectorization model converges includes:
inputting the training text into the text vectorization model for vector conversion to obtain a predicted text vector;
acquiring a standard text vector corresponding to the training text;
calculating a loss value between the predicted text vector and the standard text vector, and determining that the text vectorization model converges when the loss value is smaller than a preset loss threshold value.
Optionally, the performing, by using a feature extraction network, feature extraction on the predicted intention tag includes:
tagging a data representation of the predicted intent tag by a visual layer of a feature extraction network;
and performing feature extraction on the data representation of the visual layer mark by using a machine learning algorithm through a hidden layer of a feature extraction network.
Optionally, the performing, by the hidden layer of the feature extraction network, feature extraction on the data representation of the visual layer marker by using a machine learning algorithm includes:
performing feature extraction on the data representation of the visual layer marker by using a machine learning algorithm as follows:
h = σ(w·Y + b)
where h is the data feature obtained by extracting features from the data representation marked by the visual layer, Y is the data representation, w is the weight matrix between the visual layer and the hidden layer, b is the bias vector of the hidden layer, and σ is the activation function.
Optionally, the performing intent prediction on the semantic vector to obtain a predicted intent tag of the input text includes:
constructing an intention prediction network comprising a plurality of layers of down-sampling layers;
utilizing a forward downsampling layer in the intention prediction network to downsample the semantic vector to obtain forward semantic features;
utilizing a backward downsampling layer in the intention prediction network to downsample the forward semantic features to obtain backward semantic features;
performing feature fusion on the obtained forward semantic features and backward semantic features to obtain fusion semantic features;
and using the fused semantic features as predicted intention labels of the input text.
In order to solve the above problems, the present invention also provides a user intention analysis device, including:
the vector conversion module is used for acquiring an input text of a user and converting the input text into a semantic vector;
the intention prediction module is used for carrying out intention prediction on the semantic vector to obtain a predicted intention label of the input text;
the feature extraction module is used for performing feature extraction on the predicted intention label by using a feature extraction network and generating a plurality of target intentions according to the extracted features;
the priority calculation module is used for calculating the priority of each target intention in the plurality of target intentions;
and the intention screening module is used for selecting a preset number of target intentions as user intentions according to the priority.
In order to solve the above problem, the present invention also provides an electronic device, including:
a memory storing at least one instruction; and
and the processor executes the instructions stored in the memory to realize the user intention analysis method.
In order to solve the above problem, the present invention also provides a computer-readable storage medium having at least one instruction stored therein, where the at least one instruction is executed by a processor in an electronic device to implement the user intention analysis method described above.
According to the embodiments of the invention, the input text of the user is obtained and converted into a semantic vector, so that the text information is digitized, which helps improve the efficiency of subsequently analyzing the input text; intention prediction is performed on the semantic vector to obtain the predicted intention label of the input text, which reduces the amount of data in the semantic vector containing a large number of semantics and helps improve the efficiency and accuracy of subsequently analyzing the user intention; feature extraction is performed on the predicted intention label by using a feature extraction network, and a plurality of target intentions are generated according to the extracted features, so that the user intention is predicted using the extracted features and the accuracy of predicting the user intention is improved; and the priorities are calculated, and the plurality of target intentions are ranked and screened according to the priorities, which improves the accuracy of the screened target intentions. Therefore, the user intention analysis method, the user intention analysis device, the electronic device and the computer-readable storage medium provided by the invention can solve the problem of low accuracy in identifying the user intention.
Drawings
Fig. 1 is a schematic flow chart illustrating a user intention analysis method according to an embodiment of the present invention;
FIG. 2 is a functional block diagram of an apparatus for analyzing user's intention according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device for implementing the user intention analysis method according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The embodiment of the application provides a user intention analysis method. The execution subject of the user intention analysis method includes, but is not limited to, at least one of electronic devices such as a server and a terminal, which can be configured to execute the method provided by the embodiments of the present application. In other words, the user intention analysis method may be performed by software or hardware installed in the terminal device or the server device, and the software may be a blockchain platform. The server includes but is not limited to: a single server, a server cluster, a cloud server or a cloud server cluster, and the like.
Fig. 1 is a schematic flow chart of a user intention analysis method according to an embodiment of the present invention. In this embodiment, the user intention analysis method includes:
and S1, acquiring an input text of the user, and converting the input text into a semantic vector.
In the embodiment of the present invention, the input text of the user may be any text provided by the user that contains the user's intention, for example, a text in which the user inquires about certain service information, a text in which the user consults about a certain product, and the like.
In the embodiment of the invention, the input text pre-stored by the user can be acquired from a blockchain node by using a Python statement with a data-fetching function, and the high data throughput of blockchain nodes can improve the efficiency of acquiring the input text.
In detail, the converting the input text into a semantic vector includes:
constructing a text vectorization model;
acquiring a historical text, and carrying out preset entity marking on the historical text to obtain a training text;
performing iterative training on the text vectorization model by using the training text until the text vectorization model converges to obtain a trained text vectorization model;
and converting the input text by using the trained text vectorization model to obtain a semantic vector of the input text.
In the embodiment of the invention, an initial vectorization model is constructed by using a deep learning network model; in detail, a BERT-base network model is used as the initial vectorization model, and a serialized labeling algorithm network is connected after the initial vectorization model to obtain the text vectorization model, wherein the serialized labeling algorithm network is used to constrain the sequence of characters input to the initial vectorization model.
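As an illustration of this construction, the following sketch builds a BERT-base encoder with a simple character-level labeling head on top. The use of the HuggingFace transformers library, the pretrained model name, the linear tagging head and the label count are assumptions made for illustration; the patent does not specify concrete libraries.

```python
# Illustrative sketch only: a BERT-base encoder followed by a character-level
# sequence-labeling head, loosely matching the text vectorization model described
# above. The transformers library, pretrained model name and label count are
# assumptions, not details taken from the patent.
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class TextVectorizationModel(nn.Module):
    def __init__(self, num_labels: int = 3, pretrained: str = "bert-base-chinese"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(pretrained)      # initial vectorization model
        self.tagger = nn.Linear(self.encoder.config.hidden_size, num_labels)  # labeling head

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        semantic_vectors = out.last_hidden_state     # per-character semantic vectors
        tag_logits = self.tagger(semantic_vectors)   # entity-label scores per character
        return semantic_vectors, tag_logits

tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
inputs = tokenizer("某金融公司提供零利率贷款", return_tensors="pt")
vectors, logits = TextVectorizationModel()(inputs["input_ids"], inputs["attention_mask"])
```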
In detail, the obtaining of the training text by performing the preset entity tagging on the historical text includes:
constructing a label set comprising a non-preset entity character label, a preset entity starting character label and a preset entity middle character label according to a preset entity;
and marking each character in the historical text by using the label in the label set to obtain a training text.
Specifically, the tag set includes a plurality of preset tags, such as a non-preset entity character tag, a preset entity start character tag, and a preset entity middle character tag, where the non-preset entity character tag is used to mark characters of a non-preset entity in the history text, the preset entity start character tag is used to mark start characters of a preset entity in the history text, and the preset entity middle character tag is used to mark characters of the preset entity in the history text except the start characters.
For example, suppose the text information contained in the historical text is 'a certain financial company provides a zero-interest-rate loan' and the preset entity is a financial entity, so that the tag set comprises a financial-entity start character tag, a financial-entity middle character tag and a non-financial-entity character tag. The text is then marked with the tags in the tag set: the 'financial' characters are marked as financial-entity start characters using the financial-entity start character tag, the 'company' characters are marked as financial-entity middle characters using the financial-entity middle character tag, and the 'provides', 'zero interest rate' and 'loan' characters are marked as non-financial-entity characters using the non-financial-entity character tag.
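As a concrete illustration of this marking step, the sketch below applies a BIO-style tag set to the example sentence; the tag names ("O", "B-FIN", "I-FIN") and the assumed entity span are illustrative choices, not terminology from the patent.

```python
# Illustrative sketch of the character-level marking described above, using a
# BIO-style tag set: "O" for non-preset-entity characters, "B-FIN" for the start
# character of a financial entity, "I-FIN" for its middle characters. The tag names
# and the assumed entity span are illustrative, not terminology from the patent.
def mark_characters(text, entity_spans):
    labels = ["O"] * len(text)                 # default: non-preset-entity character tag
    for start, end in entity_spans:            # each span covers one preset entity
        labels[start] = "B-FIN"                # preset entity start character tag
        for i in range(start + 1, end):
            labels[i] = "I-FIN"                # preset entity middle character tag
    return list(zip(text, labels))

# Assume the financial entity in "某金融公司提供零利率贷款" is "金融公司" (characters 1-4).
print(mark_characters("某金融公司提供零利率贷款", [(1, 5)]))
```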
Further, the iteratively training the text vectorization model by using the training text until the text vectorization model converges includes:
inputting the training text into the text vectorization model for vector conversion to obtain a predicted text vector;
acquiring a standard text vector corresponding to the training text;
calculating a loss value between the predicted text vector and the standard text vector, and determining that the text vectorization model converges when the loss value is smaller than a preset loss threshold value.
In the embodiment of the invention, the prestored standard text vector can be acquired from the database by using the python statement with the data grabbing function.
In detail, the embodiment of the present invention may calculate the loss value between the predicted text vector and the standard text vector by using a preset loss function, which includes, but is not limited to, a cross entropy loss function, a square error loss function, and a regular loss function. And when the loss value is smaller than a preset loss threshold value, the text vectorization model is converged, and the trained text vectorization model is obtained.
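For illustration, the sketch below shows one way such an iterative training loop could be organized, stopping once the loss falls below the preset threshold. The optimizer, learning rate, threshold value and the squared-error loss are assumptions; the model is assumed to follow the interface of the earlier sketch.

```python
# Illustrative sketch of iterative training until convergence, stopping once the
# loss between the predicted and standard text vectors falls below a preset
# threshold. The optimizer, learning rate, threshold value and the squared-error
# loss are assumptions; the model is assumed to follow the earlier sketch's interface.
import torch

def train_until_convergence(model, batches, loss_threshold=0.05, max_epochs=100):
    criterion = torch.nn.MSELoss()             # squared-error loss, one of the options listed
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
    for _ in range(max_epochs):
        for input_ids, attention_mask, standard_vectors in batches:
            predicted_vectors, _ = model(input_ids, attention_mask)
            loss = criterion(predicted_vectors, standard_vectors)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if loss.item() < loss_threshold:   # model is considered converged
                return model
    return model
```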
The embodiment of the invention converts the input text into the semantic vector, can realize the digitization of the text information, and is beneficial to improving the efficiency of analyzing the input text subsequently.
And S2, performing intention prediction on the semantic vector to obtain a predicted intention label of the input text.
In the embodiment of the present invention, the performing intent prediction on the semantic vector to obtain a predicted intent tag of the input text includes:
constructing an intention prediction network comprising a plurality of layers of down-sampling layers;
utilizing a forward downsampling layer in the intention prediction network to downsample the semantic vector to obtain forward semantic features;
utilizing a backward downsampling layer in the intention prediction network to downsample the forward semantic features to obtain backward semantic features;
performing feature fusion on the obtained forward semantic features and backward semantic features to obtain fusion semantic features;
and using the fused semantic features as predicted intention labels of the input text.
In detail, the embodiment of the invention adopts an LSTM network (Long Short-Term Memory Net) to construct an intention prediction network comprising a plurality of layers of down-sampling layers, and utilizes a structure of the plurality of layers of down-sampling layers in the LSTM network to carry out down-sampling on semantic vectors for a plurality of times, thereby being beneficial to extracting more accurate semantic features and improving the accuracy of the generated prediction intention labels.
In particular, a forward downsampling layer is defined relative to a backward downsampling layer. For example, if the intention prediction network includes 4 downsampling layers, the first downsampling layer that downsamples the semantic vector is a forward downsampling layer relative to the second, third and fourth downsampling layers; the second downsampling layer is a backward downsampling layer relative to the first downsampling layer, and so on.
In detail, when the current downsampling layer is the initial downsampling layer, it downsamples the semantic vector directly to obtain the forward semantic features; when the current downsampling layer is not the initial downsampling layer, it downsamples the result (the forward semantic features) obtained by its forward downsampling layer to obtain the backward semantic features.
Specifically, for example, the semantic vector is downsampled in a first downsampling layer to obtain a first semantic feature;
down-sampling the first semantic features in a second down-sampling layer to obtain second semantic features;
down-sampling the second semantic features in a third down-sampling layer to obtain third semantic features;
down-sampling the third semantic features in a fourth down-sampling layer to obtain fourth semantic features;
and performing feature fusion on the first semantic feature, the second semantic feature, the third semantic feature and the fourth semantic feature to obtain a fusion semantic feature, and using the fusion semantic feature as a prediction intention label of the input text.
The embodiment of the invention carries out intention prediction on the semantic vector to obtain the predicted intention label of the input text, can reduce the data volume in the semantic vector containing a large number of semantics, and is favorable for improving the efficiency of analyzing the intention of the user subsequently.
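As a rough illustration of such a network, the sketch below stacks four LSTM layers, treats each layer as one down-sampling layer over the previous layer's output, and fuses the per-layer features by concatenation. The layer sizes and the concatenation-based fusion are assumptions not specified by the patent.

```python
# Illustrative sketch of an intention prediction network built from four stacked
# LSTM layers, each treated as one down-sampling layer over the previous layer's
# output, with the per-layer features fused by concatenation. The layer sizes and
# the concatenation-based fusion are assumptions not specified by the patent.
import torch
import torch.nn as nn

class IntentPredictionNetwork(nn.Module):
    def __init__(self, input_dim=768, hidden_dim=128, num_layers=4):
        super().__init__()
        dims = [input_dim] + [hidden_dim] * num_layers
        self.layers = nn.ModuleList(
            nn.LSTM(dims[i], dims[i + 1], batch_first=True) for i in range(num_layers)
        )

    def forward(self, semantic_vectors):        # (batch, seq_len, input_dim)
        features, x = [], semantic_vectors
        for layer in self.layers:               # each layer down-samples the previous output
            x, (h_n, _) = layer(x)
            features.append(h_n[-1])            # final hidden state of this layer
        return torch.cat(features, dim=-1)      # fused semantic features (predicted intention label)

fused = IntentPredictionNetwork()(torch.randn(2, 16, 768))
```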
And S3, performing feature extraction on the predicted intention labels by utilizing a feature extraction network, and generating a plurality of target intents according to the extracted features.
In an embodiment of the present invention, the feature extraction network includes a plurality of visual layers and a plurality of hidden layers, where the visual layers include a plurality of visual units, the hidden layers include a plurality of hidden units, the number of the visual layers corresponds to the number of the hidden layers, and the number of the visual units corresponds to the number of the hidden units.
In detail, the feature extraction of the predicted intention tag by using a feature extraction network includes:
tagging a data representation of the predicted intent tag by a visual layer of a feature extraction network;
and performing feature extraction on the data representation of the visual layer mark by using a machine learning algorithm through a hidden layer of a feature extraction network.
Specifically, in the process of feature extraction, one data representation in the intention label is marked through each visual unit in a visual layer of a feature extraction network, the data representation is extracted through each hidden unit in a hidden layer of the feature extraction network, and each hidden unit in the hidden layer extracts the data representation of the visual unit mark matched with the hidden unit based on a machine learning algorithm.
In an embodiment of the present invention, the states of the visual unit and the hidden unit are represented by boolean values, such as 0 and 1, where 0 represents an inactive state and 1 represents an active state. When the visual element and/or the hidden element is activated by the activation function, the data contained in the visual element may be transmitted to the hidden element matching the visual element.
In particular, the activation function of the visual unit and/or the hidden unit is as follows:
E(v, h, θ) = -∑_{i=1}^{I} a_i·v_i - ∑_{j=1}^{J} b_j·h_j - ∑_{i=1}^{I} ∑_{j=1}^{J} v_i·w_ij·h_j
wherein E (v, h, θ) is an activation value, I is the number of visual units in the visual layer, J is the number of hidden units in the hidden layer, a is a bias vector of the visual layer, b is a bias vector of the hidden layer, w is a weight matrix between the visual layer and the hidden layer, v is any visual unit in the visual layer, h is any hidden unit in the hidden layer, and θ is a preset error parameter.
When the activation value of the activation function is greater than an activation threshold, the visual unit and/or the hidden unit is activated by the activation function. After the visual unit and/or the hidden unit is activated by the activation function, data contained in the visual unit and/or the hidden unit is transmitted to the hidden unit matched with the visual unit.
Preferably, in the embodiment of the present invention, the visual unit in the visual layer is matched with the hidden unit in the hidden layer by the following matching algorithm:
P(v, h, θ) = exp(-E(v, h, θ)) / Z
wherein P (v, h, θ) is a matching value, v is any visual unit in the visual layer, h is any hidden unit in the hidden layer, θ is a preset error parameter, Z is a normalization factor of the feature extraction network, and exp (-E (v, h, θ)) is an expectation of matching the visual unit v and the hidden unit h.
Preferably, the activated visual layer can transmit data to the activated hidden layer matching the visual layer only after the visual units in the visual layer are matched with the hidden units in the hidden layer.
Further, when a visual unit in the visual layer is activated, the probability that the corresponding hidden unit in the hidden layer is also activated is P(v_j = 1 | h; θ):
P(v_j = 1 | h; θ) = δ(∑_{j=1}^{J} w_j·h_j + b_j)
where v_j is the jth hidden unit in the hidden layer, h is any hidden unit in the hidden layer, θ is a preset error parameter, J is the number of hidden units in the hidden layer, w is the weight matrix between the visual layer and the hidden layer, b is the bias vector of the hidden layer, and δ is a preset probability coefficient.
When a hidden unit in the hidden layer is activated, the probability that the corresponding visual unit in the visual layer is also activated is P(h_i = 1 | v; θ):
P(h_i = 1 | v; θ) = δ(∑_{i=1}^{I} w_i·v_i + a_i)
where h_i is the ith visual unit in the visual layer, v is any visual unit in the visual layer, θ is a preset error parameter, I is the number of visual units in the visual layer, w is the weight matrix between the visual layer and the hidden layer, a is the bias vector of the visual layer, and δ is a preset probability coefficient.
In the embodiment of the present invention, after a visual unit/hidden unit in the visual layer/hidden layer is activated, it is indicated that the hidden unit/visual unit is activated only when the probability that the hidden unit/visual unit matching with the visual unit/hidden unit is activated is 1.
In the embodiment of the invention, a plurality of visible layers and hidden layers in a plurality of feature extraction networks are superposed to realize more accurate feature extraction of the prediction intention label.
Specifically, the performing, by a hidden layer of a feature extraction network, feature extraction on the data representation of the visual layer marker by using a machine learning algorithm includes:
performing feature extraction on the data representation of the visual layer marker by using a machine learning algorithm as follows:
h = σ(w·Y + b)
where h is the data feature obtained by extracting features from the data representation marked by the visual layer, Y is the data representation, w is the weight matrix between the visual layer and the hidden layer, b is the bias vector of the hidden layer, and σ is the activation function.
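To make the hidden-layer extraction step concrete, the following numpy sketch applies h = σ(w·Y + b) to a data representation; treating σ as the logistic sigmoid and the chosen matrix shapes are assumptions.

```python
# Illustrative numpy sketch of the hidden-layer extraction h = sigma(w*Y + b) given
# above. Treating sigma as the logistic sigmoid and the chosen shapes are assumptions.
import numpy as np

def extract_hidden_features(Y, w, b):
    """Y: data representation marked by the visual layer; w: weight matrix between
    the visual layer and the hidden layer; b: bias vector of the hidden layer."""
    return 1.0 / (1.0 + np.exp(-(Y @ w + b)))   # sigmoid activation of each hidden unit

rng = np.random.default_rng(0)
Y = rng.random((1, 512))                          # one data representation, 512 visible units
h = extract_hidden_features(Y, rng.normal(size=(512, 256)), np.zeros(256))
```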
Further, the generating a plurality of target intentions according to the extracted features includes: calculating the similarity between the extracted features and a plurality of preset standard intentions, and determining the standard intentions whose similarity is greater than a similarity threshold as the target intentions, wherein the similarity between the extracted features and the plurality of preset standard intentions can be calculated using a cosine similarity algorithm.
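The sketch below illustrates this matching step: cosine similarity between the extracted feature and each preset standard intention, keeping the intentions whose similarity exceeds the threshold. The threshold value and the intention vectors are illustrative assumptions.

```python
# Illustrative sketch of generating target intentions: cosine similarity between
# the extracted feature and each preset standard intention, keeping those above a
# threshold. The threshold value and the intention vectors are assumptions.
import numpy as np

def generate_target_intentions(feature, standard_intentions, threshold=0.8):
    targets = []
    for name, vec in standard_intentions.items():
        cos = float(np.dot(feature, vec) / (np.linalg.norm(feature) * np.linalg.norm(vec)))
        if cos > threshold:                      # similarity greater than the threshold
            targets.append(name)
    return targets
```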
According to the embodiment of the invention, the feature extraction network is utilized to extract the features of the predicted intention labels and generate a plurality of target intentions according to the extracted features, and the extracted features are utilized to predict the intention of the user, so that the accuracy of predicting the intention of the user is improved.
And S4, calculating the priority of each target intention in the plurality of target intentions.
In this embodiment of the present invention, the calculating the priority of each target intention in the plurality of target intentions includes:
calculating a priority of each of the plurality of target intents using a priority algorithm as follows:
Pir = δ * M_k
where Pir is the priority, δ is a preset weight coefficient, and M_k is the kth target intention in the plurality of target intentions.
And S5, selecting a preset number of target intentions from the plurality of target intentions according to the priority as user intentions.
In the embodiment of the present invention, the selecting a preset number of target intentions according to the priority as user intentions includes:
ranking the plurality of target intentions in order of priority from large to small;
and selecting, from front to back, a preset number of target intentions from the ranked target intentions as the user intentions.
For example, the plurality of target intentions includes intention A, intention B, intention C and intention D, where the priority of intention A is 50, the priority of intention B is 40, the priority of intention C is 60 and the priority of intention D is 30; therefore, the target intentions ranked in order of priority from large to small are: intention C, intention A, intention B, intention D. When the preset number is 2, intention C and intention A are selected from the ranked target intentions, from front to back, as the user intentions.
In detail, the priority is calculated, and the plurality of target intentions are sorted and screened according to the priority, so that the accuracy of the screened target intentions is improved.
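Taken together, steps S4 and S5 can be illustrated with the small sketch below, which computes Pir = δ * M_k for each target intention, ranks the intentions from large to small and keeps the preset number; representing M_k as a numeric score per intention and the value of δ are assumptions.

```python
# Illustrative sketch of steps S4-S5: compute Pir = delta * M_k for each target
# intention, rank from large to small, and keep the preset number. Representing
# M_k as a numeric score per intention and the value of delta are assumptions.
def select_user_intentions(intention_scores, delta=1.0, preset_number=2):
    priorities = {name: delta * score for name, score in intention_scores.items()}  # Pir = delta * M_k
    ranked = sorted(priorities, key=priorities.get, reverse=True)                    # large to small
    return ranked[:preset_number]                                                    # preset number of intentions

# The example above: A=50, B=40, C=60, D=30 with preset number 2 -> ["C", "A"].
print(select_user_intentions({"A": 50, "B": 40, "C": 60, "D": 30}))
```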
According to the embodiments of the invention, the input text of the user is obtained and converted into a semantic vector, so that the text information is digitized, which helps improve the efficiency of subsequently analyzing the input text; intention prediction is performed on the semantic vector to obtain the predicted intention label of the input text, which reduces the amount of data in the semantic vector containing a large number of semantics and helps improve the efficiency and accuracy of subsequently analyzing the user intention; feature extraction is performed on the predicted intention label by using a feature extraction network, and a plurality of target intentions are generated according to the extracted features, so that the user intention is predicted using the extracted features and the accuracy of predicting the user intention is improved; and the priorities are calculated, and the plurality of target intentions are ranked and screened according to the priorities, which improves the accuracy of the screened target intentions. Therefore, the user intention analysis method provided by the invention can solve the problem of low accuracy in identifying the user intention.
Fig. 2 is a functional block diagram of a user intention analyzing apparatus according to an embodiment of the present invention.
The user intention analysis device 100 according to the invention may be installed in an electronic apparatus. According to the implemented functions, the user intention analysis device 100 may include a vector conversion module 101, an intention prediction module 102, a feature extraction module 103, a priority calculation module 104 and an intention screening module 105. A module of the invention, which may also be referred to as a unit, refers to a series of computer program segments that can be executed by a processor of an electronic device to perform a fixed function, and that are stored in a memory of the electronic device.
In the present embodiment, the functions regarding the respective modules/units are as follows:
the vector conversion module 101 is configured to obtain an input text of a user, and convert the input text into a semantic vector;
the intention prediction module 102 is configured to perform intention prediction on the semantic vector to obtain a predicted intention tag of the input text;
the feature extraction module 103 is configured to perform feature extraction on the predicted intention tag by using a feature extraction network, and generate a plurality of target intents according to the extracted features;
the priority calculation module 104 is configured to calculate a priority of each target intention in the plurality of target intentions;
the intention screening module 105 is configured to select a preset number of target intentions as the user intentions according to the priority.
In detail, the specific implementation of each module of the user intention analysis device is as follows:
the vector conversion module 101 is configured to obtain an input text of a user, and convert the input text into a semantic vector.
In the embodiment of the present invention, the input text of the user may be any text provided by the user that contains the user's intention, for example, a text in which the user inquires about certain service information, a text in which the user consults about a certain product, and the like.
In the embodiment of the invention, the input text pre-stored by the user can be acquired from a blockchain node by using a Python statement with a data-fetching function, and the high data throughput of blockchain nodes can improve the efficiency of acquiring the input text.
In detail, the vector conversion module 101 is specifically configured to:
acquiring an input text of a user;
constructing a text vectorization model;
acquiring a historical text, and carrying out preset entity marking on the historical text to obtain a training text;
performing iterative training on the text vectorization model by using the training text until the text vectorization model converges to obtain a trained text vectorization model;
and converting the input text by using the trained text vectorization model to obtain a semantic vector of the input text.
In the embodiment of the invention, an initial vectorization model is constructed by using a deep learning network model; in detail, a BERT-base network model is used as the initial vectorization model, and a serialized labeling algorithm network is connected after the initial vectorization model to obtain the text vectorization model, wherein the serialized labeling algorithm network is used to constrain the sequence of characters input to the initial vectorization model.
In detail, the obtaining of the training text by performing the preset entity tagging on the historical text includes:
constructing a label set comprising a non-preset entity character label, a preset entity starting character label and a preset entity middle character label according to a preset entity;
and marking each character in the historical text by using the label in the label set to obtain a training text.
Specifically, the tag set includes a plurality of preset tags, such as a non-preset entity character tag, a preset entity start character tag, and a preset entity middle character tag, where the non-preset entity character tag is used to mark characters of a non-preset entity in the history text, the preset entity start character tag is used to mark start characters of a preset entity in the history text, and the preset entity middle character tag is used to mark characters of the preset entity in the history text except the start characters.
For example, suppose the text information contained in the historical text is 'a certain financial company provides a zero-interest-rate loan' and the preset entity is a financial entity, so that the tag set comprises a financial-entity start character tag, a financial-entity middle character tag and a non-financial-entity character tag. The text is then marked with the tags in the tag set: the 'financial' characters are marked as financial-entity start characters using the financial-entity start character tag, the 'company' characters are marked as financial-entity middle characters using the financial-entity middle character tag, and the 'provides', 'zero interest rate' and 'loan' characters are marked as non-financial-entity characters using the non-financial-entity character tag.
Further, the iteratively training the text vectorization model by using the training text until the text vectorization model converges includes:
inputting the training text into the text vectorization model for vector conversion to obtain a predicted text vector;
acquiring a standard text vector corresponding to the training text;
calculating a loss value between the predicted text vector and the standard text vector, and determining that the text vectorization model converges when the loss value is smaller than a preset loss threshold value.
In the embodiment of the invention, the prestored standard text vector can be acquired from the database by using the python statement with the data grabbing function.
In detail, the embodiment of the present invention may calculate the loss value between the predicted text vector and the standard text vector by using a preset loss function, which includes, but is not limited to, a cross entropy loss function, a square error loss function, and a regular loss function. And when the loss value is smaller than a preset loss threshold value, the text vectorization model is converged, and the trained text vectorization model is obtained.
The embodiment of the invention converts the input text into the semantic vector, can realize the digitization of the text information, and is beneficial to improving the efficiency of analyzing the input text subsequently.
The intention prediction module 102 is configured to perform intention prediction on the semantic vector to obtain a predicted intention tag of the input text.
In an embodiment of the present invention, the intention prediction module 102 is specifically configured to:
constructing an intention prediction network comprising a plurality of layers of down-sampling layers;
utilizing a forward downsampling layer in the intention prediction network to downsample the semantic vector to obtain forward semantic features;
utilizing a backward downsampling layer in the intention prediction network to downsample the forward semantic features to obtain backward semantic features;
performing feature fusion on the obtained forward semantic features and backward semantic features to obtain fusion semantic features;
and using the fused semantic features as predicted intention labels of the input text.
In detail, the embodiment of the invention adopts an LSTM network (Long Short-Term Memory Net) to construct an intention prediction network comprising a plurality of layers of down-sampling layers, and utilizes a structure of the plurality of layers of down-sampling layers in the LSTM network to carry out down-sampling on semantic vectors for a plurality of times, thereby being beneficial to extracting more accurate semantic features and improving the accuracy of the generated prediction intention labels.
In particular, a forward downsampling layer is defined relative to a backward downsampling layer. For example, if the intention prediction network includes 4 downsampling layers, the first downsampling layer that downsamples the semantic vector is a forward downsampling layer relative to the second, third and fourth downsampling layers; the second downsampling layer is a backward downsampling layer relative to the first downsampling layer, and so on.
In detail, when the current downsampling layer is the initial downsampling layer, it downsamples the semantic vector directly to obtain the forward semantic features; when the current downsampling layer is not the initial downsampling layer, it downsamples the result (the forward semantic features) obtained by its forward downsampling layer to obtain the backward semantic features.
Specifically, for example, the semantic vector is downsampled in a first downsampling layer to obtain a first semantic feature;
down-sampling the first semantic features in a second down-sampling layer to obtain second semantic features;
down-sampling the second semantic features in a third down-sampling layer to obtain third semantic features;
down-sampling the third semantic features in a fourth down-sampling layer to obtain fourth semantic features;
and performing feature fusion on the first semantic feature, the second semantic feature, the third semantic feature and the fourth semantic feature to obtain a fusion semantic feature, and using the fusion semantic feature as a prediction intention label of the input text.
The embodiment of the invention carries out intention prediction on the semantic vector to obtain the predicted intention label of the input text, can reduce the data volume in the semantic vector containing a large number of semantics, and is favorable for improving the efficiency of analyzing the intention of the user subsequently.
The feature extraction module 103 is configured to perform feature extraction on the predicted intention tag by using a feature extraction network, and generate a plurality of target intents according to the extracted features.
In an embodiment of the present invention, the feature extraction network includes a plurality of visual layers and a plurality of hidden layers, where the visual layers include a plurality of visual units, the hidden layers include a plurality of hidden units, the number of the visual layers corresponds to the number of the hidden layers, and the number of the visual units corresponds to the number of the hidden units.
In detail, the feature extraction module 103 is specifically configured to:
tagging a data representation of the predicted intent tag by a visual layer of a feature extraction network;
performing feature extraction on the data representation of the visible layer mark by using a machine learning algorithm through a hidden layer of a feature extraction network;
and generating a plurality of target intents according to the extracted features.
Specifically, in the process of feature extraction, one data representation in the intention label is marked through each visual unit in a visual layer of a feature extraction network, the data representation is extracted through each hidden unit in a hidden layer of the feature extraction network, and each hidden unit in the hidden layer extracts the data representation of the visual unit mark matched with the hidden unit based on a machine learning algorithm.
In an embodiment of the present invention, the states of the visual unit and the hidden unit are represented by boolean values, such as 0 and 1, where 0 represents an inactive state and 1 represents an active state. When the visual element and/or the hidden element is activated by the activation function, the data contained in the visual element may be transmitted to the hidden element matching the visual element.
In particular, the activation function of the visual unit and/or the hidden unit is as follows:
E(v, h, θ) = -∑_{i=1}^{I} a_i·v_i - ∑_{j=1}^{J} b_j·h_j - ∑_{i=1}^{I} ∑_{j=1}^{J} v_i·w_ij·h_j
wherein E (v, h, θ) is an activation value, I is the number of visual units in the visual layer, J is the number of hidden units in the hidden layer, a is a bias vector of the visual layer, b is a bias vector of the hidden layer, w is a weight matrix between the visual layer and the hidden layer, v is any visual unit in the visual layer, h is any hidden unit in the hidden layer, and θ is a preset error parameter.
When the activation value of the activation function is greater than an activation threshold, the visual unit and/or the hidden unit is activated by the activation function. After the visual unit and/or the hidden unit is activated by the activation function, data contained in the visual unit and/or the hidden unit is transmitted to the hidden unit matched with the visual unit.
Preferably, in the embodiment of the present invention, the visual unit in the visual layer is matched with the hidden unit in the hidden layer by the following matching algorithm:
P(v, h, θ) = exp(-E(v, h, θ)) / Z
wherein P (v, h, θ) is a matching value, v is any visual unit in the visual layer, h is any hidden unit in the hidden layer, θ is a preset error parameter, Z is a normalization factor of the feature extraction network, and exp (-E (v, h, θ)) is an expectation of matching the visual unit v and the hidden unit h.
Preferably, the activated visual layer can transmit data to the activated hidden layer matching the visual layer only after the visual units in the visual layer are matched with the hidden units in the hidden layer.
Further, when a visual unit in the visual layer is activated, the probability that the corresponding hidden unit in the hidden layer is also activated is P(v_j = 1 | h; θ):
P(v_j = 1 | h; θ) = δ(∑_{j=1}^{J} w_j·h_j + b_j)
where v_j is the jth hidden unit in the hidden layer, h is any hidden unit in the hidden layer, θ is a preset error parameter, J is the number of hidden units in the hidden layer, w is the weight matrix between the visual layer and the hidden layer, b is the bias vector of the hidden layer, and δ is a preset probability coefficient.
When a hidden unit in the hidden layer is activated, the probability that the corresponding visual unit in the visual layer is also activated is P(h_i = 1 | v; θ):
P(h_i = 1 | v; θ) = δ(∑_{i=1}^{I} w_i·v_i + a_i)
where h_i is the ith visual unit in the visual layer, v is any visual unit in the visual layer, θ is a preset error parameter, I is the number of visual units in the visual layer, w is the weight matrix between the visual layer and the hidden layer, a is the bias vector of the visual layer, and δ is a preset probability coefficient.
In the embodiment of the present invention, after a visual unit/hidden unit in the visual layer/hidden layer is activated, it is indicated that the hidden unit/visual unit is activated only when the probability that the hidden unit/visual unit matching with the visual unit/hidden unit is activated is 1.
In the embodiment of the invention, a plurality of visible layers and hidden layers in a plurality of feature extraction networks are superposed to realize more accurate feature extraction of the prediction intention label.
Specifically, the performing, by a hidden layer of a feature extraction network, feature extraction on the data representation of the visual layer marker by using a machine learning algorithm includes:
performing feature extraction on the data representation of the visual layer marker by using a machine learning algorithm as follows:
h = σ(w·Y + b)
where h is the data feature obtained by extracting features from the data representation marked by the visual layer, Y is the data representation, w is the weight matrix between the visual layer and the hidden layer, b is the bias vector of the hidden layer, and σ is the activation function.
Further, the generating a plurality of target intentions according to the extracted features includes: calculating the similarity between the extracted features and a plurality of preset standard intentions, and determining the standard intentions whose similarity is greater than a similarity threshold as the target intentions, wherein the similarity between the extracted features and the plurality of preset standard intentions can be calculated using a cosine similarity algorithm.
According to the embodiment of the invention, the feature extraction network is utilized to extract the features of the predicted intention labels and generate a plurality of target intentions according to the extracted features, and the extracted features are utilized to predict the intention of the user, so that the accuracy of predicting the intention of the user is improved.
The priority calculation module 104 is configured to calculate a priority of each target intent of the plurality of target intentions.
In this embodiment of the present invention, the priority calculation module 104 is specifically configured to:
calculating a priority of each of the plurality of target intents using a priority algorithm as follows:
Pir = δ * M_k
where Pir is the priority, δ is a preset weight coefficient, and M_k is the kth target intention in the plurality of target intentions.
The intention screening module 105 is configured to select a preset number of target intentions as the user intentions according to the priority.
In an embodiment of the present invention, the intention screening module 105 is specifically configured to:
ranking the plurality of target intentions in order of priority from large to small;
and selecting, from front to back, a preset number of target intentions from the ranked target intentions as the user intentions.
For example, the plurality of target intentions includes intention A, intention B, intention C and intention D, where the priority of intention A is 50, the priority of intention B is 40, the priority of intention C is 60 and the priority of intention D is 30; therefore, the target intentions ranked in order of priority from large to small are: intention C, intention A, intention B, intention D. When the preset number is 2, intention C and intention A are selected from the ranked target intentions, from front to back, as the user intentions.
In detail, the priority is calculated, and the plurality of target intentions are sorted and screened according to the priority, so that the accuracy of the screened target intentions is improved.
According to the embodiments of the invention, the input text of the user is obtained and converted into a semantic vector, so that the text information is digitized, which helps improve the efficiency of subsequently analyzing the input text; intention prediction is performed on the semantic vector to obtain the predicted intention label of the input text, which reduces the amount of data in the semantic vector containing a large number of semantics and helps improve the efficiency and accuracy of subsequently analyzing the user intention; feature extraction is performed on the predicted intention label by using a feature extraction network, and a plurality of target intentions are generated according to the extracted features, so that the user intention is predicted using the extracted features and the accuracy of predicting the user intention is improved; and the priorities are calculated, and the plurality of target intentions are ranked and screened according to the priorities, which improves the accuracy of the screened target intentions. Therefore, the user intention analysis device provided by the invention can solve the problem of low accuracy in identifying the user intention.
Fig. 3 is a schematic structural diagram of an electronic device for implementing a user intention analysis method according to an embodiment of the present invention.
The electronic device 1 may comprise a processor 10, a memory 11 and a bus, and may further comprise a computer program, such as a user intention analysis program 12, stored in the memory 11 and executable on the processor 10.
The memory 11 includes at least one type of readable storage medium, which includes flash memory, removable hard disk, multimedia card, card-type memory (e.g., SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, etc. The memory 11 may in some embodiments be an internal storage unit of the electronic device 1, such as a removable hard disk of the electronic device 1. The memory 11 may also be an external storage device of the electronic device 1 in other embodiments, such as a plug-in mobile hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the electronic device 1. Further, the memory 11 may also include both an internal storage unit and an external storage device of the electronic device 1. The memory 11 may be used not only to store application software installed in the electronic device 1 and various types of data, such as codes of the user intention analysis program 12, but also to temporarily store data that has been output or is to be output.
The processor 10 may be composed of an integrated circuit in some embodiments, for example, a single packaged integrated circuit, or may be composed of a plurality of integrated circuits packaged with the same or different functions, including one or more Central Processing Units (CPUs), microprocessors, digital Processing chips, graphics processors, and combinations of various control chips. The processor 10 is a Control Unit (Control Unit) of the electronic device, connects various components of the whole electronic device by using various interfaces and lines, and executes various functions and processes data of the electronic device 1 by running or executing programs or modules (e.g., user intention analyzing programs, etc.) stored in the memory 11 and calling data stored in the memory 11.
The bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. The bus is arranged to enable connection communication between the memory 11 and at least one processor 10 or the like.
Fig. 3 shows only an electronic device with certain components, and it will be understood by those skilled in the art that the structure shown in fig. 3 does not limit the electronic device 1, which may include fewer or more components than shown, a combination of certain components, or a different arrangement of components.
For example, although not shown, the electronic device 1 may further include a power supply (such as a battery) for supplying power to each component, and preferably, the power supply may be logically connected to the at least one processor 10 through a power management device, so as to implement functions of charge management, discharge management, power consumption management, and the like through the power management device. The power supply may also include any component of one or more dc or ac power sources, recharging devices, power failure detection circuitry, power converters or inverters, power status indicators, and the like. The electronic device 1 may further include various sensors, a bluetooth module, a Wi-Fi module, and the like, which are not described herein again.
Further, the electronic device 1 may also include a network interface; optionally, the network interface may include a wired interface and/or a wireless interface (such as a Wi-Fi interface or a Bluetooth interface), which is generally used to establish a communication connection between the electronic device 1 and other electronic devices.
Optionally, the electronic device 1 may further comprise a user interface. The user interface may be a display (Display) or an input unit (such as a keyboard), and may optionally also be a standard wired interface or a wireless interface. Alternatively, in some embodiments, the display may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device, or the like. The display, which may also be referred to as a display screen or a display unit, is used to display the information processed in the electronic device 1 and to display a visualized user interface.
It is to be understood that the described embodiments are for purposes of illustration only and that the scope of the appended claims is not limited to such structures.
The user intention analysis program 12 stored in the memory 11 of the electronic device 1 is a combination of a plurality of instructions that, when executed by the processor 10, can implement:
acquiring an input text of a user, and converting the input text into a semantic vector;
performing intention prediction on the semantic vector to obtain a predicted intention label of the input text;
performing feature extraction on the predicted intention label by using a feature extraction network, and generating a plurality of target intentions according to the extracted features;
calculating the priority of each target intention in the plurality of target intentions;
and selecting a preset number of target intentions from the plurality of target intentions as user intentions according to the priority.
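The following is a minimal, purely illustrative Python sketch of the five instructions above; the class names, method names, and the priority scoring scheme are assumptions introduced only for illustration and are not part of the disclosed implementation.

```python
# Illustrative sketch of the five instructions above. The vectorizer, intent_predictor,
# and feature_extractor objects and all of their method names are hypothetical placeholders.
from typing import List, Tuple

def analyze_user_intention(input_text: str,
                           vectorizer,
                           intent_predictor,
                           feature_extractor,
                           preset_number: int = 3) -> List[str]:
    # 1. Convert the input text of the user into a semantic vector.
    semantic_vector = vectorizer.encode(input_text)
    # 2. Perform intention prediction on the semantic vector to obtain a predicted intention label.
    predicted_label = intent_predictor.predict(semantic_vector)
    # 3. Extract features from the predicted intention label and generate target intentions.
    features = feature_extractor.extract(predicted_label)
    target_intentions = feature_extractor.generate_intentions(features)
    # 4. Calculate a priority for each target intention (the scoring scheme is not specified in the text).
    scored: List[Tuple[str, float]] = [
        (intention, feature_extractor.priority(intention)) for intention in target_intentions
    ]
    # 5. Select a preset number of target intentions with the highest priority as the user intentions.
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [intention for intention, _ in scored[:preset_number]]
```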
Specifically, for the specific implementation of the above instructions by the processor 10, reference may be made to the description of the relevant steps in the embodiment corresponding to fig. 1, which is not repeated here.
Further, if the integrated modules/units of the electronic device 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. The computer-readable storage medium may be volatile or non-volatile. For example, the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a read-only memory (ROM).
The present invention also provides a computer-readable storage medium, storing a computer program which, when executed by a processor of an electronic device, may implement:
acquiring an input text of a user, and converting the input text into a semantic vector;
performing intention prediction on the semantic vector to obtain a predicted intention label of the input text;
performing feature extraction on the predicted intention label by using a feature extraction network, and generating a plurality of target intentions according to the extracted features;
calculating the priority of each target intention in the plurality of target intentions;
and selecting a preset number of target intentions from the plurality of target intentions as user intentions according to the priority.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus, device and method may be implemented in other ways. For example, the device embodiments described above are merely illustrative; for example, the division of the modules is only one kind of logical functional division, and other division manners may be adopted in actual implementation.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof.
The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned.
The blockchain is a novel application mode of computer technologies such as distributed data storage, peer-to-peer transmission, consensus mechanisms, and encryption algorithms. A blockchain (Blockchain) is essentially a decentralized database: a series of data blocks associated with one another by a cryptographic method, where each data block contains the information of a batch of network transactions and is used to verify the validity (anti-counterfeiting) of that information and to generate the next block. The blockchain may include a blockchain underlying platform, a platform product service layer, an application service layer, and the like.
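As a simple illustration of how data blocks are associated by a cryptographic method, the following sketch chains blocks through hashes; it is a toy example under stated assumptions and omits the consensus mechanism, signatures, and networking of a real blockchain platform.

```python
# Toy hash chain: each block stores the hash of the previous block, so the chain can be
# verified and tampering with any earlier block breaks all later links.
import hashlib
import json
from typing import List

def make_block(transactions: List[dict], previous_hash: str) -> dict:
    body = {"transactions": transactions, "previous_hash": previous_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

genesis = make_block([{"record": "first batch of network transactions"}], previous_hash="0" * 64)
block_2 = make_block([{"record": "second batch of network transactions"}], previous_hash=genesis["hash"])
assert block_2["previous_hash"] == genesis["hash"]  # the cryptographic association between blocks
```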
Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or devices recited in the system claims may also be implemented by one unit or device through software or hardware. Terms such as first and second are used to denote names only and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (10)

1. A method of analyzing user intent, the method comprising:
acquiring an input text of a user, and converting the input text into a semantic vector;
performing intention prediction on the semantic vector to obtain a predicted intention label of the input text;
performing feature extraction on the predicted intention label by using a feature extraction network, and generating a plurality of target intentions according to the extracted features;
calculating the priority of each target intention in the plurality of target intentions;
and selecting a preset number of target intentions from the plurality of target intentions as user intentions according to the priority.
2. The method of user intent analysis according to claim 1, wherein said converting the input text into semantic vectors comprises:
constructing a text vectorization model;
acquiring a historical text, and carrying out preset entity marking on the historical text to obtain a training text;
performing iterative training on the text vectorization model by using the training text until the text vectorization model converges to obtain a trained text vectorization model;
and converting the input text by using the trained text vectorization model to obtain a semantic vector of the input text.
3. The method for analyzing user intention according to claim 2, wherein the performing preset entity labeling on the historical text to obtain a training text comprises:
constructing a label set comprising a non-preset entity character label, a preset entity starting character label and a preset entity middle character label according to a preset entity;
and marking each character in the historical text by using the label in the label set to obtain a training text.
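For readability, a minimal Python sketch of the character-level labeling scheme of claim 3 follows; the entity lexicon, the entity type name, and the function name are assumptions made only for illustration.

```python
# Character-level labeling in the spirit of claim 3: "O" marks non-preset-entity characters,
# "B-<type>" marks the starting character of a preset entity, and "I-<type>" marks its middle
# (and trailing) characters. The entity lexicon passed in is a hypothetical example.
from typing import Dict, List, Tuple

def label_characters(historical_text: str, preset_entities: Dict[str, str]) -> List[Tuple[str, str]]:
    """Return (character, label) pairs forming one training text."""
    labels = ["O"] * len(historical_text)
    for entity, entity_type in preset_entities.items():
        start = historical_text.find(entity)
        while start != -1:
            labels[start] = f"B-{entity_type}"                 # preset entity starting character label
            for i in range(start + 1, start + len(entity)):
                labels[i] = f"I-{entity_type}"                 # preset entity middle character label
            start = historical_text.find(entity, start + len(entity))
    return list(zip(historical_text, labels))

# Hypothetical usage:
# label_characters("I want to renew my car insurance", {"car insurance": "PRODUCT"})
```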
4. The method of analyzing user intent according to claim 2, wherein the iteratively training the text-vectorization model using the training text until the text-vectorization model converges comprises:
inputting the training text into the text vectorization model for vector conversion to obtain a predicted text vector;
acquiring a standard text vector corresponding to the training text;
calculating a loss value between the predicted text vector and the standard text vector, and determining that the text vectorization model converges when the loss value is smaller than a preset loss threshold value.
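A minimal PyTorch-style sketch of the convergence criterion in claim 4 follows; the model architecture, the Adam optimizer, and the mean-squared-error loss are assumptions, since the claim specifies only that training stops once the loss falls below a preset threshold.

```python
# Iterative training until the loss between the predicted text vector and the standard text
# vector is smaller than a preset loss threshold. Optimizer and loss function are assumed.
import torch
import torch.nn as nn

def train_until_converged(model: nn.Module,
                          training_texts: torch.Tensor,
                          standard_vectors: torch.Tensor,
                          loss_threshold: float = 1e-3,
                          max_iterations: int = 10000) -> nn.Module:
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.MSELoss()
    for _ in range(max_iterations):
        optimizer.zero_grad()
        predicted_vectors = model(training_texts)               # vector conversion of the training text
        loss = criterion(predicted_vectors, standard_vectors)   # loss against the standard text vector
        if loss.item() < loss_threshold:                        # convergence condition of claim 4
            break
        loss.backward()
        optimizer.step()
    return model
```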
5. The method of analyzing user intention according to claim 1, wherein the performing feature extraction on the predicted intention label by using a feature extraction network comprises:
marking a data representation of the predicted intention label by a visible layer of the feature extraction network;
and performing, by a hidden layer of the feature extraction network, feature extraction on the data representation marked by the visible layer by using a machine learning algorithm.
6. The method of analyzing user intention according to claim 5, wherein the performing, by the hidden layer of the feature extraction network, feature extraction on the data representation marked by the visible layer by using a machine learning algorithm comprises:
performing feature extraction on the data representation marked by the visible layer by using the following machine learning algorithm:
h = wY + b
wherein h is the data feature obtained by performing feature extraction on the data representation marked by the visible layer, Y is the data representation, w is the weight matrix between the visible layer and the hidden layer, and b is the bias vector of the hidden layer.
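A minimal numerical sketch of the formula of claim 6 follows; the array shapes and random values are assumptions, and no activation function is applied because the claim does not specify one.

```python
# Hidden-layer feature extraction h = wY + b, where Y is the data representation marked by the
# visible layer, w is the weight matrix between the visible layer and the hidden layer, and b
# is the bias vector of the hidden layer. Shapes below are hypothetical.
import numpy as np

def hidden_layer_features(Y: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Map a visible-layer data representation Y to hidden-layer data features h."""
    return w @ Y + b

Y = np.random.rand(16)       # data representation marked by the visible layer
w = np.random.rand(8, 16)    # weight matrix between the visible layer and the hidden layer
b = np.zeros(8)              # bias vector of the hidden layer
h = hidden_layer_features(Y, w, b)
```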
7. The method for analyzing user intention according to any one of claims 1 to 6, wherein the performing intention prediction on the semantic vector to obtain a predicted intention tag of the input text comprises:
constructing an intention prediction network comprising a plurality of downsampling layers;
utilizing a forward downsampling layer in the intention prediction network to downsample the semantic vector to obtain forward semantic features;
utilizing a backward downsampling layer in the intention prediction network to downsample the forward semantic features to obtain backward semantic features;
performing feature fusion on the obtained forward semantic features and backward semantic features to obtain fusion semantic features;
and using the fused semantic features as predicted intention labels of the input text.
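A minimal PyTorch-style sketch of claim 7 follows; the use of one-dimensional convolutions for downsampling and of concatenation for feature fusion are assumptions made only to keep the example concrete, since the claim fixes neither choice.

```python
# Forward downsampling of the semantic vector, backward downsampling of the forward features,
# and fusion of the two feature sets into a predicted intention label representation.
import torch
import torch.nn as nn

class IntentionPredictionNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.forward_downsample = nn.Conv1d(1, 1, kernel_size=2, stride=2)   # forward downsampling layer
        self.backward_downsample = nn.Conv1d(1, 1, kernel_size=2, stride=2)  # backward downsampling layer

    def forward(self, semantic_vector: torch.Tensor) -> torch.Tensor:
        x = semantic_vector.unsqueeze(0).unsqueeze(0)                   # shape (1, 1, length)
        forward_features = self.forward_downsample(x)                   # forward semantic features
        backward_features = self.backward_downsample(forward_features)  # backward semantic features
        # Feature fusion by concatenation (the fusion operation is not specified in the claim).
        fused_features = torch.cat([forward_features.flatten(), backward_features.flatten()])
        return fused_features                                           # used as the predicted intention label

# Hypothetical usage: IntentionPredictionNetwork()(torch.randn(128))
```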
8. A user intention analysis apparatus, characterized in that the apparatus comprises:
the vector conversion module is used for acquiring an input text of a user and converting the input text into a semantic vector;
the intention prediction module is used for carrying out intention prediction on the semantic vector to obtain a predicted intention label of the input text;
the feature extraction module is used for performing feature extraction on the predicted intention label by using a feature extraction network and generating a plurality of target intentions according to the extracted features;
the priority calculation module is used for calculating the priority of each target intention in the plurality of target intentions;
and the intention screening module is used for selecting a preset number of target intentions as user intentions according to the priority.
9. An electronic device, characterized in that the electronic device comprises:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the user intention analysis method according to any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the user intention analysis method according to any one of claims 1 to 7.
CN202011302192.3A 2020-11-19 2020-11-19 User intention analysis method and device, electronic equipment and computer storage medium Pending CN112380870A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011302192.3A CN112380870A (en) 2020-11-19 2020-11-19 User intention analysis method and device, electronic equipment and computer storage medium
PCT/CN2021/082893 WO2021208696A1 (en) 2020-11-19 2021-03-25 User intention analysis method, apparatus, electronic device, and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011302192.3A CN112380870A (en) 2020-11-19 2020-11-19 User intention analysis method and device, electronic equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN112380870A true CN112380870A (en) 2021-02-19

Family

ID=74584375

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011302192.3A Pending CN112380870A (en) 2020-11-19 2020-11-19 User intention analysis method and device, electronic equipment and computer storage medium

Country Status (2)

Country Link
CN (1) CN112380870A (en)
WO (1) WO2021208696A1 (en)


Families Citing this family (4)

Publication number Priority date Publication date Assignee Title
CN114548925B (en) * 2022-02-21 2024-04-30 中国平安人寿保险股份有限公司 Online activity invitation method, device, equipment and storage medium
CN114722281B (en) * 2022-04-07 2024-04-12 平安科技(深圳)有限公司 Training course configuration method and device based on user portrait and user course selection behavior
CN115757900B (en) * 2022-12-20 2023-08-01 创贸科技(深圳)集团有限公司 User demand analysis method and system applying artificial intelligent model
CN116189193B (en) * 2023-04-25 2023-11-10 杭州镭湖科技有限公司 Data storage visualization method and device based on sample information

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
TWI666558B (en) * 2018-11-20 2019-07-21 財團法人資訊工業策進會 Semantic analysis method, semantic analysis system, and non-transitory computer-readable medium
CN109858030B (en) * 2019-02-11 2020-11-06 北京邮电大学 Two-way intent slot value cross-correlation task-based dialog understanding system and method
CN109992671A (en) * 2019-04-10 2019-07-09 出门问问信息科技有限公司 Intension recognizing method, device, equipment and storage medium
CN110928997A (en) * 2019-12-04 2020-03-27 北京文思海辉金信软件有限公司 Intention recognition method and device, electronic equipment and readable storage medium
CN111860661B (en) * 2020-07-24 2024-04-30 中国平安财产保险股份有限公司 Data analysis method and device based on user behaviors, electronic equipment and medium
CN112380870A (en) * 2020-11-19 2021-02-19 平安科技(深圳)有限公司 User intention analysis method and device, electronic equipment and computer storage medium

Cited By (7)

Publication number Priority date Publication date Assignee Title
WO2021208696A1 (en) * 2020-11-19 2021-10-21 平安科技(深圳)有限公司 User intention analysis method, apparatus, electronic device, and computer storage medium
CN114281959A (en) * 2021-10-27 2022-04-05 腾讯科技(深圳)有限公司 Statement processing method, statement processing device, statement processing equipment, statement processing medium and computer program product
CN114281959B (en) * 2021-10-27 2024-03-19 腾讯科技(深圳)有限公司 Statement processing method, device, equipment, medium and computer program product
CN114254622A (en) * 2021-12-10 2022-03-29 马上消费金融股份有限公司 Intention identification method and device
CN114398903A (en) * 2022-01-21 2022-04-26 平安科技(深圳)有限公司 Intention recognition method and device, electronic equipment and storage medium
CN114398903B (en) * 2022-01-21 2023-06-20 平安科技(深圳)有限公司 Intention recognition method, device, electronic equipment and storage medium
CN115309983A (en) * 2022-07-21 2022-11-08 国家康复辅具研究中心 Assistive device adapting method, system and storage medium

Also Published As

Publication number Publication date
WO2021208696A1 (en) 2021-10-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination