CN109858022A - User intent recognition method, apparatus, computer device and storage medium - Google Patents

User intent recognition method, apparatus, computer device and storage medium Download PDF

Info

Publication number
CN109858022A
Authority
CN
China
Prior art keywords
vector
sample
user
recognition
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201910008318.7A
Other languages
Chinese (zh)
Inventor
王健宗
程宁
肖京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201910008318.7A
Publication of CN109858022A
Legal status: Withdrawn

Landscapes

  • Machine Translation (AREA)

Abstract

The invention discloses a user intent recognition method, apparatus, computer device and storage medium, applied in the field of deep learning, for solving the problem of low accuracy in user intent recognition. The method includes: obtaining a target text whose intent is to be identified; vectorizing the target text to obtain a target vector; feeding the target vector as input into a pre-trained recurrent neural network and obtaining the target result vector output by the recurrent neural network, where each element of the target result vector is a first probability value corresponding to one preset user intent, the first probability value characterizing the probability that the target text belongs to the corresponding preset user intent; and determining the preset user intent with the highest first probability value as the target user intent corresponding to the target text.

Description

User intent recognition method, apparatus, computer device and storage medium
Technical field
The present invention relates to the field of deep learning technology, and in particular to a user intent recognition method, apparatus, computer device and storage medium.
Background technique
In the marketplace, accurately grasping a user's intent from his or her speech is very helpful for closing transactions. For example, in telephone sales scenarios, recognizing the intent behind a user's utterances is critical to whether a product can be sold successfully. Speech is the outward expression of what the user is thinking, and it reveals the user's real feelings and inner needs. If the user's intent can be captured correctly from what the user says, the success rate of sales can be improved, enterprise revenue and brand recognition can be increased, and the user's experience is not harmed.
At present, most enterprises hire customer service staff to communicate with users, relying on the staff's experience and knowledge to judge the user's true intent and close the deal. However, there are gaps in experience and knowledge between different staff members, compounded by human subjective factors, so the user's true intent is easily misjudged, leading to low intent recognition accuracy.
Summary of the invention
The embodiments of the present invention provide a user intent recognition method, apparatus, computer device and storage medium, to solve the problem of low user intent recognition accuracy.
A user intent recognition method, comprising:
obtaining a target text whose intent is to be identified;
vectorizing the target text to obtain a target vector;
feeding the target vector as input into a pre-trained recurrent neural network, and obtaining the target result vector output by the recurrent neural network, where each element of the target result vector is a first probability value corresponding to one preset user intent, the first probability value characterizing the probability that the target text belongs to the corresponding preset user intent;
determining the preset user intent with the highest first probability value as the target user intent corresponding to the target text.
A user intent recognition apparatus, comprising:
a target text acquisition module, for obtaining a target text whose intent is to be identified;
a text vectorization module, for vectorizing the target text to obtain a target vector;
a vector input module, for feeding the target vector as input into a pre-trained recurrent neural network and obtaining the target result vector output by the recurrent neural network, where each element of the target result vector is a first probability value corresponding to one preset user intent, the first probability value characterizing the probability that the target text belongs to the corresponding preset user intent;
an intent determination module, for determining the preset user intent with the highest first probability value as the target user intent corresponding to the target text.
A computer device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein the processor implements the steps of the above user intent recognition method when executing the computer program.
A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the above user intent recognition method.
With the above user intent recognition method, apparatus, computer device and storage medium, first a target text whose intent is to be identified is obtained; then the target text is vectorized to obtain a target vector; the target vector is then fed as input into a pre-trained recurrent neural network, and the target result vector output by the network is obtained, where each element of the target result vector is a first probability value corresponding to one preset user intent, characterizing the probability that the target text belongs to that intent; finally, the preset user intent with the highest first probability value is determined as the target user intent corresponding to the target text. Thus, the present invention can accurately identify the user's true intent from the target text through a pre-trained recurrent neural network. This not only avoids the recognition errors caused by gaps in experience and knowledge, but also eliminates the influence of human subjective factors, improving the accuracy of intent recognition and helping enterprises grasp the user's true intent and close transactions.
Detailed description of the invention
In order to explain the technical solutions of the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative labor.
Fig. 1 is a schematic diagram of an application environment of the user intent recognition method in one embodiment of the invention;
Fig. 2 is a flow chart of the user intent recognition method in one embodiment of the invention;
Fig. 3 is a flow diagram of step 102 of the user intent recognition method under one application scenario in one embodiment of the invention;
Fig. 4 is a flow diagram of preprocessing the target text in the user intent recognition method under one application scenario in one embodiment of the invention;
Fig. 5 is a flow diagram of training the recurrent neural network in the user intent recognition method under one application scenario in one embodiment of the invention;
Fig. 6 is a structural diagram of the user intent recognition apparatus under one application scenario in one embodiment of the invention;
Fig. 7 is a structural diagram of the user intent recognition apparatus under another application scenario in one embodiment of the invention;
Fig. 8 is a structural diagram of the sample input module under one application scenario in one embodiment of the invention;
Fig. 9 is a schematic diagram of a computer device in one embodiment of the invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative labor shall fall within the protection scope of the present invention.
The user intent recognition method provided by the present application can be applied in the application environment of Fig. 1, where a client communicates with a server over a network. The client may be, but is not limited to, a personal computer, laptop, smartphone, tablet or portable wearable device. The server may be implemented as an independent server or as a server cluster composed of multiple servers.
In one embodiment, as shown in Fig. 2, a user intent recognition method is provided. Taking its application to the server in Fig. 1 as an example, the method includes the following steps:
101. Obtain a target text whose intent is to be identified;
In this embodiment, the server can obtain the target text whose intent is to be identified according to actual needs or the application scenario. For example, the server may communicate with a client that offers a consultation service to users at some location: the user speaks a question into the client's microphone, the client uploads the speech to the server, and the server converts the speech to text; that text is the target text whose intent is to be identified. Alternatively, the server may perform batch intent recognition over script texts: a database collects a large number of script texts in advance and transfers them to the server over the network, and the server performs intent recognition on each of them, so each script text is one target text. It should be understood that the server can obtain target texts in many other ways, which are not enumerated here; any text whose intent the server needs to identify can serve as a target text.
It should be noted that the text described in this embodiment generally refers to script text, i.e. text obtained by converting what a person said into written form through speech-to-text processing.
102. Vectorize the target text to obtain a target vector;
After the target text is obtained, to facilitate recognition and learning by the subsequent recurrent neural network, the server needs to vectorize the target text, i.e. convert the text into a vector representation, obtaining the target vector. Specifically, the server can record the target text in the form of a data matrix, with each word in the target text mapped to one row vector of the data matrix.
For ease of understanding, under a concrete application scenario, as shown in Fig. 3, step 102 may specifically include:
201. Convert each word in the target text into a one-dimensional row vector using a preset dictionary, the dictionary recording the correspondence between words and one-dimensional row vectors;
202. Compose the one-dimensional row vectors, in the order of the words in the target text, into a two-dimensional vector as the target vector.
Regarding step 201, the server is provided in advance with a dictionary that records a one-to-one correspondence between words and one-dimensional row vectors. For example, "I" may correspond to "row vector no. 1", "and" to "row vector no. 2", "you" to "row vector no. 3", and so on; the dictionary is improved by exhausting as many words as possible. When the target text needs to be converted, the server can then use this preset dictionary to convert each word in the target text into a one-dimensional row vector. For example, suppose the target text is "I go to have a meal with you". Querying the dictionary yields: "I" corresponds to row vector no. 1, "and" to row vector no. 2, "you" to row vector no. 3, "go" to row vector no. 4 and "have a meal" to row vector no. 5, so row vectors 1-5 are obtained. Note that "row vectors 1-5" only refers to the row vectors numbered 1 through 5; each actual row vector is a one-dimensional matrix containing multiple elements. For example, the one-dimensional row vector [7, 51, 423, 50, 0] could be defined as row vector no. k when the dictionary is set up, with k being greater than or equal to 1.
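As a minimal sketch of the dictionary lookup in step 201 (the word entries and vector values below are illustrative placeholders, not the patent's actual dictionary):

```python
# Hypothetical dictionary for step 201: each word corresponds to one
# one-dimensional row vector; a real dictionary would be far larger.
DICTIONARY = {
    "I":    [7, 51, 423, 50, 0],   # "row vector no. 1"
    "and":  [3, 18, 97, 2, 14],    # "row vector no. 2"
    "you":  [9, 4, 305, 61, 8],    # "row vector no. 3"
    "go":   [12, 77, 6, 40, 21],   # "row vector no. 4"
    "eat":  [5, 33, 210, 19, 7],   # "row vector no. 5"
}

def words_to_row_vectors(words, dictionary):
    """Convert each word of the target text into its one-dimensional row vector."""
    return [dictionary[word] for word in words]

rows = words_to_row_vectors(["I", "and", "you", "go", "eat"], DICTIONARY)
```

Stacking `rows` in sentence order then yields the two-dimensional target vector of step 202.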
Preferably, the dictionary can be built automatically while it is being used. Specifically: when a text needs to be converted into one-dimensional row vectors, the server obtains the words of the text one by one and queries the dictionary for a recorded correspondence between the word and some one-dimensional row vector. If one exists, the server obtains the row vector corresponding to the word; if not, the server adds the word to the dictionary, assigns an unallocated one-dimensional row vector to correspond to it, and then obtains that row vector. Once the server has obtained the corresponding one-dimensional row vector for every word in the text, it can execute step 202 below to build the two-dimensional vector; at the same time, words in the text that were previously unrecorded have been added to the dictionary, gradually completing it.
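The on-the-fly dictionary build described above can be sketched as follows (the pre-recorded entry, the pool of unallocated vectors and their values are assumptions for illustration):

```python
def lookup_or_assign(word, dictionary, unallocated):
    """If the dictionary records a row vector for the word, return it;
    otherwise assign the next unallocated row vector to the word,
    record the new correspondence, and return it."""
    if word not in dictionary:
        dictionary[word] = unallocated.pop(0)
    return dictionary[word]

dictionary = {"I": [1, 0, 0]}            # one pre-recorded correspondence
unallocated = [[0, 1, 0], [0, 0, 1]]     # vectors not yet tied to any word
v_known = lookup_or_assign("I", dictionary, unallocated)      # found directly
v_new = lookup_or_assign("today", dictionary, unallocated)    # newly assigned
```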
It should be noted that the unallocated one-dimensional row vectors can be set manually by staff when the dictionary is configured, or existing word vectors can be obtained from third-party platforms that publish them; for example, word vectors published by platforms such as Sina or Zhihu can be loaded as the one-dimensional row vectors needed for the dictionary of this embodiment.
Regarding step 202, the five one-dimensional row vectors X1, X2, X3, X4 and X5 are composed in order into a two-dimensional matrix, i.e. a two-dimensional vector, which is the target vector. Here X1-X5 represent row vectors 1-5 above. For example, suppose X1 is [1, 2, 3, 4, 5], X2 is [1, 2, 3, 4, 5], X3 is [5, 4, 3, 2, 1], X4 is [1, 2, 3, 4, 5] and X5 is [1, 2, 3, 4, 5]; their combination gives a two-dimensional vector that can be expressed as the 5x5 matrix whose rows are X1 through X5 in order.
Considering the diversity of users, the target text in step 102 may be badly formatted or contain a lot of interfering information. Therefore, this embodiment can also preprocess the target text before converting it into the target vector, so that its format and content are convenient for vector conversion and for recognition and analysis by the subsequent recurrent neural network. As shown in Fig. 4, further, before step 102 the method also includes:
301. Delete specified text from the target text, the specified text including at least stop words or punctuation marks;
302. Perform word segmentation on the target text after the specified text has been deleted, obtaining the individual words of the target text.
Regarding step 301, the stop words mentioned here are Chinese characters used with very high frequency but carrying no substantive linguistic meaning; in addition, the specified text can also include punctuation marks such as commas and periods, which likewise carry no substantive linguistic meaning. When executing step 301, the server deletes the specified text from the target text. For example, suppose the specified text includes stop words and punctuation marks, and the target text contains the sentence "I came to work today."; the server first deletes any stop-word characters without substantive meaning, and removes punctuation such as the final period, obtaining the cleaned text "I came to work today".
Regarding step 302, after deleting the specified text, the server also performs word segmentation on the target text. Continuing the example "I came to work today", the server can use a third-party segmentation tool to split the sentence, converting it into four words ("I", "today", "come", "work" in this example).
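Steps 301 and 302 can be sketched together; the English stop words and whitespace splitting below merely stand in for the Chinese function characters and the third-party segmentation tool the embodiment would actually use:

```python
STOP_WORDS = {"the", "a"}   # placeholder stop words; the patent's are Chinese
PUNCTUATION = {",", "."}    # specified punctuation to delete

def preprocess(text):
    """Delete stop words and punctuation (step 301), then split into
    words (step 302, whitespace splitting standing in for a segmenter)."""
    kept = []
    for raw in text.split():
        word = "".join(ch for ch in raw if ch not in PUNCTUATION)
        if word and word.lower() not in STOP_WORDS:
            kept.append(word)
    return kept

tokens = preprocess("I come to the office today.")
```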
103. Feed the target vector as input into a pre-trained recurrent neural network, and obtain the target result vector output by the recurrent neural network, each element of the target result vector being a first probability value corresponding to one preset user intent, the first probability value characterizing the probability that the target text belongs to the corresponding preset user intent;
After obtaining the target vector corresponding to the target text, the server can feed it as input into the pre-trained recurrent neural network and obtain the target result vector output by the network. Each element of the target result vector is a first probability value corresponding to one preset user intent, characterizing the probability that the target text belongs to that intent. It should be understood that the target result vector generally contains multiple elements; each element is a first probability value, these first probability values correspond one-to-one with the multiple preset user intents, and each characterizes the probability that the target text belongs to the corresponding preset user intent. Clearly, the larger the first probability value corresponding to a preset user intent, the higher the probability that the target text belongs to that intent.
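Given such a target result vector, determining the target user intent reduces to taking the intent with the largest first probability value. A sketch, using intent labels translated from the examples later in this description:

```python
def pick_intent(result_vector, intents):
    """Choose the preset user intent whose first probability value is
    highest in the network's target result vector."""
    best = max(range(len(result_vector)), key=lambda i: result_vector[i])
    return intents[best]

# Illustrative preset intents and an illustrative result vector.
intents = ["willing to listen", "refuses to buy", "willing to wait"]
chosen = pick_intent([0.3, 0.2, 0.5], intents)
```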
For ease of understanding, the training process of the recurrent neural network is described in detail below. As shown in Fig. 5, further, the recurrent neural network is trained in advance through the following steps:
401. For each preset user intent, collect script texts belonging to that intent;
402. Vectorize each collected script text, obtaining the sample vector corresponding to each script text;
403. For each preset user intent, record the mark value of the sample vectors belonging to that intent as 1 and the mark value of all other sample vectors as 0;
404. For each preset user intent, feed all sample vectors as input into the recurrent neural network for training and obtain sample result vectors, each element of a sample result vector characterizing the probability that the script text corresponding to that sample vector belongs to each preset user intent;
405. For each preset user intent, taking the output sample result vectors as the adjustment target, adjust the parameters of the recurrent neural network to minimize the error between each obtained sample result vector and the mark value corresponding to its sample vector;
406. If the error between each sample result vector and the mark value corresponding to its sample vector satisfies a preset training termination condition, determine that the recurrent neural network has finished training.
Regarding step 401, in this embodiment, for a practical application scenario, staff can set in advance on the server the preset user intents that need training, for example intents such as "willing to listen", "refuses to buy" and "willing to wait". For these preset user intents, staff also need to collect corresponding script texts from the concrete application scenario, for example by converting the questions users actually ask into script texts. When collecting script texts, the server can gather script texts belonging to each preset user intent through channels such as specialized knowledge bases and network databases. It should be noted that the script texts corresponding to each preset user intent should reach a certain order of magnitude; the quantities of script texts for different intents may differ somewhat, but should not differ so much as to harm the training of the recurrent neural network. For example, the collected script texts might be: 1,000,000 texts corresponding to "willing to listen", 200,000 corresponding to "refuses to buy" and 300,000 corresponding to "willing to wait".
Regarding step 402, it should be understood that before the script texts are fed into the recurrent neural network for training, the collected script texts need to be vectorized, obtaining the sample vector corresponding to each script text; converting text into vectors makes it easier for the recurrent neural network to understand and train on. Note that since the collected script texts come from many sources, their formats are often inconsistent, which easily interferes with subsequent training. Therefore, the server can preprocess these script texts before vectorizing them, including deleting stop words and punctuation marks and segmenting the words. For example, suppose a script text is "I came to work today.": the server first deletes the stop-word characters without substantive meaning and strips the period, then uses a third-party segmentation tool to split the sentence into four words. After preprocessing, the server maps each word in the script text to a row vector; the multiple row vectors obtained from the words of the script text form the sample vector (a two-dimensional vector) corresponding to that script text. Specifically, the sample vector can be recorded in the form of a data matrix.
Regarding step 403, it should be understood that the sample vectors need to be marked before training. Since this embodiment trains for multiple preset user intents, the marking must be done separately for each preset user intent. For example, suppose there are 3 preset user intents: "willing to listen", "refuses to buy" and "willing to wait". Then, for "willing to listen", the mark value of every sample vector under "willing to listen" is recorded as 1 and the mark value of every sample vector under "refuses to buy" and "willing to wait" is recorded as 0, for the subsequent training of the recurrent neural network with respect to "willing to listen". Similarly, for "refuses to buy", the mark value of every sample vector under "refuses to buy" is recorded as 1 and the mark values of the sample vectors under "willing to listen" and "willing to wait" as 0, for the subsequent training with respect to "refuses to buy". "Willing to wait" and any other preset user intents are handled in the same way, which is not repeated here.
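The one-vs-rest marking of step 403 can be sketched as follows (intent labels are the translated examples above):

```python
def mark_values(sample_intents, target_intent):
    """Step 403 for one preset user intent: samples belonging to the
    target intent are marked 1, all other samples 0."""
    return [1 if intent == target_intent else 0 for intent in sample_intents]

# The intent each collected script text belongs to (illustrative).
samples = ["willing to listen", "refuses to buy",
           "willing to wait", "willing to listen"]
labels_listen = mark_values(samples, "willing to listen")
labels_buy = mark_values(samples, "refuses to buy")
```

One label vector is produced per preset intent, matching the per-intent training passes described in steps 404-405.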
Regarding step 404, during training, for each preset user intent, all sample vectors are fed as input into the recurrent neural network and the sample result vectors are obtained. It should be understood that for a given preset user intent, the mark value of every sample vector under that intent is 1 and all other mark values are 0. After a sample vector is input into the recurrent neural network, the network outputs a sample result vector consisting of N elements, these N elements respectively characterizing the probabilities that the script text corresponding to the sample vector belongs to each of the N preset user intents.
Further, step 404 can specifically include:
11. When each sample vector is input into the recurrent neural network for training, feed each sample vector into a bidirectional GRU (Gated Recurrent Unit) network, obtaining the output sequence of the bidirectional GRU network;
12. Extract features from the output sequence using a convolution window of preset size, obtaining a convolution result;
13. Perform an average pooling operation and a maximum pooling operation on the convolution result, obtaining pooling results;
14. Aggregate all pooling results and input them into the fully connected layer of the recurrent neural network, obtaining the sample result vector.
Regarding step 11, the bidirectional GRU network is a variant of the LSTM (Long Short-Term Memory) network, which was itself proposed to overcome the inability of the traditional RNN (Recurrent Neural Network) to handle long-range dependencies; the bidirectional GRU network preserves the effectiveness of the LSTM while making the structure simpler. A GRU has two gates: an update gate and a reset gate. The update gate controls the degree to which the state information of the previous moment is brought into the current state; the larger the value of the update gate, the more information from the previous moment is brought in. The reset gate controls how much of the previous moment's state information is ignored; the smaller the value of the reset gate, the more information is ignored. The calculation formulas of the bidirectional GRU network are as follows:
r_t = σ(W_r x_t + U_r h_(t-1))
z_t = σ(W_z x_t + U_z h_(t-1))
In the above formulas, W_r, U_r, W_z and U_z are network parameters of the bidirectional GRU network, t denotes moment t, x_t is the word vector input at moment t, h_(t-1) is the vector output obtained after the word vector of the previous moment was input, and h_t is the vector output obtained after x_t is input. It should be noted that the vectors are processed unfolded in time, i.e. each moment corresponds to one word vector, and a sample vector can consist of multiple word vectors in order.
The above h_t can be expressed as:
H = tanh(W x_t + r_t ⊙ (U h_(t-1)) + b)
h_t = (1 − z_t) ⊙ H + z_t ⊙ h_(t-1)
where ⊙ denotes element-wise multiplication.
Here W, U and b are network parameters of the bidirectional GRU network.
It can be seen that after the server feeds the sample vector into the bidirectional GRU network, the h_t output at each moment forms the output sequence of the bidirectional GRU network. Assuming a sample vector contains N word vectors, the output sequence is composed of the h_t of N moments.
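A toy sketch of the formulas above, with scalar states and weights standing in for the real matrices (parameter values are arbitrary; a real bidirectional layer would also run a second pass over the reversed sequence):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x_t, h_prev, p):
    """One step of the GRU recurrence above, scalar version."""
    r = sigmoid(p["Wr"] * x_t + p["Ur"] * h_prev)                # reset gate r_t
    z = sigmoid(p["Wz"] * x_t + p["Uz"] * h_prev)                # update gate z_t
    H = math.tanh(p["W"] * x_t + r * (p["U"] * h_prev) + p["b"]) # candidate H
    return (1 - z) * H + z * h_prev                              # new state h_t

params = {"Wr": 0.5, "Ur": 0.1, "Wz": 0.4, "Uz": 0.2,
          "W": 0.8, "U": 0.3, "b": 0.0}
h = 0.0
outputs = []                      # the output sequence, one h_t per moment
for x in [1.0, -0.5, 0.25]:       # a toy word-vector sequence
    h = gru_step(x, h, params)
    outputs.append(h)
```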
Regarding step 12, in particular, for the output sequence of the bidirectional GRU network, the server can use a 5x5 convolution window to perform the convolution calculation on the output sequence; the calculated result is the convolution result.
Regarding step 13, after the convolution result is obtained, an average pooling operation and a maximum pooling operation can be performed on it. Average pooling traverses the convolution result with a preset pooling window and computes the average of all elements in the window, taking the computed value as the average pooling result; maximum pooling traverses with the same preset pooling window and takes the maximum of all elements in the window as the maximum pooling result. Together these are the pooling results described above.
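A sketch of the two pooling operations with a non-overlapping window of size 2 (the window size and the convolution-result values are illustrative):

```python
def avg_pool(seq, window):
    """Slide a pooling window over the convolution result, taking the mean."""
    return [sum(seq[i:i + window]) / window
            for i in range(0, len(seq) - window + 1, window)]

def max_pool(seq, window):
    """Slide the same pooling window, taking the maximum."""
    return [max(seq[i:i + window])
            for i in range(0, len(seq) - window + 1, window)]

conv = [1.0, 3.0, 2.0, 6.0, 4.0, 2.0]   # a toy convolution result
averaged = avg_pool(conv, 2)            # [2.0, 4.0, 3.0]
maxed = max_pool(conv, 2)               # [3.0, 6.0, 4.0]
```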
Regarding step 14, it should be understood that the N values of h_t obtained in step 11, after passing through steps 12 and 13, yield N corresponding pooling results. Step 14 aggregates these N pooling results, i.e. splices all pooling results together along a preset dimension to obtain the aggregated vector, and then feeds the aggregated vector into the fully connected layer; the sequence output by the fully connected layer is the sample result vector described above. It should be noted that the purpose of the fully connected layer in this embodiment is to map the features learned by the network into the label space of the samples; it converts the aggregated vector into a one-dimensional vector. The specific calculation is the same as the fully connected layer of existing neural networks and is not expanded on here.
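Step 14's aggregation and fully connected layer can be sketched as follows (the pooled values and weights are illustrative; a real layer's weights are learned during training):

```python
def fully_connected(features, weights, bias):
    """A single fully connected layer: map the aggregated features to one
    score per preset user intent (one weight row per intent)."""
    return [sum(w * f for w, f in zip(row, features)) + b
            for row, b in zip(weights, bias)]

pooled = [[2.0, 4.0], [3.0, 6.0]]                    # pooling results per h_t
features = [v for chunk in pooled for v in chunk]    # spliced aggregated vector
weights = [[0.1, 0.0, 0.0, 0.0],                     # 3 intents x 4 features
           [0.0, 0.1, 0.0, 0.0],
           [0.0, 0.0, 0.1, 0.1]]
scores = fully_connected(features, weights, [0.0, 0.0, 0.0])
```

A softmax over `scores` would then yield the per-intent first probability values.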
Additionally, before the sample vectors are fed into the bidirectional GRU network, to prevent overfitting during training, input sample vectors can be randomly dropped according to a preset drop probability. For example, the preset drop probability can be set to 0.35, i.e. out of every 100 sample vectors, 35 are randomly dropped and the remaining 65 are input into the bidirectional GRU network. In addition, the preset size of the convolution window described above can be set according to actual use, for example a 5x5 convolution window.
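The random dropping of inputs can be sketched as follows (the seed is only for reproducibility of the sketch; the 0.35 probability is the example value above):

```python
import random

def drop_samples(samples, drop_probability, rng):
    """Randomly discard input samples with the preset drop probability,
    as a guard against overfitting."""
    return [s for s in samples if rng.random() >= drop_probability]

rng = random.Random(0)                      # seeded for reproducibility
kept = drop_samples(list(range(100)), 0.35, rng)
# roughly 65 of the 100 samples survive on average
```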
For step 405 above, it is to be understood that during training of the recurrent neural network, the parameters of the recurrent neural network need to be adjusted. For example, the network structure of the recurrent neural network mainly includes a recurrent layer, a pooling layer, a random-dropout layer, a regularization layer and a softmax layer, and each layer is provided with several parameters; during the training of one sample, adjusting these parameters affects the output result of the recurrent neural network. As an illustration, suppose that for the preset user intention "agree to listen", a sample vector under "agree to listen" is put into the recurrent neural network and the output result is [0.3, 0.2, 0.5]. The values of the 3 elements in the result represent the probabilities that the script text corresponding to the sample vector belongs to the 3 preset user intentions "agree to listen", "refuse to purchase" and "willing to wait" respectively, i.e., the probability that the script text belongs to "agree to listen" is 0.3, the probability that it belongs to "refuse to purchase" is 0.2, and the probability that it belongs to "willing to wait" is 0.5. Since the label value of the sample vector is 1, it is known that the script text belongs to "agree to listen"; therefore the parameters of the recurrent neural network can be adjusted so that the output of the recurrent neural network is as close as possible to [1, 0, 0], and in particular so that the value of the element corresponding to "agree to listen" in the output result is as close to 1 as possible.
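The adjustment target in this worked example can be made concrete with a loss value. The embodiment does not name a specific loss function, so cross-entropy here is an assumed (and common) choice for a softmax output; minimizing it drives the "agree to listen" element toward 1.

```python
import numpy as np

output = np.array([0.3, 0.2, 0.5])   # probabilities for "agree to listen",
                                     # "refuse to purchase", "willing to wait"
target = np.array([1.0, 0.0, 0.0])   # label value 1 marks "agree to listen"

# Cross-entropy between the one-hot target and the network output.
loss = -np.sum(target * np.log(output))

# A nearly correct output yields a much smaller loss.
better = -np.sum(target * np.log(np.array([0.99, 0.005, 0.005])))
```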
When executing step 405 to adjust the parameters of the recurrent neural network, the adjustment may also be carried out by an existing back-propagation algorithm, which is not further described here.
For step 406 above, it is to be understood that after steps 403-405 have been completed for each preset user intention, it may be judged whether the error between each sample result vector and the label value corresponding to each sample vector meets a preset training termination condition. If satisfied, it indicates that the parameters in the recurrent neural network have been adjusted into place, and it may be determined that the recurrent neural network has finished training; conversely, if not satisfied, it indicates that the recurrent neural network still needs further training. The training termination condition may be preset according to actual usage. Specifically, the training termination condition may be set as follows: if the error between each sample result vector and the label value corresponding to each sample vector is less than a specified error value, the preset training termination condition is deemed satisfied. Alternatively, it may be set as follows: steps 402-404 above are executed using the script texts in a validation set, and if the error between the sample result vectors output by the recurrent neural network and the label values lies within a certain range, the preset training termination condition is deemed satisfied. The collection of the script texts in the validation set is similar to step 401 above. Specifically, after the script texts under each preset user intention have been collected as in step 401, a certain proportion of the collected script texts may be divided into a training set and the remaining script texts divided into a validation set. For example, a random 80% of the collected script texts may be divided into the training set for subsequently training the recurrent neural network, and the other 20% divided into the validation set for subsequently verifying whether the recurrent neural network has finished training, i.e., whether the preset training termination condition is met.
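The 80/20 random division described above can be sketched as follows; the seed and the placeholder script texts are illustrative.

```python
import numpy as np

def split_scripts(script_texts, train_ratio=0.8, seed=0):
    # Randomly divide a preset proportion (here 80%) of the collected
    # script texts into the training set, the rest into the validation set.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(script_texts))
    cut = int(len(script_texts) * train_ratio)
    train = [script_texts[i] for i in idx[:cut]]
    valid = [script_texts[i] for i in idx[cut:]]
    return train, valid

texts = [f"script_{i}" for i in range(100)]
train_set, valid_set = split_scripts(texts)
```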
In this embodiment, it can be seen from the above description of step 404 that a random-dropout mechanism may be added during training to prevent the recurrent neural network from overfitting. By contrast, when the recurrent neural network is used for recognition, the random-dropout mechanism is not used, in order to guarantee the recognition accuracy of the recurrent neural network. For ease of understanding, further, step 103 may specifically include: after the target vector is input into the recurrent neural network, obtaining the output sequence of the bidirectional GRU network; then extracting the features of the output sequence using the convolution window to obtain convolution results; performing the average pooling operation and the max pooling operation on the convolution results to obtain pooling operation results; and aggregating all pooling operation results and inputting them into the fully-connected layer of the recurrent neural network to obtain the target result vector.
104. Determine the preset user intention with the highest first probability value as the target user intention corresponding to the target text.
It is understood that after the server obtains the target result vector output by the recurrent neural network, since each element in the target result vector is the first probability value corresponding to each preset user intention, and the first probability value characterizes the probability that the target text belongs to the corresponding preset user intention, it follows that the higher the first probability value, the higher the probability that the target text belongs to that preset user intention. Therefore, the server selects the preset user intention with the highest first probability value and determines it as the target user intention corresponding to the target text, which grasps the actual situation and true intention of the user to the greatest extent.
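The selection in step 104 reduces to an argmax over the target result vector, as in this sketch reusing the worked probabilities from the training example:

```python
import numpy as np

intentions = ["agree to listen", "refuse to purchase", "willing to wait"]
target_result = np.array([0.3, 0.2, 0.5])   # first probability values

# Pick the preset user intention with the highest first probability value.
target_intention = intentions[int(np.argmax(target_result))]
```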
In the embodiment of the present invention, firstly, the target text whose intention is to be recognized is obtained; then, vectorization processing is performed on the target text to obtain a target vector; then, the target vector is put as input into a pre-trained recurrent neural network to obtain the target result vector output by the recurrent neural network, where each element in the target result vector is the first probability value corresponding to each preset user intention, and the first probability value characterizes the probability that the target text belongs to the corresponding preset user intention; finally, the preset user intention with the highest first probability value is determined as the target user intention corresponding to the target text. It can be seen that the present invention can accurately recognize the true intention of the user from the target text through the pre-trained recurrent neural network, which not only avoids recognition deviations caused by gaps in experience and knowledge, but also eliminates the influence of subjective human factors, improves the accuracy of intention recognition, and helps enterprises grasp the true intention of the user and facilitate transactions.
It should be understood that the sequence numbers of the steps in the above embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
In one embodiment, a user intention recognition apparatus is provided, and the user intention recognition apparatus corresponds to the user intention recognition method in the above embodiments. As shown in Fig. 6, the user intention recognition apparatus includes a target text obtaining module 501, a text vectorization module 502, a vector input module 503 and an intention determining module 504. Each functional module is described in detail as follows:
the target text obtaining module 501, configured to obtain the target text whose intention is to be recognized;
the text vectorization module 502, configured to perform vectorization processing on the target text to obtain a target vector;
the vector input module 503, configured to put the target vector as input into a pre-trained recurrent neural network to obtain the target result vector output by the recurrent neural network, where each element in the target result vector is the first probability value corresponding to each preset user intention, and the first probability value characterizes the probability that the target text belongs to the corresponding preset user intention;
the intention determining module 504, configured to determine the preset user intention with the highest first probability value as the target user intention corresponding to the target text.
As shown in Fig. 7, further, the recurrent neural network may be pre-trained by the following modules:
a script text collection module 505, configured to separately collect the script texts belonging to each preset user intention;
a script text vectorization module 506, configured to perform vectorization processing on the collected script texts respectively to obtain the sample vector corresponding to each script text;
a sample labeling module 507, configured to, for each preset user intention, denote the label value of the sample vectors corresponding to the preset user intention as 1 and denote the label value of the other sample vectors as 0;
a sample input module 508, configured to, for each preset user intention, put all sample vectors as input into the recurrent neural network for training to obtain sample result vectors, where each element in each sample result vector respectively characterizes the probability that the script text corresponding to the sample vector belongs to each preset user intention;
a network parameter adjusting module 509, configured to, for each preset user intention, take each output sample result vector as the adjustment target and adjust the parameters of the recurrent neural network, so as to minimize the error between each obtained sample result vector and the label value corresponding to each sample vector;
a training completion determining module 510, configured to determine that the recurrent neural network has finished training if the error between each sample result vector and the label value corresponding to each sample vector meets a preset training termination condition.
As shown in Fig. 8, further, the sample input module 508 may include:
a training unit 5081, configured to, when each sample vector is input into the recurrent neural network for training, put each sample vector into the bidirectional GRU network to obtain the output sequence of the bidirectional GRU network; then extract the features of the output sequence using a convolution window of a preset size to obtain convolution results; perform the average pooling operation and the max pooling operation on the convolution results to obtain pooling operation results; and aggregate all pooling operation results and input them into the fully-connected layer of the recurrent neural network to obtain the sample result vector.
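The forward pass performed by training unit 5081 (bidirectional GRU, convolution window, average then max pooling, aggregation, fully-connected layer with softmax) can be sketched end-to-end in NumPy. All dimensions, the window sizes, the number of filters, and the randomly initialized weights below are illustrative assumptions; this is an untrained toy, not the embodiment's actual network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    # One GRU cell update: update gate z, reset gate r, candidate state.
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h)
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h)
    h_cand = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h))
    return (1.0 - z) * h + z * h_cand

def make_params(rng, d_in, d_h):
    return {k: 0.1 * rng.standard_normal((d_h, d_in if k[0] == "W" else d_h))
            for k in ("Wz", "Wr", "Wh", "Uz", "Ur", "Uh")}

def bi_gru(seq, p_fwd, p_bwd, d_h):
    # Run the sequence forward and backward, concatenating the hidden
    # states of both directions at each time step.
    h = np.zeros(d_h); fwd = []
    for x in seq:
        h = gru_step(x, h, p_fwd); fwd.append(h)
    h = np.zeros(d_h); bwd = []
    for x in reversed(seq):
        h = gru_step(x, h, p_bwd); bwd.append(h)
    bwd.reverse()
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
d_in, d_h, n_intents, T, win, n_filters = 8, 6, 3, 10, 3, 4
sample = [rng.standard_normal(d_in) for _ in range(T)]      # one sample vector

out_seq = bi_gru(sample, make_params(rng, d_in, d_h),
                 make_params(rng, d_in, d_h), d_h)          # T vectors, size 2*d_h

# Convolution window over the output sequence: one feature map per filter.
kernels = [rng.standard_normal((win, 2 * d_h)) for _ in range(n_filters)]
pooled = []
for k in kernels:
    conv = np.array([np.sum(k * np.stack(out_seq[i:i + win]))
                     for i in range(T - win + 1)])
    avg = np.array([conv[i:i + 2].mean() for i in range(len(conv) - 1)])
    pooled.append(avg.max())            # average pooling, then max pooling

aggregated = np.array(pooled)           # aggregate all pooling results
W_fc = 0.1 * rng.standard_normal((n_intents, n_filters))   # fully-connected layer
logits = W_fc @ aggregated
probs = np.exp(logits) / np.exp(logits).sum()   # softmax: sample result vector
```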
Further, the vector input module may include:
a recognition unit, configured to, after the target vector is input into the recurrent neural network, obtain the output sequence of the bidirectional GRU network; then extract the features of the output sequence using the convolution window to obtain convolution results; perform the average pooling operation and the max pooling operation on the convolution results to obtain pooling operation results; and aggregate all pooling operation results and input them into the fully-connected layer of the recurrent neural network to obtain the target result vector.
Further, the text vectorization module may include:
a row vector conversion unit, configured to convert each word in the target text into a one-dimensional row vector using a preset dictionary, where the dictionary records the correspondence between words and the one-dimensional row vectors;
a two-dimensional vector composing unit, configured to compose the one-dimensional row vectors into a two-dimensional vector as the target vector according to the order of the words in the target text.
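The two units above can be sketched together as a dictionary lookup followed by row-wise stacking; the three-word dictionary and its vector values are hypothetical placeholders for the preset dictionary.

```python
import numpy as np

# Hypothetical preset dictionary: each word maps to a one-dimensional row vector.
dictionary = {
    "agree":  np.array([0.2, 0.1, 0.0, 0.7]),
    "to":     np.array([0.0, 0.3, 0.3, 0.1]),
    "listen": np.array([0.5, 0.5, 0.2, 0.0]),
}

def vectorize(target_text):
    # Look up each word's row vector and stack the rows, in word order,
    # into a two-dimensional target vector.
    return np.stack([dictionary[w] for w in target_text.split()])

target_vector = vectorize("agree to listen")   # shape (3, 4)
```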
For the specific limitations of the user intention recognition apparatus, reference may be made to the limitations of the user intention recognition method above, and details are not repeated here. Each module in the above user intention recognition apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The above modules may be embedded in the form of hardware in, or independent of, the processor in the computer device, or may be stored in the memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in Fig. 9. The computer device includes a processor, a memory, a network interface and a database connected through a system bus. The processor of the computer device is configured to provide computing and control capability. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program and a database. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The database of the computer device is used for storing the data involved in the user intention recognition method. The network interface of the computer device is used for communicating with an external terminal through a network connection. When executed by the processor, the computer program implements a user intention recognition method.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor. When the processor executes the computer program, the steps of the user intention recognition method in the above embodiments are implemented, such as steps 101 to 104 shown in Fig. 2. Alternatively, when the processor executes the computer program, the functions of the modules/units of the user intention recognition apparatus in the above embodiments are implemented, such as the functions of modules 501 to 504 shown in Fig. 6. To avoid repetition, details are not described here again.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When the computer program is executed by a processor, the steps of the user intention recognition method in the above embodiments are implemented, such as steps 101 to 104 shown in Fig. 2. Alternatively, when the computer program is executed by a processor, the functions of the modules/units of the user intention recognition apparatus in the above embodiments are implemented, such as the functions of modules 501 to 504 shown in Fig. 6. To avoid repetition, details are not described here again.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be implemented by instructing relevant hardware through a computer program. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, database or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM), etc.
It is apparent to those skilled in the art that, for convenience and brevity of description, only the division of the above functional units and modules is illustrated by example. In practical applications, the above functions may be allocated to different functional units and modules as needed, i.e., the internal structure of the apparatus is divided into different functional units or modules to complete all or part of the functions described above.
The embodiments described above are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of the technical features may be equivalently replaced; and such modifications or replacements, which do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present invention, should all be included within the protection scope of the present invention.

Claims (10)

1. A user intention recognition method, characterized by comprising:
obtaining a target text whose intention is to be recognized;
performing vectorization processing on the target text to obtain a target vector;
putting the target vector as input into a pre-trained recurrent neural network to obtain a target result vector output by the recurrent neural network, wherein each element in the target result vector is a first probability value corresponding to each preset user intention, and the first probability value characterizes the probability that the target text belongs to the corresponding preset user intention; and
determining the preset user intention with the highest first probability value as a target user intention corresponding to the target text.
2. The user intention recognition method according to claim 1, wherein the recurrent neural network is pre-trained by the following steps:
separately collecting script texts belonging to each preset user intention;
performing vectorization processing on the collected script texts respectively to obtain a sample vector corresponding to each script text;
for each preset user intention, denoting the label value of the sample vectors corresponding to the preset user intention as 1, and denoting the label value of the other sample vectors as 0;
for each preset user intention, putting all sample vectors as input into the recurrent neural network for training to obtain sample result vectors, wherein each element in each sample result vector respectively characterizes the probability that the script text corresponding to the sample vector belongs to each preset user intention;
for each preset user intention, taking each output sample result vector as an adjustment target and adjusting the parameters of the recurrent neural network, so as to minimize the error between each obtained sample result vector and the label value corresponding to each sample vector; and
if the error between each sample result vector and the label value corresponding to each sample vector meets a preset training termination condition, determining that the recurrent neural network has finished training.
3. The user intention recognition method according to claim 2, wherein the putting, for each preset user intention, all sample vectors as input into the recurrent neural network for training to obtain the sample result vectors comprises:
when each sample vector is input into the recurrent neural network for training, putting each sample vector into a bidirectional GRU network to obtain an output sequence of the bidirectional GRU network; then extracting features of the output sequence using a convolution window of a preset size to obtain convolution results; performing an average pooling operation and a max pooling operation on the convolution results to obtain pooling operation results; and aggregating all pooling operation results and inputting them into a fully-connected layer of the recurrent neural network to obtain the sample result vector.
4. The user intention recognition method according to claim 3, wherein the putting the target vector as input into the pre-trained recurrent neural network to obtain the target result vector output by the recurrent neural network comprises:
after the target vector is input into the recurrent neural network, obtaining the output sequence of the bidirectional GRU network; then extracting features of the output sequence using the convolution window to obtain convolution results; performing the average pooling operation and the max pooling operation on the convolution results to obtain pooling operation results; and aggregating all pooling operation results and inputting them into the fully-connected layer of the recurrent neural network to obtain the target result vector.
5. The user intention recognition method according to any one of claims 1 to 4, wherein the performing vectorization processing on the target text to obtain the target vector comprises:
converting each word in the target text into a one-dimensional row vector using a preset dictionary, wherein the dictionary records the correspondence between words and the one-dimensional row vectors; and
composing the one-dimensional row vectors into a two-dimensional vector as the target vector according to the order of the words in the target text.
6. A user intention recognition apparatus, characterized by comprising:
a target text obtaining module, configured to obtain a target text whose intention is to be recognized;
a text vectorization module, configured to perform vectorization processing on the target text to obtain a target vector;
a vector input module, configured to put the target vector as input into a pre-trained recurrent neural network to obtain a target result vector output by the recurrent neural network, wherein each element in the target result vector is a first probability value corresponding to each preset user intention, and the first probability value characterizes the probability that the target text belongs to the corresponding preset user intention; and
an intention determining module, configured to determine the preset user intention with the highest first probability value as a target user intention corresponding to the target text.
7. The user intention recognition apparatus according to claim 6, wherein the recurrent neural network is pre-trained by the following modules:
a script text collection module, configured to separately collect script texts belonging to each preset user intention;
a script text vectorization module, configured to perform vectorization processing on the collected script texts respectively to obtain a sample vector corresponding to each script text;
a sample labeling module, configured to, for each preset user intention, denote the label value of the sample vectors corresponding to the preset user intention as 1 and denote the label value of the other sample vectors as 0;
a sample input module, configured to, for each preset user intention, put all sample vectors as input into the recurrent neural network for training to obtain sample result vectors, wherein each element in each sample result vector respectively characterizes the probability that the script text corresponding to the sample vector belongs to each preset user intention;
a network parameter adjusting module, configured to, for each preset user intention, take each output sample result vector as an adjustment target and adjust the parameters of the recurrent neural network, so as to minimize the error between each obtained sample result vector and the label value corresponding to each sample vector; and
a training completion determining module, configured to determine that the recurrent neural network has finished training if the error between each sample result vector and the label value corresponding to each sample vector meets a preset training termination condition.
8. The user intention recognition apparatus according to claim 7, wherein the sample input module comprises:
a training unit, configured to, when each sample vector is input into the recurrent neural network for training, put each sample vector into a bidirectional GRU network to obtain an output sequence of the bidirectional GRU network; then extract features of the output sequence using a convolution window of a preset size to obtain convolution results; perform an average pooling operation and a max pooling operation on the convolution results to obtain pooling operation results; and aggregate all pooling operation results and input them into a fully-connected layer of the recurrent neural network to obtain the sample result vector.
9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that the processor, when executing the computer program, implements the user intention recognition method according to any one of claims 1 to 5.
10. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the user intention recognition method according to any one of claims 1 to 5.
CN201910008318.7A 2019-01-04 2019-01-04 A kind of user's intension recognizing method, device, computer equipment and storage medium Withdrawn CN109858022A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910008318.7A CN109858022A (en) 2019-01-04 2019-01-04 A kind of user's intension recognizing method, device, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN109858022A true CN109858022A (en) 2019-06-07

Family

ID=66893966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910008318.7A Withdrawn CN109858022A (en) 2019-01-04 2019-01-04 A kind of user's intension recognizing method, device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109858022A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017092623A1 (en) * 2015-11-30 2017-06-08 北京国双科技有限公司 Method and device for representing text as vector
CN107346340A (en) * 2017-07-04 2017-11-14 北京奇艺世纪科技有限公司 A kind of user view recognition methods and system


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110390108A (en) * 2019-07-29 2019-10-29 中国工商银行股份有限公司 Task exchange method and system based on deeply study
CN110390108B (en) * 2019-07-29 2023-11-21 中国工商银行股份有限公司 Task type interaction method and system based on deep reinforcement learning
CN110503143A (en) * 2019-08-14 2019-11-26 平安科技(深圳)有限公司 Research on threshold selection, equipment, storage medium and device based on intention assessment
CN110503143B (en) * 2019-08-14 2024-03-19 平安科技(深圳)有限公司 Threshold selection method, device, storage medium and device based on intention recognition
CN110866592A (en) * 2019-10-28 2020-03-06 腾讯科技(深圳)有限公司 Model training method and device, energy efficiency prediction method and device and storage medium
CN110866592B (en) * 2019-10-28 2023-12-29 腾讯科技(深圳)有限公司 Model training method, device, energy efficiency prediction method, device and storage medium
CN111414467A (en) * 2020-03-20 2020-07-14 中国建设银行股份有限公司 Question-answer dialogue method and device, electronic equipment and computer readable storage medium
CN112115702A (en) * 2020-09-15 2020-12-22 北京明略昭辉科技有限公司 Intention recognition method, device, dialogue robot and computer readable storage medium
CN113609851A (en) * 2021-07-09 2021-11-05 浙江连信科技有限公司 Psychological idea cognitive deviation identification method and device and electronic equipment
CN115423485A (en) * 2022-11-03 2022-12-02 支付宝(杭州)信息技术有限公司 Data processing method, device and equipment

Similar Documents

Publication Publication Date Title
CN109858022A (en) A kind of user's intension recognizing method, device, computer equipment and storage medium
CN109829153A (en) Intension recognizing method, device, equipment and medium based on convolutional neural networks
CN110334201B (en) Intention identification method, device and system
CN109829038A (en) Question and answer feedback method, device, equipment and storage medium based on deep learning
CN109783617A (en) For replying model training method, device, equipment and the storage medium of problem
Shen et al. Customizing student networks from heterogeneous teachers via adaptive knowledge amalgamation
CN109697228A (en) Intelligent answer method, apparatus, computer equipment and storage medium
CN112036154B (en) Electronic medical record generation method and device based on inquiry dialogue and computer equipment
CN110866542B (en) Depth representation learning method based on feature controllable fusion
CN110021439A (en) Medical data classification method, device and computer equipment based on machine learning
Yu et al. Compositional attention networks with two-stream fusion for video question answering
CN110489550A (en) Text classification method, device and computer equipment based on combined neural networks
CN111783993A (en) Intelligent labeling method and device, intelligent platform and storage medium
CN110110611A (en) Portrait attribute model construction method, device, computer equipment and storage medium
CN110705490B (en) Visual emotion recognition method
CN106326857A (en) Gender identification method and gender identification device based on face image
CN109711356B (en) Expression recognition method and system
CN110222184A (en) Text emotion information recognition method and related apparatus
CN110019736A (en) Question and answer matching process, system, equipment and storage medium based on language model
CN109614627A (en) Text punctuation prediction method, device, computer equipment and storage medium
CN117762499B (en) Task instruction construction method and task processing method
Phan et al. Consensus-based sequence training for video captioning
CN109886110A (en) Micro-expression scoring method, device, computer equipment and storage medium
CN107967258A (en) Sentiment analysis method and system for text information
CN109977394A (en) Text model training method, text analyzing method, apparatus, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20190607