Summary of the invention
Embodiments of the present invention provide an artificial-intelligence-based dialogue generation method, apparatus, device and storage medium, so as to improve the output accuracy of a chatbot.
To solve the above technical problem, embodiments of the present invention provide an artificial-intelligence-based dialogue generation method comprising the steps of: obtaining a statement to be replied to, and inputting the statement into a retrieval model, wherein the retrieval model is configured to filter out, from a preset dialogue corpus, K candidate replies responding to the statement, K being a positive integer; obtaining the K candidate replies output by the retrieval model, and inputting the statement and the K candidate replies into a generative model, wherein the generative model filters out prediction words according to the statement, the K candidate replies, and the inverse document frequency of each word in a dictionary, and outputs a predicted reply composed of the prediction words; and obtaining a reply statement according to the predicted reply.
Embodiments of the present invention further provide an artificial-intelligence-based dialogue generation apparatus, comprising: a candidate-reply retrieval module, configured to obtain a statement to be replied to and input it into a retrieval model, wherein the retrieval model is configured to filter out, from a preset dialogue corpus, K candidate replies responding to the statement, K being a positive integer; a predicted-reply obtaining module, configured to obtain the K candidate replies output by the retrieval model and input the statement and the K candidate replies into a generative model, wherein the generative model filters out prediction words according to the statement, the K candidate replies, and the inverse document frequency of each word in a dictionary, and outputs a predicted reply composed of the prediction words; and a reply-statement obtaining module, configured to obtain a reply statement according to the predicted reply.
Embodiments of the present invention further provide a network device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the above artificial-intelligence-based dialogue generation method.
Embodiments of the present invention further provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the above artificial-intelligence-based dialogue generation method.
Compared with the prior art, embodiments of the present invention input the statement to be replied to into a retrieval model, so that K candidate replies can be obtained from the retrieval model; the statement and the K candidate replies are then input into a generative model, so that the information of both the statement and the retrieved candidate replies can be exploited by the generative model, making its output better formed. Furthermore, because the generative model takes the inverse document frequency of each dictionary word into account when selecting prediction words, the probability of high-frequency words being chosen for the reply is reduced, which suppresses the generation of universal answers and thus improves the output accuracy of the chatbot.
In addition, after the statement to be replied to and the K candidate replies are input into the generative model, the method further includes: encoding the statement and the K candidate replies to obtain a to-be-replied vector and K candidate-reply vectors; obtaining a context vector from the to-be-replied vector and the K candidate-reply vectors; calculating a comprehensive score for each word in the dictionary based on the context vector and the inverse document frequency, and taking the word with the highest comprehensive score as a prediction word; and obtaining a predicted reply composed of the prediction words. By encoding the statement and the K candidate replies into vectors, deriving a context vector from the encodings, and using that context vector as the input of the decoder of the generative model, the decoder input is optimized: the generative model can fully learn both the information in the candidate replies retrieved by the retrieval model and the expression style of the statement and the K candidate replies, so that the output of the generative model is better formed and more accurate.
In addition, encoding the statement to be replied to and the K candidate replies includes: encoding them with the same encoder. Inputting the statement and the K candidate replies into the same encoder of the generative model allows the encoder to fully learn their expression styles, which optimizes the accuracy of the generative model's output and strengthens the generalization ability of the encoder model.
In addition, obtaining the context vector from the to-be-replied vector and the K candidate-reply vectors includes: mapping the to-be-replied vector and the K candidate-reply vectors into different vector spaces and then concatenating them, and obtaining the context vector from the concatenation result. Mapping the vectors into different spaces enables the generative model to distinguish the information conveyed by the statement from that conveyed by the K candidate replies; deriving the context vector from the concatenation result lets the generative model use both the information and the expression style carried by the context vector to generate the dialogue reply, improving the output accuracy of the generative model.
In addition, the generative model includes a first parameter matrix and a second parameter matrix. The mapping and concatenation then includes: multiplying the to-be-replied vector by the first parameter matrix to obtain a transformed to-be-replied vector; multiplying the K candidate-reply vectors by the second parameter matrix to obtain K transformed candidate-reply vectors; and concatenating the transformed to-be-replied vector with the K transformed candidate replies to obtain the context vector. Multiplying the to-be-replied vector and the K candidate-reply vectors by the first and second parameter matrices respectively can reduce the weight of meaningless answers among the K candidate replies and increase the weight of meaningful answers in the statement and the K candidate replies, optimizing the decoder input and hence the output of the generative model.
In addition, calculating the comprehensive score of each word in the dictionary based on the context vector and the inverse document frequency includes calculating the score of each word with the following first calculation formula:

P(y_t | y_{t-1}, q, r) = α · softmax_score(w) + β · idf(w)

where P(y_t | y_{t-1}, q, r) is the comprehensive score of each word, y_t is the prediction word at time t, q is the statement to be replied to, r is a candidate reply, α and β are preset parameters of the generative model, idf(w) is the inverse document frequency of each word, and softmax_score(w) is the normalized exponential function value of each word, calculated with the following second calculation formula:

softmax_score(w) = softmax(h_t)

where h_t is the hidden-layer output of the generative model at time t and C_input is the context vector from which h_t is computed.
In addition, obtaining the reply statement according to the predicted reply includes: inputting the predicted reply and the K candidate replies into a preset classification model, and taking the result output by the preset classification model as the reply statement.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in detail below with reference to the accompanying drawings. However, those skilled in the art will appreciate that many technical details are set forth in the embodiments so that the reader may better understand the present application; the technical solutions claimed in the present application can still be implemented even without these technical details, and with various changes and modifications based on the following embodiments.
A first embodiment of the present invention relates to an artificial-intelligence-based dialogue generation method. A statement to be replied to is obtained and input into a retrieval model, wherein the retrieval model is configured to filter out, from a preset dialogue corpus, K candidate replies responding to the statement, K being a positive integer; the K candidate replies output by the retrieval model are obtained, and the statement and the K candidate replies are input into a generative model, which filters out prediction words according to the statement, the K candidate replies, and the inverse document frequency of each word in a dictionary, and outputs a predicted reply composed of the prediction words; a reply statement is then obtained according to the predicted reply. Because the candidate replies obtained by the retrieval model are fed into the generative model, the information they carry can be exploited by the generative model, realizing a combination of the retrieval model and the generative model and optimizing the generative model's output. Moreover, because the generative model selects prediction words according to the inverse document frequency of words, the probability of high-frequency words being chosen as the reply is reduced, suppressing the generation of universal answers and thus improving the output accuracy of the chatbot.
It should be noted that the specific execution body of this embodiment may be a server side, or a chip in a specific product, e.g., a chip in a chatbot. The server side is taken as an example below.
A flow diagram of the artificial-intelligence-based dialogue generation method provided by this embodiment is shown in Fig. 1, and specifically includes the following steps:
S101: obtaining a statement to be replied to and inputting it into a retrieval model, wherein the retrieval model is configured to filter out, from a preset dialogue corpus, K candidate replies responding to the statement, K being a positive integer.
S102: obtaining the K candidate replies output by the retrieval model, and inputting the statement and the K candidate replies into a generative model, wherein the generative model filters out prediction words according to the statement, the K candidate replies, and the inverse document frequency of each word in a dictionary, and outputs a predicted reply composed of the prediction words.
S103: obtaining a reply statement according to the predicted reply.
The preset dialogue corpus may be stored in a database in advance and may be composed of a plurality of dialogue or question-answer groups. The retrieval model may be obtained by extracting the features of questions and replies and then training on these features with a machine learning algorithm. The generative model may be obtained by training a neural network model with the dialogue or question-answer groups of the preset dialogue corpus. This embodiment places no particular limitation on the machine learning algorithm used to train the retrieval model or on the neural network model used by the generative model.
The statement to be replied to refers to the statement for which a reply is to be generated; it may be an interrogative or a non-interrogative sentence, i.e., it need not take the form of a question and may be any sentence. Optionally, the statement may be entered by a user at a client and sent by the client to the server side, so that the server side obtains it. The statement may be in text form or in speech form; optionally, when it is in speech form, the client or the server side converts the speech into text for input into the retrieval model and for subsequent calculation. A candidate reply refers to a reply statement filtered out by the retrieval model that responds to the statement to be replied to; there are K of them, K being a positive integer whose value may be set according to actual conditions and is not specifically limited here. The predicted reply refers to the reply statement composed of the prediction words generated by the generative model; it will be understood that there is one predicted reply.
The dictionary may be generated from the preset dialogue corpus, i.e., it may be formed from all the non-duplicated words contained in the preset dialogue corpus. The inverse document frequency (IDF) is the reciprocal of the document frequency. Its general calculation formula is:

idf(w) = log(N / df(w))

where N is the total number of documents and df(w) is the number of documents containing the word w. In this embodiment, since the preset dialogue corpus consists of sentences, the inverse document frequency of each word in the dictionary is calculated as:

idf(w) = log(S / s(w))

where S is the total number of sentences in the preset dialogue corpus and s(w) is the number of sentences containing the word w.
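As an illustration of this per-sentence idf statistic, the following minimal pure-Python sketch treats each sentence of the dialogue corpus as one document; the add-one smoothing in the denominator is an assumption for the sketch, not taken from the embodiment:

```python
import math
from collections import Counter

def sentence_idf(corpus_sentences):
    """Per-word inverse document frequency, treating each sentence of the
    dialogue corpus as one document; +1 in the denominator guards against
    division by zero (a common smoothing choice, assumed here)."""
    n = len(corpus_sentences)
    df = Counter()
    for sent in corpus_sentences:
        for w in set(sent.split()):
            df[w] += 1
    return {w: math.log(n / (1 + df[w])) for w in df}

corpus = ["i do not know", "i like tea", "tea is served at noon"]
idf = sentence_idf(corpus)
# "i" occurs in 2 of 3 sentences, so its idf is low; "noon" occurs in 1
```

Words that appear in almost every sentence, such as those forming universal answers, end up with idf near zero and are therefore penalised later in the scoring step.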
It will be understood that the neural network model in a generative model usually filters out prediction words by computing the normalized exponential function value (softmax value) of each word, i.e., by taking the word with the largest softmax value as the prediction word. In this embodiment, optionally, the softmax value and the IDF value of each word are each assigned a different weight coefficient before the calculation, and the word corresponding to the largest calculated result is taken as the prediction word. Because the generative model combines the inverse document frequency of each word when filtering out prediction words, the probability of high-frequency words being chosen as the reply can be reduced, suppressing the generation of universal answers such as "I don't know" or "haha", so that the reply statement output by the chatbot is more reasonable.
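A minimal sketch of this weighted selection follows; the weight values α and β below are arbitrary illustrations, not the trained parameters of the embodiment:

```python
import math

def softmax(scores):
    """Normalized exponential over a dict of raw scores."""
    m = max(scores.values())
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    z = sum(exps.values())
    return {w: e / z for w, e in exps.items()}

def pick_prediction_word(logits, idf, alpha=0.7, beta=0.3):
    """Comprehensive score: alpha * softmax_score(w) + beta * idf(w).
    High-frequency words (low idf) are penalised relative to pure softmax."""
    probs = softmax(logits)
    combined = {w: alpha * p + beta * idf.get(w, 0.0) for w, p in probs.items()}
    return max(combined, key=combined.get)

# toy decoder scores: "know" has the larger raw score, but its idf is 0
# (it appears in almost every sentence), so "noon" wins after weighting
logits = {"know": 2.0, "noon": 1.8}
idf = {"know": 0.0, "noon": 0.9}
best = pick_prediction_word(logits, idf)
```

With β set to zero the selection degenerates to the usual argmax over softmax values, which is exactly the behaviour the idf term is meant to correct.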
Specifically, the server side inputs the statement to be replied to into the trained retrieval model, which filters out, from the preset dialogue corpus, K candidate replies responding to the statement; the server side then obtains the K candidate replies output by the retrieval model and inputs them together with the statement into the trained generative model, which filters out prediction words one by one according to the statement, the K candidate replies, and the inverse document frequency of each word in the dictionary, and composes these prediction words into the predicted reply. It will be appreciated that because the information of the statement and of the K candidate replies retrieved by the retrieval model is exploited by the generative model, the well-formedness of the predicted reply it outputs is improved. Optionally, the statement and the K candidate replies may be composed into a matrix serving as the input of the neural network model in the generative model. Optionally, the generative model may include an encoder (encoder model) and a decoder (decoder model): the encoder first encodes the statement and the K candidate replies into vector form, which is then composed into a matrix and input into the decoder. After the generative model generates the predicted reply, the server side obtains it and obtains the reply statement from it. Optionally, the server side takes the predicted reply as the reply statement and outputs it to the client. Optionally, the reply statement may be converted from text into speech and output to the client in speech form.
Compared with the prior art, this embodiment inputs the obtained statement to be replied to into a retrieval model, so that K candidate replies can be obtained from the retrieval model; the statement and the K candidate replies are then input into a generative model, so that the information of both can be exploited by the generative model, making its output better formed. Furthermore, because the generative model takes the inverse document frequency of each dictionary word into account when selecting prediction words, the probability of high-frequency words being chosen for the reply is reduced, suppressing the generation of universal answers and improving the output accuracy of the chatbot.
In a specific example, as shown in Fig. 2, another flow diagram of the artificial-intelligence-based dialogue generation method provided by this embodiment specifically includes the following steps:
S101: obtaining a statement to be replied to and inputting it into a retrieval model, wherein the retrieval model is configured to filter out, from a preset dialogue corpus, K candidate replies responding to the statement, K being a positive integer.
S1021: obtaining the K candidate replies output by the retrieval model, and inputting the statement and the K candidate replies into the generative model.
S1022: encoding the statement and the K candidate replies to obtain a to-be-replied vector and K candidate-reply vectors.
S1023: obtaining a context vector from the to-be-replied vector and the K candidate-reply vectors.
S1024: calculating the comprehensive score of each word in the dictionary based on the context vector and the inverse document frequency, and taking the word with the highest comprehensive score as the prediction word.
S1025: obtaining the predicted reply composed of the prediction words.
S103: obtaining a reply statement according to the predicted reply.
S101, S103 and S1021 are the same as described above and are not repeated here.
In S1022, optionally, the generative model includes an encoder, and the server side encodes the statement to be replied to and the K candidate replies through the encoder: encoding the statement yields the to-be-replied vector, and encoding the K candidate replies yields the K candidate-reply vectors. The encoder may use a model such as LSTM, GRU or Transformer, which is not specifically limited here.
It will be understood that the generative model may encode the statement and the K candidate replies with different encoders. Optionally, encoding the statement to be replied to and the K candidate replies means encoding them with one and the same encoder.
Inputting the statement and the K candidate replies into the same encoder of the generative model allows the encoder to fully learn their expression styles, optimizing the accuracy of the generative model's output. It will be understood that the encoder can be trained in the same manner; as the number of uses grows during operation, the model in the encoder keeps learning, which makes the generalization ability of the encoder model stronger.
In S1023, obtaining the context vector from the to-be-replied vector and the K candidate-reply vectors may specifically be: mapping the to-be-replied vector and the K candidate-reply vectors into different vector spaces and then concatenating them, and obtaining the context vector from the concatenation result.
Specifically, mapping the to-be-replied vector and the K candidate-reply vectors into different vector spaces may be realized by multiplying them by different parameter matrices. Optionally, the concatenation after the mapping may concatenate the mapped to-be-replied vector with each mapped candidate reply separately and form a matrix from the K concatenation results, taking that matrix as the context vector; or it may directly concatenate the mapped to-be-replied vector with the K mapped candidate replies into a single vector, taking that vector as the context vector.
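The second (single-vector) variant of this mapping-and-concatenation can be sketched as follows; the 2x2 matrices here are toy illustrations standing in for the trained parameter matrices:

```python
def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def build_context(q_vec, reply_vecs, Wq, Wr):
    """Map the to-be-replied vector with Wq and each candidate-reply vector
    with Wr, then concatenate everything into a single context vector."""
    ctx = list(matvec(Wq, q_vec))
    for r in reply_vecs:
        ctx.extend(matvec(Wr, r))
    return ctx

# toy 2x2 matrices standing in for the trained parameter matrices
Wq = [[1.0, 0.0], [0.0, 1.0]]
Wr = [[0.5, 0.0], [0.0, 0.5]]
ctx = build_context([1.0, 2.0], [[2.0, 4.0]], Wq, Wr)
```

In a real model the two matrices would be learned jointly with the rest of the network, so that the query part and the candidate-reply part of the context vector live in spaces the decoder can tell apart.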
Mapping the to-be-replied vector and the K candidate-reply vectors into different vector spaces enables the generative model to distinguish the information conveyed by the statement from that conveyed by the K candidate replies; obtaining the context vector from the concatenation result lets the generative model use the information and expression style of the context vector to generate the dialogue reply, improving the output accuracy of the generative model.
Optionally, the generative model includes a first parameter matrix and a second parameter matrix, both obtained by training on the existing question-answer groups in the preset dialogue corpus. The mapping and concatenation may then specifically be: multiplying the to-be-replied vector by the first parameter matrix to obtain the transformed to-be-replied vector; multiplying the K candidate-reply vectors by the second parameter matrix to obtain the K transformed candidate-reply vectors; and concatenating the transformed to-be-replied vector with the K transformed candidate replies to obtain the context vector.
Here, concatenating the transformed to-be-replied vector with the K transformed candidate replies means concatenating the transformed to-be-replied vector with each transformed candidate reply and forming a matrix from the K concatenation results, taking that matrix as the context vector. Optionally, the generative model further includes a decoder; the context vector is input into the decoder, which obtains the output of the generative model according to the context vector and the neural network model it uses. The decoder may be a model such as LSTM, GRU or Transformer.
Multiplying the to-be-replied vector and the K candidate-reply vectors by the first and second parameter matrices respectively can reduce the weight of meaningless answers among the K candidate replies and increase the weight of meaningful answers in the statement and the K candidate replies, optimizing the decoder input and hence the output of the generative model.
In S1024, the server side calculates the softmax values based on the context vector, assigns the softmax value and the inverse document frequency of each word in the dictionary different weight coefficients, multiplies each by its respective weight coefficient and sums them to obtain the comprehensive score of each word, and selects the word with the highest comprehensive score as the prediction word.
Optionally, the comprehensive score of each word may be calculated with the following first calculation formula:

P(y_t | y_{t-1}, q, r) = α · softmax_score(w) + β · idf(w)

where P(y_t | y_{t-1}, q, r) is the comprehensive score of each word, y_t is the prediction word at time t, q is the statement to be replied to, r is a candidate reply, α and β are preset parameters of the generative model (i.e., the above weight coefficients), idf(w) is the inverse document frequency of each word, and softmax_score(w) is the normalized exponential function value of each word, calculated with the following second calculation formula:

softmax_score(w) = softmax(h_t)

where h_t is the hidden-layer output of the generative model at time t and C_input is the context vector from which h_t is computed.
It will be appreciated that the generative model generates word by word; the server side obtains the predicted reply from all the prediction words and then obtains the reply statement from the predicted reply. Optionally, the server side may take the predicted reply as the reply statement. By encoding the statement to be replied to and the K candidate replies into vectors, obtaining the context vector from the encodings, and using the context vector as the decoder input of the generative model, the decoder input is optimized: the generative model fully learns the information in the candidate replies retrieved by the retrieval model and the expression style of the statement and the K candidate replies, so that the output of the generative model is better formed and more accurate.
In a specific example, in S103, obtaining the reply statement according to the predicted reply may include: inputting the predicted reply and the K candidate replies into a preset classification model, and taking the result output by the preset classification model as the reply statement.
The preset classification model may be an algorithm model such as a decision tree, a support vector machine, or a random forest. Preferably, the preset classification model is an xgboost model.
Specifically, the server side combines the predicted reply obtained from the generative model with the K candidate replies into a larger candidate answer set, inputs these answers into the preset classification model, and takes the result of the preset classification model as the final result, i.e., the reply statement.
Using the preset classification model to further screen the predicted reply and the K candidate replies makes the reply statement more accurate, improving the output accuracy of the chatbot.
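This final screening step can be sketched as below; a simple word-overlap scorer stands in for the trained xgboost classifier and is purely a hypothetical illustration:

```python
def overlap_score(query, reply):
    """Hypothetical stand-in scorer: Jaccard word overlap between the
    to-be-replied statement and a candidate answer."""
    q, r = set(query.split()), set(reply.split())
    return len(q & r) / (len(q | r) or 1)

def rerank(query, answer_set, score_fn):
    """Screen the combined answer set (predicted reply plus K candidate
    replies) with a classifier-style scorer and keep the best match."""
    return max(answer_set, key=lambda a: score_fn(query, a))

answers = ["i do not know", "tea is served at noon"]
best = rerank("when is tea served", answers, overlap_score)
```

In the embodiment the scorer is a trained classification model rather than raw overlap, but the control flow, scoring every member of the combined answer set and returning the top one, is the same.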
Referring to Fig. 3, it is a schematic diagram of the concrete principle of the artificial-intelligence-based dialogue generation method provided by this embodiment. In the figure, Retrieval model is the retrieval model, Encoder model is the encoder model in the generative model, Decoder model is the decoder model in the generative model, and Word idf refers to the inverse document frequency of words. A specific example is described below.
Taking question-vector cosine similarity as the retrieval model and a bidirectional GRU as the encoder and decoder models of the generative model, the detailed process is as follows:

(1) The statement to be replied to input by the user is denoted q and encoded as a vector V_q. The questions of all question-answer pairs in the preset dialogue corpus are denoted Q_i (i = 1, 2, ..., n), where n is the number of question-answer pairs, and are encoded as vectors V_{Q_i}, as follows:
Here the sentence_encoding model may use a word-vector additive model, i.e.:

V_s = sentence_encoding(s) = Σ_{w ∈ s} word_embedding(w)

where s is a sentence to be encoded as a vector, namely the statement to be replied to and each question in (1), w is a word in s, and word_embedding may use a pre-trained model such as word2vec.
(2) The cosine similarity between V_q and each V_{Q_i} is computed; the answers corresponding to the K questions with the largest values are selected as candidate replies, denoted {r_1, r_2, ..., r_k}.
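Steps (1) and (2) can be sketched as follows; the tiny 2-dimensional embedding table is hypothetical, standing in for a pre-trained word2vec model:

```python
import math

# hypothetical 2-d word embeddings standing in for a pre-trained
# word2vec model
EMB = {
    "hello": [1.0, 0.0], "there": [0.5, 0.5],
    "bye": [0.0, 1.0], "now": [0.1, 0.9],
}

def encode(sentence):
    """Additive word-vector model: a sentence vector is the sum of the
    embeddings of its words (unknown words contribute nothing)."""
    vec = [0.0, 0.0]
    for w in sentence.split():
        e = EMB.get(w, [0.0, 0.0])
        vec = [a + b for a, b in zip(vec, e)]
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_top_k(query, qa_pairs, k):
    """Rank stored questions by cosine similarity to the query and return
    the answers of the top k as candidate replies."""
    qv = encode(query)
    ranked = sorted(qa_pairs, key=lambda qa: cosine(qv, encode(qa[0])),
                    reverse=True)
    return [answer for _, answer in ranked[:k]]

pairs = [("hello there", "hi"), ("bye now", "see you")]
candidates = retrieve_top_k("hello", pairs, k=1)
```

Because similarity is computed against the stored questions rather than the answers, the retrieval step returns the answers attached to the most similar questions, exactly as described for the question-answer pairs above.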
(3) The words in the preset dialogue corpus are counted to generate the dictionary, and the idf value of each word in the dictionary is counted.
(4) q and {r_1, r_2, ..., r_k} are each fed into the same bidirectional GRU model to obtain the corresponding sentence vectors h_q and h_{r_i}, where h_q = [h_q_fwd; h_q_bwd], h_q_fwd being the encoding of q by the forward GRU and h_q_bwd the encoding of q by the backward GRU (and similarly for each h_{r_i}).
(5) A space conversion is applied to the results of (4) with W_q (the first parameter matrix) and W_r (the second parameter matrix) to obtain the final q vector and r vectors, i.e., v_q = W_q · h_q and v_i = W_r · h_{r_i} (i = 1, ..., k), which are concatenated as follows:

C_input = [v_q, v_1, v_2, ..., v_k]

C_input denotes the context vector obtained jointly from the user query q and the retrieval model results r, and serves as one of the decoder inputs.
(6) The decoder generates the response from the context vector provided by (5). The calculation is:

P(y_t | y_{t-1}, q, r) = α · softmax_score(w) + β · idf(w)

In particular, softmax_score(w) = softmax(h_t), where the hidden-layer result h_t of the current time step is computed by the GRU from the previous hidden state, the previous prediction word, and C_input; y_init is a random initialization vector, and α and β are model parameters representing the weights of the softmax value and the idf value, learned during model training.
Specifically, the hidden-layer result h_t of the current time step is calculated from the context vector of (5), the previous word y_{t-1} predicted by the model, and the hidden-layer output h_{t-1} of the previous time step of the GRU; the softmax value of each word in the dictionary is then computed to obtain the softmax_score of each word; a new score is then calculated together with the idf values from (3), the result being the final score of each word, and the word with the highest score is selected as the current prediction word y_t. This prediction word, all the prediction results before it, and the context vector of (5) together serve as the input of the next prediction, sequentially generating a complete sentence as one output of the generative model.
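The decoding loop of (6) can be sketched as follows; the step function here is a toy stand-in for one GRU time step conditioned on the context vector (the real model's hidden-state arithmetic is not reproduced):

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    z = sum(es)
    return [e / z for e in es]

def greedy_decode(step_fn, vocab, idf, alpha, beta, max_len=10, eos="<eos>"):
    """Word-by-word greedy decoding: at each time step combine the model's
    softmax scores with idf, keep the best word, and feed it back in as the
    input of the next prediction, until <eos> or max_len."""
    words, prev, hidden = [], "<bos>", None
    for _ in range(max_len):
        logits, hidden = step_fn(prev, hidden)
        probs = softmax(logits)
        scores = [alpha * p + beta * idf.get(w, 0.0)
                  for p, w in zip(probs, vocab)]
        prev = vocab[scores.index(max(scores))]
        if prev == eos:
            break
        words.append(prev)
    return " ".join(words)

# toy one-step model: after the first word it strongly prefers to stop
vocab = ["know", "noon", "<eos>"]
def step(prev, hidden):
    logits = [2.0, 1.8, -9.0] if prev == "<bos>" else [2.0, 1.8, 3.5]
    return logits, None

idf = {"know": 0.0, "noon": 0.9}
reply = greedy_decode(step, vocab, idf, alpha=0.7, beta=0.3)
```

With the idf weight β set to zero, the same loop picks the highest-softmax word instead, which is the behaviour the embodiment's weighting is designed to move away from.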
(7) The retrieval results of the retrieval model in (2) and the output of the generative model in (6) are combined into a larger candidate answer set, from which the answer best matching the statement to be replied to is screened out with the xgboost model; this answer is returned to the user as the reply statement.
The division of the above methods into steps is merely for clarity of description; upon implementation, steps may be merged into one step, or certain steps may be split into multiple steps; as long as the same logical relationship is included, they are all within the protection scope of this patent. Adding insignificant modifications to an algorithm or flow, or introducing insignificant designs, without changing the core design of the algorithm and flow, is also within the protection scope of this patent.
A second embodiment of the present invention relates to an artificial-intelligence-based dialogue generation apparatus, as shown in Fig. 4, comprising: a candidate-reply retrieval module 301, a predicted-reply obtaining module 302, and a reply-statement obtaining module 303. Specifically:
The candidate-reply retrieval module 301 is configured to obtain a statement to be replied to and input it into a retrieval model, wherein the retrieval model is configured to filter out, from a preset dialogue corpus, K candidate replies responding to the statement, K being a positive integer.
The predicted-reply obtaining module 302 is configured to obtain the K candidate replies output by the retrieval model and input the statement and the K candidate replies into a generative model, wherein the generative model filters out prediction words according to the statement, the K candidate replies, and the inverse document frequency of each word in a dictionary, and outputs a predicted reply composed of the prediction words.
The reply-statement obtaining module 303 is configured to obtain a reply statement according to the predicted reply.
Further, the predicted-reply obtaining module 302 is also configured to: encode the statement and the K candidate replies to obtain a to-be-replied vector and K candidate-reply vectors; obtain a context vector from the to-be-replied vector and the K candidate-reply vectors; calculate the comprehensive score of each word in the dictionary based on the context vector and the inverse document frequency, and take the word with the highest comprehensive score as the prediction word; and obtain the predicted reply composed of the prediction words.
Further, encoding the statement and the K candidate replies includes: encoding them with the same encoder.
Further, obtaining the context vector from the to-be-replied vector and the K candidate-reply vectors includes: mapping the to-be-replied vector and the K candidate-reply vectors into different vector spaces and then concatenating them, and obtaining the context vector from the concatenation result.
Further, the generative model includes a first parameter matrix and a second parameter matrix;
mapping the to-be-replied vector and the K candidate reply vectors into different vector spaces and then splicing them, and obtaining the context vector according to the splicing result, comprises:
multiplying the to-be-replied vector by the first parameter matrix to obtain a transformed to-be-replied vector;
multiplying the K candidate reply vectors by the second parameter matrix to obtain K transformed candidate reply vectors;
splicing the transformed to-be-replied vector and the K transformed candidate reply vectors to obtain the context vector.
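The two-matrix mapping and splicing step can be sketched as follows. The matrix sizes and values are made up for the example; a real implementation would use learned parameter matrices.

```python
# Illustrative sketch: map the query and candidate vectors through the
# first and second parameter matrices, then splice into a context vector.

def matvec(matrix, vec):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]

W1 = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]    # first parameter matrix (3x2)
W2 = [[0.5, 0.5], [1.0, -1.0], [0.0, 2.0]]   # second parameter matrix (3x2)

q_vec = [2.0, 3.0]                    # encoded to-be-replied vector
cand_vecs = [[1.0, 0.0], [0.0, 1.0]]  # K = 2 candidate reply vectors

q_t = matvec(W1, q_vec)                       # transformed to-be-replied vector
cand_t = [matvec(W2, c) for c in cand_vecs]   # transformed candidate vectors

# Splice (concatenate) everything into the context vector.
context = q_t + [x for c in cand_t for x in c]
print(len(context))  # 3 * (K + 1) = 9
```

Using two different matrices maps the query and the candidates into different vector spaces before concatenation, as the embodiment describes.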
Further, calculating the comprehensive score of each word in the dictionary based on the context vector and the inverse document frequency comprises:
calculating the comprehensive score of each word according to the following first calculation formula:
P(y_t | y_{t-1}, q, r) = α · softmax_score(w) + β · idf(w)
wherein P(y_t | y_{t-1}, q, r) is the comprehensive score of each word, y_t is the prediction word at time t, q is the statement to be replied to, r is the candidate reply, α and β are preset parameters of the generative model, idf(w) is the inverse document frequency of each word, and softmax_score(w) is the normalized exponential function value of each word, calculated with the following second calculation formula:
wherein h_t is the hidden-layer output of the generative model at time t, and C_input is the context vector.
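The first calculation formula can be sketched directly. The logits, idf values, and the α/β settings below are illustrative assumptions, not values from the patent.

```python
import math

def comprehensive_scores(logits, idf, alpha=0.7, beta=0.3):
    """First calculation formula:
    P(y_t | y_{t-1}, q, r) = alpha * softmax_score(w) + beta * idf(w)."""
    m = max(logits)                              # shift for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [alpha * e / total + beta * i for e, i in zip(exps, idf)]

logits = [2.0, 1.0, 0.5, 0.1]   # per-word model scores (illustrative)
idf    = [0.1, 0.8, 0.3, 0.6]   # inverse document frequency per dictionary word
scores = comprehensive_scores(logits, idf)
best = scores.index(max(scores))  # word with the highest comprehensive score
print(best)  # -> 0
```

Blending the softmax score with idf lets rarer, more informative words compete with words the model merely assigns high probability.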
Further, the reply statement acquisition module 303 is also configured to: input the prediction reply and the K candidate replies into a preset classification model, and obtain the result output by the preset classification model as the reply statement.
It is not difficult to find that the present embodiment is an apparatus embodiment corresponding to the first embodiment, and the present embodiment can be implemented in cooperation with the first embodiment. The relevant technical details mentioned in the first embodiment remain valid in the present embodiment and, to reduce repetition, are not repeated here. Correspondingly, the relevant technical details mentioned in the present embodiment are also applicable to the first embodiment.
It should be noted that each module involved in the present embodiment is a logic module. In practical applications, a logic unit may be one physical unit, part of one physical unit, or a combination of multiple physical units. In addition, in order to highlight the innovative part of the present invention, units less closely related to solving the technical problem proposed by the present invention are not introduced in the present embodiment, but this does not indicate that no other units exist in the present embodiment.
A third embodiment of the present invention relates to a network device. As shown in Figure 5, the network device includes at least one processor 401 and a memory 402 communicatively connected to the at least one processor 401, wherein the memory 402 stores instructions executable by the at least one processor 401, and the instructions are executed by the at least one processor 401 so that the at least one processor 401 is able to carry out the above dialogue generation method based on artificial intelligence.
The memory 402 and the processor 401 are connected by a bus. The bus may include any number of interconnected buses and bridges, and links one or more processors 401 with various circuits of the memory 402. The bus may also link various other circuits, such as peripheral devices, voltage regulators, and power management circuits, all of which are well known in the art and are therefore not further described herein. A bus interface provides an interface between the bus and a transceiver. The transceiver may be a single element or multiple elements, such as multiple receivers and transmitters, providing a unit for communicating with various other apparatuses over a transmission medium. Data processed by the processor 401 is transmitted over a wireless medium through an antenna; further, the antenna also receives data and transfers the data to the processor 401.
The processor 401 is responsible for managing the bus and general processing, and may also provide various functions, including timing, peripheral interfaces, voltage regulation, power management, and other control functions. The memory 402 may be used to store data used by the processor 401 when performing operations.
A fourth embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program, when executed by a processor, implements the above method embodiments.
That is, those skilled in the art will appreciate that all or part of the steps of the methods of the above embodiments can be completed by instructing relevant hardware through a program. The program is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Those skilled in the art will understand that the above embodiments are specific embodiments for realizing the present invention, and that in practical applications various changes in form and detail may be made thereto without departing from the spirit and scope of the present invention.