CN108038209A - Answer selection method, device and computer-readable storage medium - Google Patents
Answer selection method, device and computer-readable storage medium
- Publication number
- CN108038209A CN108038209A CN201711363369.9A CN201711363369A CN108038209A CN 108038209 A CN108038209 A CN 108038209A CN 201711363369 A CN201711363369 A CN 201711363369A CN 108038209 A CN108038209 A CN 108038209A
- Authority
- CN
- China
- Prior art keywords
- answer
- selection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/205—Parsing
- G06F40/216—Parsing using statistical methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
- G06F40/289—Phrasal analysis, e.g. finite state techniques or chunking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
Abstract
The invention discloses an answer selection method, including: obtaining a preset contextual-information recognition model and session information containing at least a question to be answered; recognizing the context in the session information according to the recognition model to determine the relevant contextual information in the session information; segmenting the relevant contextual information into words and calculating the term frequency-inverse document frequency (TF-IDF) value of each word in the relevant contextual information; extracting a preset number of keywords from the relevant contextual information according to the TF-IDF values; and retrieving a knowledge base with the extracted keywords and the question to be answered, then selecting the corresponding answer by a preset question-selection rule. The invention also discloses an answer selection device and a computer-readable storage medium. The present invention improves the accuracy of answer selection.
Description
Technical field
The present invention relates to the field of information processing, and more particularly to an answer selection method, device and computer-readable storage medium.
Background technology
In recent years, with the rapid development of the Internet, information resources have grown exponentially. These abundant Internet resources bring great convenience to people's lives, and intelligent robots are gradually maturing in every field.
At present, however, the accuracy with which an intelligent robot selects the best answer from a knowledge base for a question in the session information is relatively low.
The content of the invention
The main object of the present invention is to provide an answer selection method, device and computer-readable storage medium that improve the accuracy of answer selection.
To achieve the above object, the present invention provides an answer selection method comprising the following steps:
obtaining a preset contextual-information recognition model and session information containing at least a question to be answered;
recognizing the context in the session information according to the recognition model, and determining the relevant contextual information in the session information;
segmenting the relevant contextual information into words, and calculating the term frequency-inverse document frequency (TF-IDF) value of each word in the relevant contextual information;
extracting a preset number of keywords from the relevant contextual information according to the TF-IDF values;
retrieving a knowledge base with the extracted keywords and the question to be answered, and selecting the corresponding answer by a preset question-selection rule.
Optionally, the step of retrieving the knowledge base with the extracted keywords and the question to be answered and selecting the corresponding answer by the preset question-selection rule includes:
retrieving the knowledge base with the extracted keywords and the question to be answered to obtain candidate answers;
scoring the candidate answers according to a preset rule, and selecting the highest-scoring answer according to the scores of the candidate answers.
Optionally, the step of scoring the candidate answers according to the preset rule and selecting the highest-scoring answer according to their scores includes:
building a matching vector between each sentence in the session information and each candidate answer by a word-embedding model (WE) and a bag-of-words model respectively;
extracting the pooled features of the session information and the candidate answers with the constructed matching vectors and a preset convolutional neural network (CNN);
encoding the extracted pooled features with an LSTM to calculate the score of each candidate answer.
Optionally, the formula for building the matching vector between each sentence in the session information and each candidate answer by the word-embedding model WE is as follows:
where e denotes a word embedding, u denotes a sentence in the session information, i denotes the position of a word in that sentence, r denotes a candidate answer, and j denotes the position of a word in the corresponding candidate answer.
Optionally, the step of extracting the pooled features of the session information and the candidate answers with the constructed matching vectors and the preset convolutional neural network CNN includes:
extracting convolutional features through a dual-channel convolutional layer;
obtaining the pooled features from the convolutional features according to a first preset algorithm;
where the calculation formula of the dual-channel convolutional layer is as follows:
f = 1, 2 indexes the word-embedding channel and the bag-of-words channel respectively, F denotes the number of feature maps, l denotes the layer, and W, b denote parameters;
the first preset algorithm is:
P_W and P_h denote the width and height of the two-dimensional pooling window respectively.
Optionally, the step of encoding the extracted pooled features with the LSTM to calculate the score of a candidate answer includes:
encoding the pooled features into a hidden state h by the LSTM;
feeding the hidden state h as input to a parameterization to obtain the corresponding hidden-layer state;
normalizing the hidden-layer state with a regression cost function to obtain the score of the candidate answer.
Optionally, the step of encoding the pooled features into the hidden state h by the LSTM includes:
obtaining the hidden state h by the following algorithm:
i_t = σ(W^(i) x_t + U^(i) h_(t-1) + b^(i))
f_t = σ(W^(f) x_t + U^(f) h_(t-1) + b^(f))
o_t = σ(W^(o) x_t + U^(o) h_(t-1) + b^(o))
u_t = tanh(W^(u) x_t + U^(u) h_(t-1) + b^(u))
where f is the forget gate, which decides what information to discard from the cell state; o is the output gate, which decides what value to output; u_t is the candidate update generated from the input; the update to the cell state is produced by the two gates i and f; and c is the new cell state obtained after combining i, f and u_t.
Optionally, the step of feeding the hidden state h as input to the parameterization to obtain the corresponding hidden-layer state includes:
taking the last hidden state as the hidden-layer state, with the following calculation formula:
or, obtaining the hidden-layer state as a linear combination of all hidden states, with the following calculation formula:
or, combining all hidden states with an attention mechanism to obtain the hidden-layer state, with the following calculation formula:
Optionally, the regression cost function is:
In addition, to achieve the above object, the present invention also provides an answer selection device, including a memory, a processor, and a computer program stored on the memory and runnable on the processor; when executed by the processor, the computer program implements the steps of the method described above.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium on which an answer selection program is stored; when executed by a processor, the answer selection program implements the steps of the answer selection method described above.
The embodiment of the present invention obtains a preset contextual-information recognition model and session information containing at least a question to be answered; recognizes the context in the session information according to the recognition model to determine the relevant contextual information; segments the relevant contextual information and calculates the TF-IDF value of each word in it; extracts a preset number of keywords according to the TF-IDF values; and retrieves a knowledge base with the extracted keywords and the question to be answered, selecting the corresponding answer by a preset question-selection rule. In this way, the present invention first identifies the context in the session information, extracts a preset number of keywords by the TF-IDF values of the words in the relevant contextual information, selects the corresponding candidate answers from the knowledge base according to the question and the keywords, and then selects the best answer by a preset CNN-LSTM rule that fuses word embeddings and bag-of-words features, thereby improving the accuracy of answer selection.
Brief description of the drawings
Fig. 1 is a schematic structural diagram of the device of the hardware running environment involved in the embodiment of the present invention;
Fig. 2 is a flow diagram of the first embodiment of the answer selection method of the present invention;
Fig. 3 is a refined flow diagram of the step of retrieving the knowledge base with the extracted keywords and the question to be answered and selecting the corresponding answer by the preset question-selection rule in the embodiment of the present invention;
Fig. 4 is a refined flow diagram of the step of scoring the candidate answers according to the preset rule and selecting the highest-scoring answer according to the scores of the candidate answers.
The realization of the object, functional characteristics and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
As shown in Fig. 1, Fig. 1 is a schematic structural diagram of the terminal of the hardware running environment involved in the embodiment of the present invention.
The terminal of the embodiment of the present invention may be a PC, or a portable terminal device with a display function such as a smartphone, tablet computer or pocket computer.
As shown in Fig. 1, the terminal may include a processor 1001 such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004 and a memory 1005. The communication bus 1002 realizes the connection and communication between these components. The user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory such as a magnetic disk storage, and may optionally also be a storage device independent of the aforementioned processor 1001.
Those skilled in the art will understand that the terminal structure shown in Fig. 1 does not limit the terminal, which may include more or fewer components than illustrated, combine some components, or arrange the components differently.
As shown in Fig. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module and an answer selection program.
In the terminal shown in Fig. 1, the network interface 1004 is mainly used to connect to a background server for data communication; the user interface 1003 is mainly used to connect to a client (user terminal) for data communication; and the processor 1001 may be used to call the answer selection program stored in the memory 1005 and perform the following operations:
obtaining a preset contextual-information recognition model and session information containing at least a question to be answered;
recognizing the context in the session information according to the recognition model, and determining the relevant contextual information in the session information;
segmenting the relevant contextual information into words, and calculating the TF-IDF value of each word in the relevant contextual information;
extracting a preset number of keywords from the relevant contextual information according to the TF-IDF values;
retrieving a knowledge base with the extracted keywords and the question to be answered, and selecting the corresponding answer by a preset question-selection rule.
Further, the processor 1001 may call the answer selection program stored in the memory 1005 and also perform the following operations:
retrieving the knowledge base with the extracted keywords and the question to be answered to obtain candidate answers;
scoring the candidate answers according to a preset rule, and selecting the highest-scoring answer according to the scores of the candidate answers.
Further, the processor 1001 may call the answer selection program stored in the memory 1005 and also perform the following operations:
building a matching vector between each sentence in the session information and each candidate answer by the word-embedding model WE and the bag-of-words model respectively;
extracting the pooled features of the session information and the candidate answers with the constructed matching vectors and the preset convolutional neural network CNN;
encoding the extracted pooled features with an LSTM to calculate the score of each candidate answer.
Further, the processor 1001 may call the answer selection program stored in the memory 1005 and also perform the following operations:
the formula for building the matching vector between each sentence in the session information and each candidate answer by the word-embedding model WE is as follows:
where e denotes a word embedding, u denotes a sentence in the session information, i denotes the position of a word in that sentence, r denotes a candidate answer, and j denotes the position of a word in the corresponding candidate answer.
Further, the processor 1001 may call the answer selection program stored in the memory 1005 and also perform the following operations:
extracting convolutional features through the dual-channel convolutional layer;
obtaining the pooled features from the convolutional features according to the first preset algorithm;
where the calculation formula of the dual-channel convolutional layer is as follows:
f = 1, 2 indexes the word-embedding channel and the bag-of-words channel respectively, F denotes the number of feature maps, l denotes the layer, and W, b denote parameters;
the first preset algorithm is:
P_W and P_h denote the width and height of the two-dimensional pooling window respectively.
Further, the processor 1001 may call the answer selection program stored in the memory 1005 and also perform the following operations:
encoding the pooled features into a hidden state h by the LSTM;
feeding the hidden state h as input to a parameterization to obtain the corresponding hidden-layer state;
normalizing the hidden-layer state with a regression cost function to obtain the score of the candidate answer.
Further, the processor 1001 may call the answer selection program stored in the memory 1005 and also perform the following operations:
obtaining the hidden state h by the following algorithm:
i_t = σ(W^(i) x_t + U^(i) h_(t-1) + b^(i))
f_t = σ(W^(f) x_t + U^(f) h_(t-1) + b^(f))
o_t = σ(W^(o) x_t + U^(o) h_(t-1) + b^(o))
u_t = tanh(W^(u) x_t + U^(u) h_(t-1) + b^(u))
where f is the forget gate, which decides what information to discard from the cell state; o is the output gate, which decides what value to output; u_t is the candidate update generated from the input; the update to the cell state is produced by the two gates i and f; and c is the new cell state obtained after combining i, f and u_t.
Further, the processor 1001 may call the answer selection program stored in the memory 1005 and also perform the following operations:
taking the last hidden state as the hidden-layer state, with the following calculation formula:
or, obtaining the hidden-layer state as a linear combination of all hidden states, with the following calculation formula:
or, combining all hidden states with an attention mechanism to obtain the hidden-layer state, with the following calculation formula:
Further, the processor 1001 may call the answer selection program stored in the memory 1005 and also perform the following operations:
the regression cost function is:
The specific embodiments of the answer selection device of the present invention are essentially the same as the following embodiments of the answer selection method, and are therefore not repeated here.
Referring to Fig. 2, Fig. 2 is a flow diagram of the first embodiment of the answer selection method of the present invention. The answer selection method includes:
Step S10, obtaining a preset contextual-information recognition model and session information containing at least a question to be answered;
In this embodiment, the current session information is first extracted from a dialogue system between a user and a robot. The current session information includes contextual information and at least one question to be answered, and may of course also include one or more answered questions and their corresponding answers. A preset contextual-information recognition model is then obtained; this model has been trained in advance, and may specifically be trained by the following method:
First, session information between human customer-service agents and users is extracted from a dialogue or customer-service system; the session information includes the users' questions and the agents' answer information. A preset number of the collected sessions, for example 1000 items, are manually labeled as a validation set. The obtained session information is preprocessed according to preset rules, and the classification indicators of the session information are calculated, including: the first information entropy of the session information; the maximum distribution probability of the words in the session information; the average length of the answer information; the ratio of demonstrative pronouns; the proportion of the session information occupied by keywords; and the ratio of part-of-speech categories in the session information. An SVM classifier is then trained on the first information entropy, the maximum distribution probability, the average answer length, the demonstrative-pronoun ratio, the keyword proportion and the part-of-speech ratio together with the validation set; the trained SVM classifier labels the unlabeled sessions to generate a data set; and finally the data set is used as the input of a GRU model to train the recognition model for identifying the contextual information in session information.
Specifically, all session information is segmented into words according to subject, object, verb, etc., obtaining the words in all sessions, and the distribution probability of each word, denoted P_i, is then calculated. The calculation of the word distribution probability is the same as in the prior art and is not repeated here. The calculated word distribution probabilities are then used as the input of the first preset algorithm to calculate the maximum distribution probability of the words, where the first preset algorithm is:
p_i denotes the distribution probability of the i-th word in the session information, P denotes the set of distribution probabilities of the words, and M(P) denotes the maximum distribution probability.
The second information entropy of the answer information in the session information is calculated according to the second preset algorithm, and then normalized with the maximum and minimum information entropies obtained, thereby calculating the first information entropy, where the second preset algorithm is:
E(P) denotes the second information entropy, and entropy denotes the first information entropy.
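The first and second preset algorithms can be sketched in code. This is a minimal illustration under two assumptions, since the patent's own formulas are rendered as images and not reproduced in this text: M(P) is taken to be simply the maximum of the word distribution probabilities, and the first information entropy is taken to be a min-max normalization of the standard Shannon entropy.

```python
import math
from collections import Counter

def word_distribution(tokens):
    """Distribution probability p_i of each word in a tokenized session."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def max_distribution_probability(dist):
    """First preset algorithm (assumed form): M(P) = max_i p_i."""
    return max(dist.values())

def entropy(dist):
    """Second preset algorithm (assumed form): E(P) = -sum_i p_i * log p_i."""
    return -sum(p * math.log(p) for p in dist.values())

def normalized_entropy(e, e_min, e_max):
    """Min-max normalization over the corpus -> first information entropy."""
    if e_max == e_min:
        return 0.0
    return (e - e_min) / (e_max - e_min)
```

For example, in a session where half the tokens are one word, M(P) is 0.5, and a two-word uniform distribution has entropy ln 2 before normalization.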
After the session information is segmented, it is analyzed according to the word-segmentation result to determine the demonstrative pronouns in it (in a specific implementation, the session information may of course also be analyzed first to obtain the demonstrative pronouns directly). The proportion of demonstrative pronouns in the session information is then calculated according to the third preset algorithm, where the third preset algorithm is:
count denotes counting, d denotes a demonstrative pronoun, word denotes a word in each sentence of the session information, and rate_d denotes the proportion of demonstrative pronouns.
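A rough sketch of the third preset algorithm, assuming rate_d is the count of demonstrative pronouns divided by the count of all words; the pronoun list here is a hypothetical English stand-in for the patent's (Chinese) pronoun set:

```python
# Hypothetical demonstrative-pronoun list; the patent's actual list is not given.
DEMONSTRATIVES = {"this", "that", "these", "those", "it"}

def demonstrative_ratio(tokens):
    """Third preset algorithm (assumed): rate_d = count(d) / count(word)."""
    if not tokens:
        return 0.0
    return sum(1 for w in tokens if w in DEMONSTRATIVES) / len(tokens)
```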
Because different customer-service agents may phrase the answer to the same question differently, in this embodiment the multiple answers corresponding to the same question in the session information are processed with the fourth preset algorithm to obtain the average length of the answers to that question, and the result is normalized to [0, 1]. The fourth preset algorithm is:
A_n denotes the length of the n-th answer to the same question, E_i(A) denotes the average length for the i-th question, and Y denotes the length after normalization.
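The fourth preset algorithm can be sketched as an average followed by min-max scaling; the exact normalization the patent uses is not shown in this text, so min-max scaling over all questions is an assumption:

```python
def average_answer_length(answers):
    """E_i(A): mean length over the answers A_1..A_n to the same question."""
    return sum(len(a) for a in answers) / len(answers)

def normalize_lengths(avg_lengths):
    """Map each average length into [0, 1] by min-max scaling (assumed)."""
    lo, hi = min(avg_lengths), max(avg_lengths)
    if hi == lo:
        return [0.0 for _ in avg_lengths]
    return [(x - lo) / (hi - lo) for x in avg_lengths]
```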
Different fields are provided with corresponding keywords. This embodiment first determines the field to which the session information belongs, then selects the keywords of that field from the session information, and calculates the proportion of the selected keywords in the session information according to the fifth preset algorithm, where the fifth preset algorithm is:
k denotes a field keyword, word denotes a word in a sentence, and rate_k denotes the proportion.
According to the analysis of the session information, the part-of-speech categories present in it are determined and counted; the proportion that the part-of-speech categories of each sentence occupy among all part-of-speech categories is then calculated according to the sixth preset algorithm, where the sixth preset algorithm is:
j denotes the number of part-of-speech categories, word denotes a word in a sentence, and rate_j denotes the proportion of each sentence's part-of-speech categories among all categories.
Of course, other contextual-information recognition models may also be used in this embodiment, which is not limited here.
Step S20, recognizing the context in the session information according to the recognition model, and determining the relevant contextual information in the session information;
Step S30, segmenting the relevant contextual information into words, and calculating the TF-IDF value of each word in the relevant contextual information;
Step S40, extracting a preset number of keywords from the relevant contextual information according to the TF-IDF values;
The context in the obtained session information is recognized according to the contextual-information recognition model obtained in step S10, and the relevant contextual information is determined (the remaining information is the irrelevant contextual information). The relevant context is then segmented to determine the words it contains, the TF-IDF value of each word in the relevant context is calculated, and a preset number of keywords, for example 5 (6 or 7 may also be extracted in a specific implementation), are extracted from the relevant context according to the calculated TF-IDF values.
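Steps S30 and S40 can be sketched as follows. The smoothed IDF variant used here is an assumption, since the patent does not give its exact TF-IDF formula; any standard TF-IDF weighting would fit the described steps.

```python
import math
from collections import Counter

def tfidf_keywords(segmented_context, corpus, top_n=5):
    """Rank the words of the relevant context by TF-IDF and keep the top_n.

    segmented_context: list of words from the relevant context (one document).
    corpus: list of word-lists used to estimate document frequency.
    """
    tf = Counter(segmented_context)
    n_docs = len(corpus)
    scores = {}
    for word, count in tf.items():
        df = sum(1 for doc in corpus if word in doc)
        idf = math.log((n_docs + 1) / (df + 1)) + 1  # smoothed IDF (assumed)
        scores[word] = (count / len(segmented_context)) * idf
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]]
```

The preset keyword count from the text (5, or 6-7 in other implementations) maps directly onto `top_n`.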
Step S50, retrieving the knowledge base with the extracted keywords and the question to be answered, and selecting the corresponding answer by the preset question-selection rule.
The preset knowledge base is retrieved according to the extracted keywords and the question raised by the user, and the corresponding answer information is selected from the retrieval results and fed back to the user as the answer.
Further, referring to Fig. 3, step S50 may include:
Step S51, retrieving the knowledge base with the extracted keywords and the question to be answered to obtain candidate answers;
Step S52, scoring the candidate answers according to the preset rule, and selecting the highest-scoring answer according to the scores of the candidate answers.
Since different people understand a question differently, emphasize different things when asking it, and may also emphasize different things when answering it, a question of the same type in the knowledge base may be stored with multiple answers. To improve the user experience, this embodiment retrieves the knowledge base with the extracted keywords and the question to be answered to obtain candidate answers as the answer candidate set, scores the candidate answers in the set according to the preset rule, and feeds the highest-scoring answer back to the user.
It should be noted that the more answered questions and answers the session information contains, the more accurate the determined answer is.
Further, if the context-recognition model does not identify any relevant contextual information, i.e. the contextual information is irrelevant, the corresponding answer is retrieved from the knowledge base directly according to the user's question.
The embodiment of the present invention obtains a preset contextual-information recognition model and session information containing a question to be answered; recognizes the context in the session information according to the recognition model to determine the relevant and irrelevant contextual information; segments the relevant contextual information and calculates the TF-IDF value of each word in it; extracts a preset number of keywords from the relevant contextual information according to the TF-IDF values; and retrieves the knowledge base with the extracted keywords and the question to be answered, selecting the corresponding answer by the preset question-selection rule. In this way, the present invention first identifies the context in the session information, extracts a preset number of keywords by the TF-IDF values of the words in the relevant contextual information, selects the corresponding candidate answers from the knowledge base according to the question and the keywords, and then selects the best answer by the preset CNN-LSTM rule that fuses word embeddings and bag-of-words features, thereby improving the accuracy of answer selection.
Further, referring to Fig. 4, Fig. 4 is a refined flow diagram of the step of scoring the candidate answers according to the preset rule and selecting the highest-scoring answer according to their scores. Based on the above embodiment, step S52 may include:
Step S521, building a matching vector between each sentence in the session information and each candidate answer by the word-embedding model WE and the bag-of-words model respectively;
In this embodiment, the formula for building the matching vector between each sentence in the session information and each candidate answer by the word-embedding model WE is as follows:
where e denotes a word embedding, u denotes a sentence in the session information, i denotes the position of a word in that sentence, r denotes a candidate answer, and j denotes the position of a word in the corresponding candidate answer.
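The word-embedding matching formula itself is rendered as an image in the patent and is not reproduced in this text. One common instantiation consistent with the symbol definitions (e a word embedding, u_i the i-th word of a sentence, r_j the j-th word of a candidate answer) is a dot-product matching matrix; this is an assumption, not necessarily the patent's exact formula:

```latex
M_{i,j} = e(u_i)^{\top}\, e(r_j)
```

Each entry then measures the embedding similarity between one word of the sentence and one word of the candidate answer.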
When the matching vector is built by the bag-of-words model, word order, grammar and syntax are ignored: the text is regarded only as a set of words, each word in the text is independent, and the frequency of each word's occurrence is counted.
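The bag-of-words representation described above can be sketched in a few lines; the fixed vocabulary is an implementation detail the patent leaves open:

```python
from collections import Counter

def bag_of_words_vector(tokens, vocabulary):
    """Order-free representation: count how often each vocabulary word occurs."""
    counts = Counter(tokens)
    return [counts.get(w, 0) for w in vocabulary]
```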
Step S522, extracting the pooled features of the session information and the candidate answers with the constructed matching vectors and the preset convolutional neural network CNN;
In this embodiment, the step of extracting the pooled features of the session information and the candidate answers with the constructed matching vectors and the preset convolutional neural network CNN includes:
extracting convolutional features through the dual-channel convolutional layer;
obtaining the pooled features from the convolutional features according to the first preset algorithm;
where the calculation formula of the dual-channel convolutional layer is as follows:
f = 1, 2 indexes the word-embedding channel and the bag-of-words channel respectively, F denotes the number of feature maps, l denotes the layer, and W, b denote parameters;
the first preset algorithm is:
P_W and P_h denote the width and height of the two-dimensional pooling window respectively.
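The first preset algorithm's formula is not reproduced in this text. The following is a sketch of non-overlapping two-dimensional max pooling with a window of width P_W and height P_h, which matches the symbol description; max pooling is an assumption, as the patent may use a different pooling function:

```python
def max_pool_2d(feature_map, p_w, p_h):
    """Non-overlapping 2D max pooling with a p_w-wide, p_h-high window.

    feature_map: list of rows (list of numbers) from the convolutional layer.
    """
    rows, cols = len(feature_map), len(feature_map[0])
    pooled = []
    for i in range(0, rows - p_h + 1, p_h):
        row = []
        for j in range(0, cols - p_w + 1, p_w):
            window = [feature_map[i + di][j + dj]
                      for di in range(p_h) for dj in range(p_w)]
            row.append(max(window))
        pooled.append(row)
    return pooled
```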
Step S523, computing the score of each candidate answer through LSTM encoding according to the extracted pooled features.
The score of each candidate answer is computed through LSTM encoding according to the above calculation results, where the step of computing the score of a candidate answer through LSTM encoding includes:
encoding the pooled features into a hidden state h through the LSTM;
parameterizing the hidden state h as input to obtain a corresponding hidden-layer state;
normalizing the hidden-layer state through a regression cost function to obtain the score of the candidate answer.
The hidden state h is obtained through the following algorithm:

$$i_t = \sigma(W^{(i)} x_t + U^{(i)} h_{t-1} + b^{(i)})$$
$$f_t = \sigma(W^{(f)} x_t + U^{(f)} h_{t-1} + b^{(f)})$$
$$o_t = \sigma(W^{(o)} x_t + U^{(o)} h_{t-1} + b^{(o)})$$
$$u_t = \tanh(W^{(u)} x_t + U^{(u)} h_{t-1} + b^{(u)})$$
$$c_t = i_t \otimes d_t + f_t \otimes c_{t-1}$$
$$h_t = o_t \otimes \tanh(c_t)$$

where f is the forget gate, which determines what information is discarded from the cell state; o is the output gate, which determines what values are output; U denotes a matrix; d is the update to the state generated from the information in i and f; and c is the new cell state obtained after i, f, and d are updated together.
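A minimal sketch of the LSTM step defined by the equations above, written in scalar form for readability (the patent's W and U are matrices); the parameter values below are hypothetical.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, p):
    # One LSTM step following the equations in the text (scalar form):
    # i_t = sigma(W_i x + U_i h + b_i)   input gate
    # f_t = sigma(W_f x + U_f h + b_f)   forget gate: what to drop from the cell state
    # o_t = sigma(W_o x + U_o h + b_o)   output gate: what values to output
    # d_t = tanh(W_u x + U_u h + b_u)    candidate update (u_t / d_t in the text)
    # c_t = i_t * d_t + f_t * c_prev     new cell state
    # h_t = o_t * tanh(c_t)              hidden state
    i = sigmoid(p["Wi"] * x + p["Ui"] * h_prev + p["bi"])
    f = sigmoid(p["Wf"] * x + p["Uf"] * h_prev + p["bf"])
    o = sigmoid(p["Wo"] * x + p["Uo"] * h_prev + p["bo"])
    d = math.tanh(p["Wu"] * x + p["Uu"] * h_prev + p["bu"])
    c = i * d + f * c_prev
    h = o * math.tanh(c)
    return h, c

# toy parameters (hypothetical): unit weights, zero biases
params = {k: 1.0 for k in ("Wi", "Ui", "Wf", "Uf", "Wo", "Uo", "Wu", "Uu")}
params.update({k: 0.0 for k in ("bi", "bf", "bo", "bu")})
h, c = lstm_step(1.0, 0.0, 0.0, params)
```

Iterating `lstm_step` over the sequence of pooled features yields the hidden states that are then parameterized into the hidden-layer state.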
There are three ways of parameterizing the hidden state h as input to obtain the corresponding hidden-layer state:
1) using the last hidden state as the hidden-layer state;
2) computing the hidden-layer state as a linear combination of all hidden states, with the following formula:

$$L[h'_1, \ldots, h'_n] = \sum_{i=1}^{n} w_i h'_i;$$

3) combining all hidden states through an attention mechanism to compute the hidden-layer state, with the following formulas:

$$t_i = \tanh(W_{1,1} h_{d_i,n_u} + W_{1,2} h'_i + b_1)$$
$$\alpha_i = \frac{\exp(t_i^T t_s)}{\sum_i \exp(t_i^T t_s)}$$
$$L[h'_1, \ldots, h'_n] = \sum_{i=1}^{n} \alpha_i h'_i$$

After the scores of the candidate answers are computed through the regression cost function, the scores are sorted, and the highest-scoring answer is returned as the final result.
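The three hidden-layer parameterization modes and the final ranking step can be sketched as follows; the scalar hidden states, the weights, and the choice of the last attention score as the context t_s are illustrative assumptions, not the patent's specification.

```python
import math

def last_state(hs):
    # Mode 1: use the last hidden state as the hidden-layer state.
    return hs[-1]

def linear_combination(hs, weights):
    # Mode 2: L[h'_1..h'_n] = sum_i w_i * h'_i.
    return sum(w * h for w, h in zip(weights, hs))

def attention_combination(hs, ts):
    # Mode 3: alpha_i = exp(t_i * t_s) / sum_j exp(t_j * t_s), then
    # L = sum_i alpha_i * h'_i (scalar scores t_i against a context t_s,
    # here assumed to be the last score).
    t_s = ts[-1]
    e = [math.exp(t * t_s) for t in ts]
    z = sum(e)
    return sum((ei / z) * h for ei, h in zip(e, hs))

def rank_answers(scores):
    # Sort candidate answers by score and return the index of the best one.
    return max(range(len(scores)), key=lambda i: scores[i])

hs = [0.2, 0.5, 0.9]              # toy hidden states
best = rank_answers([0.1, 0.7, 0.4])  # toy candidate scores
```

With all attention scores equal, the attention weights become uniform and mode 3 reduces to the mean of the hidden states, which is a quick sanity check on the softmax normalization.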
In addition, an embodiment of the present invention further provides a computer-readable storage medium on which an answer selection program is stored, and the answer selection program, when executed by a processor, implements the steps of the answer selection method described above.
The specific embodiments of the computer-readable storage medium of the present invention are substantially the same as the embodiments of the answer selection method described above, and are not repeated here.
It should be noted that, as used herein, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or system that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or system that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, or by hardware, but in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention, or the part that contributes to the prior art, can be embodied in the form of a software product, which is stored in a storage medium as described above (such as ROM/RAM, magnetic disk, optical disc) and includes several instructions to cause a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the invention. Any equivalent structural or flow transformation made using the contents of the specification and accompanying drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (11)
1. An answer selection method, characterized in that the answer selection method comprises the following steps:
obtaining a preset contextual-information identification model and a question to be answered that includes at least session information;
identifying the context in the session information according to the identification model, and determining the relevant contextual information in the session information;
segmenting the relevant contextual information into words, and calculating a term frequency-inverse document frequency (TF-IDF) value of each word in the relevant contextual information;
extracting a preset number of keywords from the relevant contextual information according to the TF-IDF values;
retrieving a knowledge base according to the extracted keywords and the question to be answered, and selecting a corresponding answer according to a preset question selection rule.
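The word segmentation and TF-IDF keyword-extraction steps of claim 1 can be sketched as follows, assuming a toy corpus of tokenized contextual sentences and a standard smoothed TF-IDF formula (the patent does not fix a particular variant).

```python
import math
from collections import Counter

def tfidf_keywords(doc_tokens, corpus, top_k):
    # Score each word in doc_tokens by TF-IDF against `corpus`
    # (a list of token lists) and return the top_k keywords.
    n_docs = len(corpus)
    tf = Counter(doc_tokens)
    scores = {}
    for word, count in tf.items():
        df = sum(1 for d in corpus if word in d)          # document frequency
        idf = math.log((1 + n_docs) / (1 + df)) + 1.0     # smoothed inverse document frequency
        scores[word] = (count / len(doc_tokens)) * idf    # TF * IDF
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])][:top_k]

# toy tokenized contextual corpus (hypothetical)
corpus = [["open", "the", "door"],
          ["close", "the", "window"],
          ["the", "door", "creaks"]]
kws = tfidf_keywords(["the", "door", "creaks", "creaks"], corpus, 2)
```

Words that are frequent in the relevant context but rare across the corpus (here "creaks") rank highest, which is the behavior the keyword-extraction step relies on.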
2. The answer selection method according to claim 1, characterized in that the step of retrieving the knowledge base according to the extracted keywords and the question to be answered and selecting the corresponding answer according to the preset question selection rule includes:
retrieving the knowledge base according to the extracted keywords and the question to be answered to obtain candidate answers;
scoring the candidate answers according to preset rules, and selecting the highest-scoring answer according to the score of each candidate answer.
3. The answer selection method according to claim 2, characterized in that the step of scoring the candidate answers according to the preset rules and selecting the highest-scoring answer according to the score of each candidate answer includes:
building, through a word vector model WE and a bag-of-words model respectively, matching vectors between each sentence in the session information and each candidate answer;
extracting the pooled features of the session information and the candidate answers using the built matching vectors of each sentence and each candidate answer and a preset convolutional neural network CNN;
computing the score of each candidate answer through LSTM encoding according to the extracted pooled features.
4. The answer selection method according to claim 3, characterized in that the formula for building the matching vector between each sentence in the session information and each candidate answer through the word vector model WE is as follows:
$$e_{1,i,j} = e_{u,i}^{T} \cdot e_{r,j},$$
where e denotes a word vector, u denotes a sentence in the session information, i denotes the position of a word within that sentence, r denotes a candidate answer, and j denotes the position of a word within the corresponding candidate answer.
5. The answer selection method according to claim 4, characterized in that the step of extracting the pooled features of the session information and the candidate answers using the built matching vectors of each sentence and each candidate answer and the preset convolutional neural network CNN includes:
extracting convolution features through a dual-channel convolutional layer;
obtaining pooled features from the convolution features according to a first preset algorithm;
where the calculation formula of the dual-channel convolutional layer is as follows:
$$z_{i,j}^{(l,f)} = \sigma\left(\sum_{f'=0}^{F_{l-1}} \sum_{s=0}^{r_w^{(l,f)}} \sum_{t=0}^{r_h^{(l,f)}} W_{s,t}^{(l,f)} \cdot z_{i+s,j+t}^{(l-1,f')} + b^{l,k}\right)$$
f = 1, 2 denotes the word-vector channel and the bag-of-words channel respectively, F denotes the number of feature maps, l denotes the layer index, and W, b denote parameters;
the first preset algorithm is:
$$z_{i,j}^{(l,f)} = \max_{P_w^{(l,f)} > s \ge 0,\; P_h^{(l,f)} > t \ge 0} z_{i+s,j+t},$$
where $P_w$ and $P_h$ denote the width and height of the two-dimensional pooling window respectively.
6. The answer selection method according to claim 5, characterized in that the step of computing the score of each candidate answer through LSTM encoding according to the extracted pooled features includes:
encoding the pooled features into a hidden state h through the LSTM;
parameterizing the hidden state h as input to obtain a corresponding hidden-layer state;
normalizing the hidden-layer state through a regression cost function to obtain the score of the candidate answer.
7. The answer selection method according to claim 6, characterized in that the step of encoding the pooled features into the hidden state h through the LSTM includes:
obtaining the hidden state h through the following algorithm:
$$i_t = \sigma(W^{(i)} x_t + U^{(i)} h_{t-1} + b^{(i)})$$
$$f_t = \sigma(W^{(f)} x_t + U^{(f)} h_{t-1} + b^{(f)})$$
$$o_t = \sigma(W^{(o)} x_t + U^{(o)} h_{t-1} + b^{(o)})$$
$$u_t = \tanh(W^{(u)} x_t + U^{(u)} h_{t-1} + b^{(u)})$$
$$c_t = i_t \otimes d_t + f_t \otimes c_{t-1}$$
$$h_t = o_t \otimes \tanh(c_t)$$
where f is the forget gate, which determines what information is discarded from the cell state; o is the output gate, which determines what values are output; U denotes a matrix; d is the update to the state generated from the information in i and f; and c is the new cell state obtained after i, f, and d are updated together.
8. The answer selection method according to claim 6, characterized in that the step of parameterizing the hidden state h as input to obtain the corresponding hidden-layer state includes:
using the last hidden state as the hidden-layer state;
or computing the hidden-layer state as a linear combination of all hidden states, with the following formula:
$$L[h'_1, \ldots, h'_n] = \sum_{i=1}^{n} w_i h'_i;$$
or combining all hidden states through an attention mechanism to compute the hidden-layer state, with the following formulas:
$$t_i = \tanh(W_{1,1} h_{d_i,n_u} + W_{1,2} h'_i + b_1)$$
$$\alpha_i = \frac{\exp(t_i^T t_s)}{\sum_i \exp(t_i^T t_s)}$$
$$L[h'_1, \ldots, h'_n] = \sum_{i=1}^{n} \alpha_i h'_i.$$
9. The answer selection method according to claim 6, characterized in that the regression cost function is:
10. An answer selection device, characterized in that the answer selection device includes: a memory, a processor, and a computer program stored on the memory and runnable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method according to any one of claims 1 to 9.
11. A computer-readable storage medium, characterized in that an answer selection program is stored on the computer-readable storage medium, and the answer selection program, when executed by a processor, implements the steps of the answer selection method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711363369.9A CN108038209A (en) | 2017-12-18 | 2017-12-18 | Answer system of selection, device and computer-readable recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108038209A true CN108038209A (en) | 2018-05-15 |
Family
ID=62099706
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711363369.9A Pending CN108038209A (en) | 2017-12-18 | 2017-12-18 | Answer system of selection, device and computer-readable recording medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108038209A (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104731771A (en) * | 2015-03-27 | 2015-06-24 | 大连理工大学 | Term vector-based abbreviation ambiguity elimination system and method |
CN106448670A (en) * | 2016-10-21 | 2017-02-22 | 竹间智能科技(上海)有限公司 | Dialogue automatic reply system based on deep learning and reinforcement learning |
CN106776828A (en) * | 2016-11-24 | 2017-05-31 | 竹间智能科技(上海)有限公司 | For keeping conversational system to talk with the method and system of continuity |
CN106844368A (en) * | 2015-12-03 | 2017-06-13 | 华为技术有限公司 | For interactive method, nerve network system and user equipment |
CN106886516A (en) * | 2017-02-27 | 2017-06-23 | 竹间智能科技(上海)有限公司 | The method and device of automatic identification statement relationship and entity |
CN107220506A (en) * | 2017-06-05 | 2017-09-29 | 东华大学 | Breast cancer risk assessment analysis system based on depth convolutional neural networks |
CN107229684A (en) * | 2017-05-11 | 2017-10-03 | 合肥美的智能科技有限公司 | Statement classification method, system, electronic equipment, refrigerator and storage medium |
CN107463699A (en) * | 2017-08-15 | 2017-12-12 | 济南浪潮高新科技投资发展有限公司 | A kind of method for realizing question and answer robot based on seq2seq models |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110750986A (en) * | 2018-07-04 | 2020-02-04 | 普天信息技术有限公司 | Neural network word segmentation system and training method based on minimum information entropy |
CN110750986B (en) * | 2018-07-04 | 2023-10-10 | 普天信息技术有限公司 | Neural network word segmentation system and training method based on minimum information entropy |
CN108984475A (en) * | 2018-07-06 | 2018-12-11 | 北京慧闻科技发展有限公司 | Answer selection method, device and electronic equipment based on holographic neural network |
CN110851574A (en) * | 2018-07-27 | 2020-02-28 | 北京京东尚科信息技术有限公司 | Statement processing method, device and system |
CN109165285A (en) * | 2018-08-24 | 2019-01-08 | 北京小米智能科技有限公司 | Handle the method, apparatus and storage medium of multi-medium data |
CN109325132A (en) * | 2018-12-11 | 2019-02-12 | 平安科技(深圳)有限公司 | Expertise recommended method, device, computer equipment and storage medium |
CN110263160B (en) * | 2019-05-29 | 2021-04-02 | 中国电子科技集团公司第二十八研究所 | Question classification method in computer question-answering system |
CN110263160A (en) * | 2019-05-29 | 2019-09-20 | 中国电子科技集团公司第二十八研究所 | A kind of Question Classification method in computer question answering system |
CN110232118A (en) * | 2019-08-08 | 2019-09-13 | 中山大学 | A kind of novel answer preference pattern based on GRU attention mechanism |
CN110597971A (en) * | 2019-08-22 | 2019-12-20 | 卓尔智联(武汉)研究院有限公司 | Automatic question answering device and method based on neural network and readable storage medium |
CN111144546A (en) * | 2019-10-31 | 2020-05-12 | 平安科技(深圳)有限公司 | Scoring method and device, electronic equipment and storage medium |
WO2021082861A1 (en) * | 2019-10-31 | 2021-05-06 | 平安科技(深圳)有限公司 | Scoring method and apparatus, electronic device, and storage medium |
CN111144546B (en) * | 2019-10-31 | 2024-01-02 | 平安创科科技(北京)有限公司 | Scoring method, scoring device, electronic equipment and storage medium |
CN110852119A (en) * | 2019-11-11 | 2020-02-28 | 广州点动信息科技股份有限公司 | Automatic customer service system for electronic commerce |
CN111966782A (en) * | 2020-06-29 | 2020-11-20 | 百度在线网络技术(北京)有限公司 | Retrieval method and device for multi-turn conversations, storage medium and electronic equipment |
CN111966782B (en) * | 2020-06-29 | 2023-12-12 | 百度在线网络技术(北京)有限公司 | Multi-round dialogue retrieval method and device, storage medium and electronic equipment |
US11947578B2 (en) | 2020-06-29 | 2024-04-02 | Baidu Online Network Technology (Beijing) Co., Ltd. | Method for retrieving multi-turn dialogue, storage medium, and electronic device |
CN113537206A (en) * | 2020-07-31 | 2021-10-22 | 腾讯科技(深圳)有限公司 | Pushed data detection method and device, computer equipment and storage medium |
CN113537206B (en) * | 2020-07-31 | 2023-11-10 | 腾讯科技(深圳)有限公司 | Push data detection method, push data detection device, computer equipment and storage medium |
CN115577089A (en) * | 2022-11-24 | 2023-01-06 | 零犀(北京)科技有限公司 | Method, device, equipment and storage medium for optimizing nodes in conversation process |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108038209A (en) | Answer system of selection, device and computer-readable recording medium | |
KR102173553B1 (en) | An active and Customized exercise system using deep learning technology | |
CN108038208A (en) | Training method, device and the storage medium of contextual information identification model | |
CN104615608B (en) | A kind of data mining processing system and method | |
CN106448670A (en) | Dialogue automatic reply system based on deep learning and reinforcement learning | |
CN109831572A (en) | Chat picture control method, device, computer equipment and storage medium | |
CN106844530A (en) | Training method and device of a kind of question and answer to disaggregated model | |
CN108170739A (en) | Problem matching process, terminal and computer readable storage medium | |
CN110188351A (en) | The training method and device of sentence smoothness degree and syntactic score model | |
CN109308319B (en) | Text classification method, text classification device and computer readable storage medium | |
CN108549658A (en) | A kind of deep learning video answering method and system based on the upper attention mechanism of syntactic analysis tree | |
CN104750674A (en) | Man-machine conversation satisfaction degree prediction method and system | |
CN110390107B (en) | Context relation detection method and device based on artificial intelligence and computer equipment | |
CN111309887B (en) | Method and system for training text key content extraction model | |
CN108182001A (en) | Input error correction method and device, storage medium, electronic equipment | |
CN110222328B (en) | Method, device and equipment for labeling participles and parts of speech based on neural network and storage medium | |
CN110321558A (en) | A kind of anti-cheat method and relevant device based on natural semantic understanding | |
WO2020135642A1 (en) | Model training method and apparatus employing generative adversarial network | |
CN111694937A (en) | Interviewing method and device based on artificial intelligence, computer equipment and storage medium | |
CN106909573A (en) | A kind of method and apparatus for evaluating question and answer to quality | |
CN110309114A (en) | Processing method, device, storage medium and the electronic device of media information | |
CN109857909A (en) | The method that more granularity convolution solve video conversation task from attention context network | |
CN111767394A (en) | Abstract extraction method and device based on artificial intelligence expert system | |
CN108112044A (en) | A kind of selecting method for isomeric wireless network based on Normal Fuzzy-number | |
CN104008301B (en) | A kind of field concept hierarchical structure method for auto constructing |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | | Application publication date: 20180515 |