CN108763535A - Information acquisition method and device - Google Patents

Information acquisition method and device

Info

Publication number
CN108763535A
CN108763535A
Authority
CN
China
Prior art keywords
text
matrix
answer
key content
sentence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810550870.4A
Other languages
Chinese (zh)
Other versions
CN108763535B (en)
Inventor
崔鸣
崔一鸣
马文涛
陈致鹏
何苏
王士进
胡国平
刘挺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iFlytek Co Ltd
Priority to CN201810550870.4A
Publication of CN108763535A
Application granted
Publication of CN108763535B
Active legal status
Anticipated expiration

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Machine Translation (AREA)

Abstract

An embodiment of the present invention provides an information acquisition method and device in the field of natural language processing. The method includes: inputting a query text and an answer text matched to the query text into a key content computation model, outputting the key content in the answer text, and using the key content as the answer to the query text. The key content computation model is obtained by training on sample query texts, sample answer texts, and the sample key content in the sample answer texts, where the sample key content is the answer to the sample query text. Because the answer content is obtained for the query text itself, from the answer text matched to the query text, rather than by retrieving a similar question and returning that similar question's answer, the reliability and accuracy of the returned answer can be improved, and the user's experience in question-and-answer interaction with a device is improved.

Description

Information acquisition method and device
Technical field
Embodiments of the present invention relate to the field of natural language processing, and in particular to an information acquisition method and device.
Background art
In recent years, with the development of disciplines related to artificial intelligence, especially computational linguistics, various question answering systems and dialogue robots have emerged, and people can obtain the information they need by communicating with a device in natural language. In the related art, a similar question to the user's current question is typically retrieved from a database, and the answer corresponding to that similar question is returned as the answer to the current question. Because the current question and the similar question may not match exactly, the reliability of the returned answer is low.
Summary of the invention
To solve the above problems, embodiments of the present invention provide an information acquisition method and device that overcome, or at least partly solve, the above problems.
According to a first aspect of the embodiments of the present invention, an information acquisition method is provided. The method includes:
inputting a query text and an answer text matched to the query text into a key content computation model, outputting the key content in the answer text, and using the key content as the answer to the query text;
wherein the key content computation model is obtained by training on a sample query text, a sample answer text, and the sample key content in the sample answer text, the sample key content being the answer to the sample query text.
In the method provided by the embodiment of the present invention, the query text and the answer text are input into the key content computation model, the key content in the answer text is output, and the key content is used as the answer to the query text. Because the answer content is obtained for the query text itself, from the answer text matched to the query text, rather than by retrieving a similar question and returning that similar question's answer, the reliability and accuracy of the returned answer can be improved, and the user's experience in question-and-answer interaction with a device is improved.
According to a second aspect of the embodiments of the present invention, an information acquisition device is provided. The device includes:
an information acquisition module, configured to input a query text and an answer text matched to the query text into a key content computation model, output the key content in the answer text, and use the key content as the answer to the query text;
wherein the key content computation model is obtained by training on a sample query text, a sample answer text, and the sample key content in the sample answer text, the sample key content being the answer to the sample query text.
According to a third aspect of the embodiments of the present invention, an information acquisition apparatus is provided, including:
at least one processor; and
at least one memory communicatively connected to the processor, wherein:
the memory stores program instructions executable by the processor, and the processor, by calling the program instructions, is able to perform the information acquisition method provided by any possible implementation of the first aspect.
According to a fourth aspect of the present invention, a non-transitory computer-readable storage medium is provided. The non-transitory computer-readable storage medium stores computer instructions that cause a computer to perform the information acquisition method provided by any possible implementation of the first aspect.
It should be understood that the above general description and the following detailed description are exemplary and explanatory, and do not limit the embodiments of the present invention.
Description of the drawings
Fig. 1 is a flow diagram of an information acquisition method according to an embodiment of the present invention;
Fig. 2 is a flow diagram of an information acquisition method according to an embodiment of the present invention;
Fig. 3 is a flow diagram of a method for obtaining the word-level text representation matrix and the sentence-level text representation matrix according to an embodiment of the present invention;
Fig. 4 is a flow diagram of a method for obtaining the sentence-level text representation matrix according to an embodiment of the present invention;
Fig. 5 is a flow diagram of a method for obtaining the information association matrix according to an embodiment of the present invention;
Fig. 6 is a flow diagram of a method for obtaining the key content according to an embodiment of the present invention;
Fig. 7 is a flow diagram of a method for obtaining the key content according to an embodiment of the present invention;
Fig. 8 is a flow diagram of an information acquisition method according to an embodiment of the present invention;
Fig. 9 is a block diagram of an information acquisition device according to an embodiment of the present invention.
Detailed description of embodiments
The detailed implementation of the embodiments of the present invention is further described below with reference to the accompanying drawings and examples. The following examples are intended to illustrate, not to limit, the scope of the embodiments of the present invention.
At present, people can obtain the information they need by communicating with a device in natural language. In the related art, a similar question to the user's current question is typically retrieved from a database, and the answer corresponding to that similar question is returned as the answer to the current question. Because the current question and the similar question may not match exactly, the reliability of the returned answer is low. In view of this, an embodiment of the present invention provides an information acquisition method. The method can be used in intelligent question answering scenarios, and also in other scenarios that require an intelligent question answering function, such as driving scenarios and shopping scenarios; the embodiment of the present invention does not specifically limit this. The method can be executed by different devices, which the embodiment of the present invention likewise does not specifically limit. For example, if the method is used in a driving scenario, it may be executed by an in-vehicle device; if the method is used in a shopping scenario, it may be executed by a mobile terminal. Specifically, the method includes: inputting a query text and an answer text matched to the query text into a key content computation model, outputting the key content in the answer text, and using the key content as the answer to the query text.
Before the above process is executed, the speech data of the user's question may first be acquired, and speech recognition performed on the speech data to obtain the query text; alternatively, text input by the user may be acquired directly and used as the query text; the embodiment of the present invention does not specifically limit this. The answer text matched to the query text may contain the answer content to the question posed by the query text. Specifically, if the query text asks how a certain function of a product is used, the answer text matched to the query text may be the product's instruction manual. Further, considering that a product usually has multiple functions and that the product's instruction manual records the operation instructions for all of the product's functions, the manual may be split in advance into several structured texts, one per function; if the query text asks about a certain function of the product, the answer text matched to the query text may then be the structured text corresponding to that function. If the query text asks for the definition of a technical term, the answer text may be a technical dictionary containing the definition of that term.
Because the answer text may contain redundant content unrelated to the question posed by the query text, the key content in the answer text is output based on the key content computation model. The key content may be the answer content to the question posed by the query text, or it may be the sentence indices of the answer content within the answer text; the embodiment of the present invention does not specifically limit this.
In addition, before the above process is executed, the key content computation model may be trained in advance, specifically as follows. First, a large number of sample query texts and the sample answer texts matched to them are collected; the sample key content in each sample answer text is predetermined and is the answer content to the question posed by the sample query text. An initial model is then trained on the sample query texts, sample answer texts, and sample key content to obtain the key content computation model. The initial model may be a single neural network model or a combination of multiple neural network models; the embodiment of the present invention does not specifically limit the type or structure of the initial model.
In the method provided by the embodiment of the present invention, the query text and the answer text are input into the key content computation model, the key content in the answer text is output, and the key content is used as the answer to the query text. Because the answer content is obtained for the query text itself, from the answer text matched to the query text, rather than by retrieving a similar question and returning that similar question's answer, the reliability and accuracy of the returned answer can be improved, and the user's experience in question-and-answer interaction with a device is improved.
From the above embodiment, the key content computation model may be a single neural network model or a combination of multiple neural network models. Taking the case where the key content computation model is a combination of multiple neural network models as an example, the process by which the key content computation model outputs the key content in the answer text is now explained. Accordingly, based on the above embodiment, as an optional embodiment, the embodiment of the present invention does not specifically limit the manner of inputting the query text and the matched answer text into the key content computation model and outputting the key content in the answer text. Referring to Fig. 1, it includes, but is not limited to, the following steps.
101. Input the query text and the answer text separately into the text representation layer of the key content computation model, and output the word-level text representation matrix corresponding to the query text and the sentence-level text representation matrix corresponding to the answer text.
The text representation layer is mainly used to learn the word-level representation of the question posed by the query text and the sentence-level document representation of the answer text. The text representation layer may be a single neural network model, such as a bidirectional long short-term memory (BiLSTM) network, or a combination of multiple neural network models, such as a combination of a BiLSTM network and a convolutional neural network; the embodiment of the present invention does not specifically limit the type or structure of the neural network models used by the text representation layer. The word-level text representation matrix corresponding to the query text is composed of the word vector of each word in the query text: its number of rows equals the number of words in the query text, and each row is the word vector of one word in the query text. The sentence-level text representation matrix corresponding to the answer text is composed of the sentence vector of each sentence in the answer text: its number of rows equals the number of sentences in the answer text, and each row is the sentence vector of one sentence in the answer text.
102. Input the sentence-level text representation matrix into the context representation layer of the key content computation model, and output the context representation matrix corresponding to the answer text.
The context representation layer is mainly used to learn the sentence-level contextual representation of the answer text. The context representation layer may be an LSTM network or a BiLSTM network; the embodiment of the present invention does not specifically limit the type of neural network model used by the context representation layer. The context representation matrix is composed of a context-aware vector for each sentence in the answer text: its number of rows equals the number of sentences in the answer text, and each row is the context-aware vector of one sentence in the answer text.
103. Input the word-level text representation matrix and the context representation matrix into the attention layer of the key content computation model, and output the information association matrix between the query text and the answer text.
The attention layer is mainly used to weight the word-level question representation and merge it into the sentence-level contextual representation. Specifically, the attention layer first computes an attention representation of the question based on the answer text, that is, it determines the correlation between each word in the query text and each sentence in the answer text, and then combines the determined correlations with the word-level text representation matrix and the context representation matrix to obtain the information association matrix between the query text and the answer text. The information association matrix is composed of a question-aware attention vector for each sentence in the answer text: its number of rows equals the number of sentences in the answer text, its number of columns equals the sum of the numbers of columns of the word-level text representation matrix and the context representation matrix, and each row is the question-aware attention vector of one sentence in the answer text.
104. Input the information association matrix into the output layer of the key content computation model, and output the key content in the answer text.
It should be noted that steps 101 to 104 involve the text representation layer, the context representation layer, the attention layer, and the output layer; that is, the key content computation model may be composed of these four layers. For the specific layered structure, reference may be made to Fig. 2.
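The four-layer pipeline of steps 101 to 104 can be sketched at the level of matrix shapes. The following numpy sketch uses assumed toy dimensions (n query words, m answer sentences, hidden size k) and random placeholders in place of the learned BiLSTM/CNN layers, and a raw dot product stands in for the cosine similarity used later in step 501; it only checks how the matrices described above fit together, not the actual trained model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 6, 4, 8  # query words, answer sentences, hidden size (assumed)

# Step 101: text representation layer (placeholders for BiLSTM / CNN outputs).
Q = rng.standard_normal((n, k))   # word-level text representation matrix, n x k
S = rng.standard_normal((m, k))   # sentence-level text representation matrix, m x k

# Step 102: context representation layer (placeholder for an LSTM over sentences).
C = rng.standard_normal((m, k))   # context representation matrix, m x k

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Step 103: attention layer (dot product standing in for cosine similarity).
corr = C @ Q.T                    # correlation matrix, m x n
A = softmax(corr, axis=1)         # attention representation matrix, m x n
word_att = A @ Q                  # word-level attention representation matrix, m x k
H = np.concatenate([word_att, C], axis=1)  # information association matrix, m x 2k

# Step 104: output layer (weighted-sum dimension reduction + softmax).
w = rng.standard_normal(2 * k)
p_start = softmax(H @ w)          # start probability per sentence

assert H.shape == (m, 2 * k)
assert np.isclose(p_start.sum(), 1.0)
```

The shapes confirm the description: the information association matrix has one row per answer sentence and twice the hidden width, and the output layer turns it into a probability distribution over sentences.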
In the method provided by the embodiment of the present invention, the key content in the answer text is determined based on the key content computation model from several perspectives, such as the words of the query text, the sentences and context of the answer text, and the correlations among them, and the key content is used as the answer to the query text, so the reliability and accuracy of the returned answer can be improved, and the user's experience in question-and-answer interaction with a device is improved.
Based on the above embodiment, as an optional embodiment, the text representation layer includes a word-level representation layer and a sentence-level representation layer. Accordingly, the embodiment of the present invention does not specifically limit the manner of inputting the query text and the answer text separately into the text representation layer of the key content computation model and outputting the word-level text representation matrix corresponding to the query text and the sentence-level text representation matrix corresponding to the answer text. Referring to Fig. 3, it specifically includes:
301. Input the query text into the word-level representation layer, and output the word-level text representation matrix.
The word-level representation layer is used to learn the word-level representation of the question posed by the query text, realizing word-level modeling of the query text. The neural network model used by the word-level representation layer may be a BiLSTM network; the embodiment of the present invention does not specifically limit this. The word-level text representation matrix corresponding to the query text is composed of the word vector of each word in the query text: its number of rows equals the number of words in the query text, and each row is the word vector of one word in the query text. Taking a BiLSTM network with k nodes as the word-level representation layer, if the query text contains n words, the word-level text representation matrix has size n × k.
In addition, before the query text is input into the word-level representation layer, the query text may be vectorized, that is, each word in the query text is converted into its word vector, and the word vectors of the words are then input into the word-level representation layer. The word vector of each word in the query text may be looked up in a word vector dictionary; alternatively, each word may first be split into several characters, the character vector of each character looked up in a character vector dictionary, and convolution and pooling then applied to all character vectors of each word to obtain the word vector of that word. Of course, the word vector of each word may also be obtained directly with a word vector tool such as word2vec; the embodiment of the present invention does not specifically limit this.
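The character-based alternative above can be sketched as follows: look up a vector per character, slide convolution kernels over the character sequence, and max-pool over positions to get a fixed-length word vector. The character dictionary, kernel count, and window width below are all assumed toy values, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(1)
char_dim, n_kernels, width = 4, 6, 2  # assumed sizes

# Hypothetical character-vector dictionary: one char_dim vector per character.
char_vectors = {ch: rng.standard_normal(char_dim) for ch in "abcdefgh"}
kernels = rng.standard_normal((n_kernels, width, char_dim))  # convolution kernels

def word_vector(word):
    """Character vectors -> 1-D convolution -> max pooling over positions."""
    chars = np.stack([char_vectors[ch] for ch in word])  # len(word) x char_dim
    windows = np.stack([chars[i:i + width]
                        for i in range(len(word) - width + 1)])
    feats = np.einsum('wxd,kxd->wk', windows, kernels)   # windows x kernels
    return feats.max(axis=0)                             # one value per kernel

v = word_vector("abcd")
assert v.shape == (n_kernels,)
```

Whatever the word's length, the result has one dimension per convolution kernel, so every word vector has the same size.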
302. Input the answer text into the sentence-level representation layer, and output the sentence-level text representation matrix corresponding to the answer text.
The sentence-level representation layer is used to learn the sentence-level document representation of the answer text, realizing sentence-level modeling of the answer text. The neural network model used by the sentence-level representation layer may be a convolutional neural network; the embodiment of the present invention does not specifically limit this. The sentence-level text representation matrix corresponding to the answer text is composed of the sentence vector of each sentence in the answer text: its number of rows equals the number of sentences in the answer text, and each row is the sentence vector of one sentence in the answer text.
In addition, before the answer text is input into the sentence-level representation layer, the answer text may be vectorized, that is, each sentence in the answer text is converted into its sentence vector, and the sentence vectors are then input into the sentence-level representation layer. The word vector of each word in each sentence may be looked up in a word vector dictionary and the word vectors then combined to obtain the sentence vector of that sentence; alternatively, the word vector of each word in the sentence may be obtained directly with a word vector tool such as word2vec and then combined; the embodiment of the present invention does not specifically limit this.
It should be noted that the above process executes step 301 first and then step 302, that is, the query text is processed first and the answer text second. In actual implementation, step 302 may also be executed first and step 301 second; the embodiment of the present invention does not limit the execution order of steps 301 and 302.
In the method provided by the embodiment of the present invention, word-level modeling is performed on the query text and sentence-level modeling is performed on the answer text, which helps improve the computational accuracy of the key content computation model, and in turn can improve the reliability and accuracy of the returned answer and the user's experience in question-and-answer interaction with a device.
Based on the above embodiment, as an optional embodiment, the embodiment of the present invention does not specifically limit the manner of inputting the answer text into the sentence-level representation layer and outputting the sentence-level text representation matrix corresponding to the answer text. Referring to Fig. 4, it specifically includes:
401. Input the sentence vector corresponding to each sentence in the answer text into the convolutional layer, and output the sentence representation matrix corresponding to each sentence.
The sentence representation matrices of all sentences have the same size, and the number of columns of a sentence representation matrix is the maximum sentence length in the answer text. Taking a convolutional neural network with k convolution kernels as the convolutional layer, and assuming the maximum sentence length in the answer text is n, the size of each sentence representation matrix is n × k.
402. Input the sentence representation matrix corresponding to each sentence into the pooling layer, and output the sentence-level representation vector corresponding to each sentence.
The pooling layer is used to perform a pooling computation on the sentence representation matrix of each sentence. Specifically, for the sentence representation matrix of any sentence, the maximum value of each column of the matrix is selected, and the column maxima are arranged in order to form the sentence-level representation vector of that sentence. If the sentence representation matrix has k columns, each sentence-level representation vector has length k.
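The column-wise max pooling of step 402 can be sketched in a couple of lines of numpy; the matrix below is a toy sentence representation matrix with n = 2 and k = 3.

```python
import numpy as np

# Toy sentence representation matrix for one sentence (n x k).
M = np.array([[1.0, 5.0, 3.0],
              [4.0, 2.0, 6.0]])

# Column-wise max pooling: the maximum of each column, in order,
# forms the sentence-level representation vector of length k.
sent_vec = M.max(axis=0)
assert sent_vec.tolist() == [4.0, 5.0, 6.0]
```

Because the maximum is taken per column, the result always has length k regardless of the sentence length n.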
403. Combine the sentence-level representation vectors of all sentences in the answer text to obtain the sentence-level text representation matrix.
If the answer text contains m sentences and each sentence-level representation vector has length k, combining the sentence-level representation vectors of all sentences in the answer text yields a sentence-level text representation matrix of size m × k. In addition, from the above process, the sentence-level representation layer may include a convolutional layer and a pooling layer. Of course, the sentence-level representation layer may also be composed of various other layers, or contain only a single layer; the embodiment of the present invention does not specifically limit this.
Based on the above embodiment, as an optional embodiment, the embodiment of the present invention does not specifically limit the manner of inputting the word-level text representation matrix and the context representation matrix into the attention layer of the key content computation model and outputting the information association matrix between the query text and the answer text. Referring to Fig. 5, it specifically includes:
501. Obtain a correlation matrix based on the word-level text representation matrix and the context representation matrix, and perform a softmax computation on the correlation matrix to obtain the attention representation matrix.
The correlation matrix is used to characterize the similarity between the sentence vector of each sentence in the answer text and the word vector of each word in the query text. There are many methods for computing this similarity, such as the Euclidean distance algorithm, the Manhattan distance algorithm, and the Minkowski distance algorithm; the embodiment of the present invention does not specifically limit this.
Preferably, the embodiment of the present invention computes the cosine similarity as the similarity between the sentence vector of each sentence in the answer text and the word vector of each word in the query text:
s_ij = cosine(c_i, q_j)
where s_ij is the element in row i, column j of the correlation matrix, representing the similarity between the sentence vector c_i of the i-th sentence in the answer text and the word vector q_j of the j-th word in the query text.
The attention representation matrix is used to characterize the normalized degree of correlation between the sentence vector of each sentence in the answer text and the word vector of each word in the query text. If the answer text contains m sentences and the query text contains n words, both the attention representation matrix and the correlation matrix have size m × n.
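Step 501 can be sketched directly from the formula above: compute s_ij = cosine(c_i, q_j) for every sentence/word pair, then normalize with a softmax. The patent does not state the softmax axis; the sketch below assumes it is applied per answer sentence (row-wise), and uses random toy vectors.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, k = 3, 5, 4  # answer sentences, query words, vector size (assumed)

C = rng.standard_normal((m, k))  # sentence vectors c_i of the answer text
Q = rng.standard_normal((n, k))  # word vectors q_j of the query text

# Correlation matrix: s_ij = cosine(c_i, q_j), size m x n.
norms = np.linalg.norm(C, axis=1)[:, None] * np.linalg.norm(Q, axis=1)[None, :]
S = (C @ Q.T) / norms

# Row-wise softmax gives the attention representation matrix (assumed axis).
E = np.exp(S - S.max(axis=1, keepdims=True))
A = E / E.sum(axis=1, keepdims=True)

assert S.shape == (m, n) and A.shape == (m, n)
assert np.allclose(A.sum(axis=1), 1.0)
```

Each row of A then distributes one unit of attention over the query words, matching the "normalized degree of correlation" described above.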
502. Multiply the attention representation matrix by the word-level text representation matrix to obtain the word-level attention representation matrix, and concatenate the word-level attention representation matrix with the context representation matrix to obtain the information association matrix.
The word-level attention representation matrix is the word-level text representation matrix of the query text weighted by the attention mechanism: if the word-level text representation matrix has size n × k and the attention representation matrix has size m × n, the word-level attention representation matrix has size m × k. When concatenating the word-level attention representation matrix with the context representation matrix, the word-level attention representation matrix may be placed before or after the context representation matrix; the embodiment of the present invention does not specifically limit this. The information association matrix is composed of a question-aware attention vector for each sentence in the answer text: its number of rows equals the number of sentences in the answer text, its number of columns equals the sum of the numbers of columns of the word-level text representation matrix and the context representation matrix, and each row is the question-aware attention vector of one sentence in the answer text. Assuming the context representation matrix has size m × k, the information association matrix has size m × 2k.
Based on the above embodiment, as an optional embodiment, the embodiment of the present invention does not specifically limit the manner of inputting the information association matrix into the output layer of the key content computation model and outputting the key content in the answer text. Referring to Fig. 6, it specifically includes:
601. Reduce the dimension of the row vector corresponding to each sentence of the answer text in the information association matrix to obtain the first output constant corresponding to each sentence, and perform a softmax computation on the first output constants to obtain, for each sentence, the start probability that the sentence is the start sentence of the key content.
The row vector corresponding to each clause of the answer text in the information association matrix is the question-aware attention vector of that clause, of length 2k. There are many methods for reducing the dimensionality of a row vector, such as principal component analysis, LDA, and locally linear embedding; the embodiment of the present invention does not specifically limit this. Preferably, dimensionality reduction may be performed by weighting each element of the row vector and summing, which yields the first output constant. Step 601 can be expressed by the following formula:
Pstart-i = softmax(w * hi);
where Pstart-i is the start probability of the i-th clause of the answer text as the start sentence of the key content, w is a set of weights, and hi is the row vector corresponding to the i-th clause of the answer text in the information association matrix.
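Under the preferred weighted-sum reduction, step 601 amounts to the following sketch (numpy; the random weight vector is a stand-in for the learned w):

```python
import numpy as np

np.random.seed(1)
m, k = 8, 4
H = np.random.randn(m, 2 * k)   # information association matrix (m x 2k)
w = np.random.randn(2 * k)      # hypothetical learned weights for the weighted sum

# Weighted sum of each row vector -> first output constant per clause;
# softmax over the m constants -> start probability of each clause.
z = H @ w                       # (m,)
p_start = np.exp(z) / np.exp(z).sum()
print(round(p_start.sum(), 6))  # 1.0
```

Note that the softmax normalizes across the m clauses, so the start probabilities sum to 1.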
602. Multiply the start probability corresponding to each clause with the information association matrix, concatenate the matrix obtained after multiplication with the information association matrix to obtain a concatenated matrix, perform dimensionality reduction on the row vector corresponding to each clause of the answer text in the concatenated matrix to obtain a second output constant corresponding to each clause, and perform softmax calculation on the second output constants to obtain the end probability of each clause as the end sentence of the key content.
It should be noted that when the matrix obtained after multiplication is concatenated with the information association matrix, it may be placed either before or after the information association matrix; the embodiment of the present invention does not specifically limit this. The matrix obtained after multiplication is of size m × 2k, and the concatenated matrix is of size m × 4k. There are many methods for reducing the dimensionality of the row vectors of the concatenated matrix, such as principal component analysis, LDA, and locally linear embedding; the embodiment of the present invention does not specifically limit this.
Preferably, dimensionality reduction may be performed by weighting each element of the row vector and summing, which yields the second output constant. In step 602, the end probability of each clause as the end sentence of the key content, obtained from the concatenated matrix, can be expressed by the following formula:
Pend-i = softmax(w' * h'i);
where Pend-i is the end probability of the i-th clause of the answer text as the end sentence of the key content, w' is a set of weights, and h'i is the row vector corresponding to the i-th clause of the answer text in the concatenated matrix.
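One reading of "multiplying" in step 602 that reproduces the stated m × 2k and m × 4k sizes is to scale each row of the information association matrix by its clause's start probability before concatenation. A sketch under that assumption (numpy; random stand-ins for the learned quantities):

```python
import numpy as np

np.random.seed(2)
m, k = 8, 4
H = np.random.randn(m, 2 * k)              # information association matrix (m x 2k)
p_start = np.random.dirichlet(np.ones(m))  # start probabilities from step 601

# Scale each row of H by its clause's start probability, then concatenate
# with H itself -> concatenated matrix of size m x 4k.
H2 = np.concatenate([p_start[:, None] * H, H], axis=1)

# Weighted sum per row -> second output constant; softmax -> end probability.
w2 = np.random.randn(4 * k)                # hypothetical learned weights
z2 = H2 @ w2
p_end = np.exp(z2) / np.exp(z2).sum()
print(H2.shape, round(p_end.sum(), 6))     # (8, 16) 1.0
```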
603. Obtain the key content based on the start probability and end probability corresponding to each clause.
In the method provided by the embodiment of the present invention, the start probability and end probability of each clause of the answer text are determined, and the key content is then determined from these probabilities. This helps to improve the computational precision of the key content computation model, and thus the reliability and accuracy of the reply content.
Based on the content of the above embodiments, as an optional embodiment, the embodiment of the present invention does not specifically limit the manner of obtaining the key content based on the start probability and end probability corresponding to each clause. Referring to Fig. 7, the process specifically includes:
701. According to the start probability corresponding to each clause, select the clause with the largest start probability from the answer text as the start sentence of the key content; according to the end probability corresponding to each clause, select the clause with the largest end probability from the answer text as the end sentence of the key content.
702. Take the clauses of the answer text between the start sentence and the end sentence, together with the start sentence and the end sentence, as the key content.
Specifically, suppose the answer text contains 8 clauses, with start and end probabilities for each clause as shown in Table 1 below. As the table shows, the 4th clause has the highest start probability and the 6th clause has the highest end probability, so the 4th clause is taken as the start sentence of the key content, the 6th clause as the end sentence, and the 4th, 5th, and 6th clauses together form the key content. In addition, since the start and end probabilities of the clauses of the answer text are computed by softmax functions, the start probabilities of all clauses of the answer text sum to 1, and the end probabilities of all clauses likewise sum to 1:
Table 1
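The span selection of steps 701-702 can be sketched as follows. The probabilities here are illustrative values consistent with the example above (4th clause highest start, 6th clause highest end), not the values of Table 1:

```python
import numpy as np

clauses = [f"clause {i}" for i in range(1, 9)]   # 8 clauses (placeholder text)
# Illustrative start/end probabilities; each set sums to 1 as softmax requires.
p_start = np.array([.02, .05, .08, .40, .15, .12, .10, .08])
p_end   = np.array([.03, .04, .06, .10, .20, .35, .12, .10])

i = int(np.argmax(p_start))      # start sentence index (3 -> the 4th clause)
j = int(np.argmax(p_end))        # end sentence index   (5 -> the 6th clause)
key_content = clauses[i:j + 1]   # start sentence, end sentence, and all between
print(key_content)               # ['clause 4', 'clause 5', 'clause 6']
```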
It should be noted that all of the above optional embodiments may be combined in any way to form optional embodiments of the present invention, which will not be repeated here.
For a better understanding and application of the information acquisition method proposed by the present invention, the information acquisition process is explained with the following example, in combination with the content of the above embodiments:
Referring to Fig. 8, the inquiry text and the answer text are first input to the vector embedding layer of the key content computation model for vectorization: each word segment of the inquiry text is converted to a corresponding word vector, and each clause of the answer text is converted to a corresponding sentence vector.
Then, the word vector corresponding to each word segment of the inquiry text is input to the word-level representation layer of the text representation layer. The word-level representation layer is a bidirectional long short-term memory network with k nodes; if the inquiry text contains n word segments, the word-level representation layer outputs a word-level text representation matrix of size n × k.
The sentence vector corresponding to each clause of the answer text is input to the convolutional layer of the sentence-level representation layer in the text representation layer, which outputs a sentence representation matrix for each clause. The convolutional layer is a convolutional neural network with k convolution kernels; if the maximum clause length in the answer text is n, the sentence representation matrix is of size n × k. The sentence representation matrix of each clause is then input to the pooling layer of the sentence-level representation layer, which outputs a sentence-level representation vector for each clause. The sentence-level representation vectors of the clauses of the answer text are combined to obtain the sentence-level text representation matrix; if the answer text contains m clauses and each sentence-level representation vector has length k, the sentence-level text representation matrix is of size m × k.
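A minimal sketch of the sentence-level representation step (numpy). Width-1 convolution kernels and max pooling are assumptions made for brevity; the patent only specifies k kernels and a pooling layer:

```python
import numpy as np

np.random.seed(3)
k, emb = 4, 6                  # k convolution kernels, word-embedding size
# m = 3 clauses of the answer text, each a matrix of word vectors.
clauses = [np.random.randn(length, emb) for length in (5, 7, 3)]
kernels = np.random.randn(k, emb)   # width-1 kernels, one row per kernel

sentence_vectors = []
for X in clauses:
    conv = X @ kernels.T            # sentence representation matrix of the clause
    sentence_vectors.append(conv.max(axis=0))   # pooling -> length-k vector

S = np.stack(sentence_vectors)      # sentence-level text representation matrix
print(S.shape)                      # (3, 4), i.e. m x k
```

The pooling collapses clauses of different lengths to fixed-length vectors, which is what allows stacking them into a single m × k matrix.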
Next, the sentence-level text representation matrix is input to the context representation layer of the key content computation model, which outputs the context representation matrix corresponding to the answer text. The context representation layer is a bidirectional long short-term memory network with k nodes, and the context representation matrix is of size m × k.
The word-level text representation matrix and the context representation matrix are input to the attention layer of the key content computation model: a correlation matrix is obtained from the word-level text representation matrix and the context representation matrix, softmax calculation is performed on the correlation matrix to obtain the attention representation matrix, the word-level text representation matrix is multiplied with the attention representation matrix to obtain the word-level attention representation matrix, and the word-level attention representation matrix is concatenated with the context representation matrix to output the information association matrix. If the word-level text representation matrix is of size n × k and the attention representation matrix is of size m × n, the word-level attention representation matrix is of size m × k; if the context representation matrix is of size m × k, the information association matrix is of size m × 2k.
Finally, the information association matrix is input to the output layer of the key content computation model. Dimensionality reduction is performed on the row vector corresponding to each clause of the answer text in the information association matrix to obtain a first output constant for each clause, and softmax calculation is performed on the first output constants to obtain the start probability of each clause as the start sentence of the key content. The start probability corresponding to each clause is multiplied with the information association matrix, and the matrix obtained after multiplication is concatenated with the information association matrix to obtain the concatenated matrix. Dimensionality reduction is performed on the row vector corresponding to each clause in the concatenated matrix to obtain a second output constant for each clause, and softmax calculation is performed on the second output constants to obtain the end probability of each clause as the end sentence of the key content. According to the start probabilities, the clause with the largest start probability is selected from the answer text as the start sentence of the key content; according to the end probabilities, the clause with the largest end probability is selected from the answer text as the end sentence. The clauses of the answer text between the start sentence and the end sentence, together with the start sentence and the end sentence, are taken as the key content.
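The walkthrough above, from the two representation matrices to the selected span, can be condensed into one shape-level sketch (numpy). The dot-product correlation, the row-wise start-probability scaling, and the random weight vectors are assumptions made for illustration, not the patent's trained model:

```python
import numpy as np

def key_content_span(Q, C, w, w2):
    """Shape-level sketch: Q is the (n, k) word-level matrix of the inquiry
    text, C the (m, k) context matrix of the answer text, w and w2 stand-in
    reduction weights. Returns the clause indices of the selected span."""
    S = C @ Q.T                                             # correlation (m, n)
    A = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)    # attention matrix
    H = np.concatenate([A @ Q, C], axis=1)                  # info association (m, 2k)
    p_start = np.exp(H @ w); p_start /= p_start.sum()       # start probabilities
    H2 = np.concatenate([p_start[:, None] * H, H], axis=1)  # concatenated (m, 4k)
    p_end = np.exp(H2 @ w2); p_end /= p_end.sum()           # end probabilities
    return int(np.argmax(p_start)), int(np.argmax(p_end))

np.random.seed(4)
n, m, k = 5, 8, 4
i, j = key_content_span(np.random.randn(n, k), np.random.randn(m, k),
                        np.random.randn(2 * k), np.random.randn(4 * k))
print(0 <= i < m and 0 <= j < m)   # True
```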
In the method provided by the embodiment of the present invention, the inquiry text and the answer text are input to the key content computation model, which outputs the key content in the answer text, and the key content is taken as the answer to the inquiry text. Because the answer content for the question posed by the inquiry text is obtained directly for the inquiry text itself, from the answer text that matches it, rather than by finding similar questions and taking their answers as the reply content, the reliability and accuracy of the reply content can be improved, and the user's experience when interacting with a device in question answering is improved.
Based on the content of the above embodiments, an embodiment of the present invention provides an information acquisition device for executing the information acquisition method provided in the above method embodiments. The device includes:
an information acquisition module, configured to input the inquiry text and the answer text matching the inquiry text to the key content computation model, output the key content in the answer text, and take the key content as the answer to the inquiry text;
wherein the key content computation model is obtained by training on a sample inquiry text, a sample answer text, and sample key content in the sample answer text, the sample key content being the answer to the sample inquiry text.
As an optional embodiment, the information acquisition module includes:
a text representation unit, configured to input the inquiry text and the answer text respectively to the text representation layer of the key content computation model, and output the word-level text representation matrix corresponding to the inquiry text and the sentence-level text representation matrix corresponding to the answer text;
a context representation unit, configured to input the sentence-level text representation matrix to the context representation layer of the key content computation model, and output the context representation matrix corresponding to the answer text;
an attention unit, configured to input the word-level text representation matrix and the context representation matrix to the attention layer of the key content computation model, and output the information association matrix between the inquiry text and the answer text;
an output unit, configured to input the information association matrix to the output layer of the key content computation model, and output the key content in the answer text.
As an optional embodiment, the text representation layer includes a word-level representation layer and a sentence-level representation layer; correspondingly, the text representation unit includes:
a word-level representation subunit, configured to input the inquiry text to the word-level representation layer and output the word-level text representation matrix;
a sentence-level representation subunit, configured to input the answer text to the sentence-level representation layer and output the sentence-level text representation matrix corresponding to the answer text.
As an optional embodiment, the sentence-level representation subunit is configured to: input the sentence vector corresponding to each clause of the answer text to the convolutional layer and output the sentence representation matrix corresponding to each clause; input the sentence representation matrix corresponding to each clause to the pooling layer and output the sentence-level representation vector corresponding to each clause; and combine the sentence-level representation vectors corresponding to the clauses of the answer text to obtain the sentence-level text representation matrix.
As an optional embodiment, the attention unit is configured to: obtain a correlation matrix based on the word-level text representation matrix and the context representation matrix, and perform softmax calculation on the correlation matrix to obtain the attention representation matrix; multiply the word-level text representation matrix with the attention representation matrix to obtain the word-level attention representation matrix, and concatenate the word-level attention representation matrix with the context representation matrix to obtain the information association matrix.
As an optional embodiment, the output unit includes:
a start probability obtaining subunit, configured to perform dimensionality reduction on the row vector corresponding to each clause of the answer text in the information association matrix to obtain a first output constant corresponding to each clause, and perform softmax calculation on the first output constants to obtain the start probability of each clause as the start sentence of the key content;
an end probability obtaining subunit, configured to multiply the start probability corresponding to each clause with the information association matrix, concatenate the matrix obtained after multiplication with the information association matrix to obtain a concatenated matrix, perform dimensionality reduction on the row vector corresponding to each clause of the answer text in the concatenated matrix to obtain a second output constant corresponding to each clause, and perform softmax calculation on the second output constants to obtain the end probability of each clause as the end sentence of the key content;
a key content obtaining subunit, configured to obtain the key content based on the start probability and end probability corresponding to each clause.
As an optional embodiment, the key content obtaining subunit is configured to: according to the start probability corresponding to each clause, select the clause with the largest start probability from the answer text as the start sentence of the key content; according to the end probability corresponding to each clause, select the clause with the largest end probability from the answer text as the end sentence of the key content; and take the clauses of the answer text between the start sentence and the end sentence, together with the start sentence and the end sentence, as the key content.
In the device provided by the embodiment of the present invention, the inquiry text and the answer text are input to the key content computation model, which outputs the key content in the answer text, and the key content is taken as the answer to the inquiry text. Because the answer content for the question posed by the inquiry text is obtained directly for the inquiry text itself, from the answer text that matches it, rather than by finding similar questions and taking their answers as the reply content, the reliability and accuracy of the reply content can be improved, and the user's experience when interacting with a device in question answering is improved.
Secondly, based on the key content computation model, the key content in the answer text is determined from several angles, namely the word segments of the inquiry text, the clauses of the answer text, and the correlation between the two, and the key content is taken as the answer to the inquiry text, so that the reliability and accuracy of the reply content can be improved and the user's experience when interacting with a device in question answering is improved.
In addition, word-level modeling is performed on the inquiry text and sentence-level modeling is performed on the answer text, which helps to improve the computational precision of the key content computation model, and thus the reliability and accuracy of the reply content, and improves the user's experience when interacting with a device in question answering.
Finally, the start probability and end probability of each clause of the answer text are determined, and the key content is then determined from these probabilities, which helps to improve the computational precision of the key content computation model, and thus the reliability and accuracy of the reply content.
An embodiment of the present invention provides an information acquisition apparatus. Referring to Fig. 9, the apparatus includes a processor 901, a memory 902, and a bus 903;
the processor 901 and the memory 902 communicate with each other through the bus 903, and the processor 901 is configured to call program instructions in the memory 902 to execute the information acquisition method provided in the above embodiments, for example: inputting an inquiry text and an answer text matching the inquiry text to a key content computation model, outputting the key content in the answer text, and taking the key content as the answer to the inquiry text; wherein the key content computation model is obtained by training on a sample inquiry text, a sample answer text, and sample key content in the sample answer text, the sample key content being the answer to the sample inquiry text.
An embodiment of the present invention provides a non-transitory computer-readable storage medium storing computer instructions that cause a computer to execute the information acquisition method provided in the above embodiments, for example: inputting an inquiry text and an answer text matching the inquiry text to a key content computation model, outputting the key content in the answer text, and taking the key content as the answer to the inquiry text; wherein the key content computation model is obtained by training on a sample inquiry text, a sample answer text, and sample key content in the sample answer text, the sample key content being the answer to the sample inquiry text.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments may be implemented by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, it performs the steps of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as ROM, RAM, magnetic disks, or optical disks.
The embodiments such as the information acquisition apparatus described above are merely illustrative. Units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement this without creative labor.
Through the above description of the embodiments, those skilled in the art can clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and naturally also by hardware. Based on this understanding, the above technical solution, or the part of it that contributes to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as ROM/RAM, a magnetic disk, or an optical disk, and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute certain parts of the methods of the embodiments.
Finally, the above methods are merely preferred embodiments and are not intended to limit the protection scope of the embodiments of the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principle of the embodiments of the present invention shall be included within the protection scope of the embodiments of the present invention.

Claims (10)

1. An information acquisition method, characterized by comprising:
inputting an inquiry text and an answer text matching the inquiry text to a key content computation model, outputting the key content in the answer text, and taking the key content as the answer to the inquiry text;
wherein the key content computation model is obtained by training on a sample inquiry text, a sample answer text, and sample key content in the sample answer text, the sample key content being the answer to the sample inquiry text.
2. The method according to claim 1, characterized in that the inputting an inquiry text and an answer text matching the inquiry text to a key content computation model and outputting the key content in the answer text comprises:
inputting the inquiry text and the answer text respectively to a text representation layer in the key content computation model, and outputting a word-level text representation matrix corresponding to the inquiry text and a sentence-level text representation matrix corresponding to the answer text;
inputting the sentence-level text representation matrix to a context representation layer in the key content computation model, and outputting a context representation matrix corresponding to the answer text;
inputting the word-level text representation matrix and the context representation matrix to an attention layer in the key content computation model, and outputting an information association matrix between the inquiry text and the answer text;
inputting the information association matrix to an output layer in the key content computation model, and outputting the key content in the answer text.
3. The method according to claim 2, characterized in that the text representation layer comprises a word-level representation layer and a sentence-level representation layer; correspondingly, the inputting the inquiry text and the answer text respectively to the text representation layer in the key content computation model and outputting the word-level text representation matrix corresponding to the inquiry text and the sentence-level text representation matrix corresponding to the answer text comprises:
inputting the inquiry text to the word-level representation layer, and outputting the word-level text representation matrix;
inputting the answer text to the sentence-level representation layer, and outputting the sentence-level text representation matrix corresponding to the answer text.
4. The method according to claim 3, characterized in that the inputting the answer text to the sentence-level representation layer and outputting the sentence-level text representation matrix corresponding to the answer text comprises:
inputting a sentence vector corresponding to each clause of the answer text to a convolutional layer, and outputting a sentence representation matrix corresponding to each clause;
inputting the sentence representation matrix corresponding to each clause to a pooling layer, and outputting a sentence-level representation vector corresponding to each clause;
combining the sentence-level representation vectors corresponding to the clauses of the answer text to obtain the sentence-level text representation matrix.
5. The method according to claim 2, characterized in that the inputting the word-level text representation matrix and the context representation matrix to the attention layer in the key content computation model and outputting the information association matrix between the inquiry text and the answer text comprises:
obtaining a correlation matrix based on the word-level text representation matrix and the context representation matrix, and performing softmax calculation on the correlation matrix to obtain an attention representation matrix;
multiplying the word-level text representation matrix with the attention representation matrix to obtain a word-level attention representation matrix, and concatenating the word-level attention representation matrix with the context representation matrix to obtain the information association matrix.
6. The method according to claim 2, characterized in that the inputting the information association matrix to the output layer in the key content computation model and outputting the key content in the answer text comprises:
performing dimensionality reduction on a row vector corresponding to each clause of the answer text in the information association matrix to obtain a first output constant corresponding to each clause, and performing softmax calculation on the first output constants to obtain a start probability of each clause as a start sentence of the key content;
multiplying the start probability corresponding to each clause with the information association matrix, concatenating the matrix obtained after multiplication with the information association matrix to obtain a concatenated matrix, performing dimensionality reduction on a row vector corresponding to each clause of the answer text in the concatenated matrix to obtain a second output constant corresponding to each clause, and performing softmax calculation on the second output constants to obtain an end probability of each clause as an end sentence of the key content;
obtaining the key content based on the start probability and end probability corresponding to each clause.
7. The method according to claim 6, characterized in that the obtaining the key content based on the start probability and end probability corresponding to each clause comprises:
according to the start probability corresponding to each clause, selecting the clause with the largest start probability from the answer text as the start sentence of the key content, and according to the end probability corresponding to each clause, selecting the clause with the largest end probability from the answer text as the end sentence of the key content;
taking the clauses of the answer text between the start sentence and the end sentence, together with the start sentence and the end sentence, as the key content.
8. An information acquisition device, characterized by comprising:
an information acquisition module, configured to input an inquiry text and an answer text matching the inquiry text to a key content computation model, output the key content in the answer text, and take the key content as the answer to the inquiry text;
wherein the key content computation model is obtained by training on a sample inquiry text, a sample answer text, and sample key content in the sample answer text, the sample key content being the answer to the sample inquiry text.
9. An information acquisition apparatus, characterized by comprising:
at least one processor; and
at least one memory communicatively connected to the processor, wherein:
the memory stores program instructions executable by the processor, and the processor calls the program instructions to execute the method according to any one of claims 1 to 7.
10. A non-transitory computer-readable storage medium, characterized in that the non-transitory computer-readable storage medium stores computer instructions that cause a computer to execute the method according to any one of claims 1 to 7.
CN201810550870.4A 2018-05-31 2018-05-31 Information acquisition method and device Active CN108763535B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810550870.4A CN108763535B (en) 2018-05-31 2018-05-31 Information acquisition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810550870.4A CN108763535B (en) 2018-05-31 2018-05-31 Information acquisition method and device

Publications (2)

Publication Number Publication Date
CN108763535A true CN108763535A (en) 2018-11-06
CN108763535B CN108763535B (en) 2020-02-07

Family

ID=64001576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810550870.4A Active CN108763535B (en) 2018-05-31 2018-05-31 Information acquisition method and device

Country Status (1)

Country Link
CN (1) CN108763535B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110134967A * 2019-05-22 2019-08-16 北京金山数字娱乐科技有限公司 Text processing method and apparatus, computing device, and computer-readable storage medium
CN110297885A (en) * 2019-05-27 2019-10-01 中国科学院深圳先进技术研究院 Generation method, device, equipment and the storage medium of real-time event abstract
CN110633359A (en) * 2019-09-04 2019-12-31 北京百分点信息科技有限公司 Sentence equivalence judgment method and device
CN110765244A (en) * 2019-09-18 2020-02-07 平安科技(深圳)有限公司 Method and device for acquiring answering, computer equipment and storage medium
CN111291549A (en) * 2020-05-08 2020-06-16 腾讯科技(深圳)有限公司 Text processing method and device, storage medium and electronic equipment
CN111683174A (en) * 2020-06-01 2020-09-18 信雅达系统工程股份有限公司 Incoming call processing method, device and system
CN112685548A (en) * 2020-12-31 2021-04-20 中科讯飞互联(北京)信息科技有限公司 Question answering method, electronic device and storage device
CN112685543A (en) * 2019-10-18 2021-04-20 普天信息技术有限公司 Method and device for answering questions based on text
US20210256018A1 (en) * 2018-04-23 2021-08-19 Nippon Telegraph And Telephone Corporation Question responding apparatus, question responding method and program
WO2022048174A1 (en) * 2020-09-03 2022-03-10 平安科技(深圳)有限公司 Text matching method and apparatus, computer device, and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100332499A1 (en) * 2009-06-26 2010-12-30 Iac Search & Media, Inc. Method and system for determining confidence in answer for search
CN106708932A * 2016-11-21 2017-05-24 Baidu Online Network Technology (Beijing) Co., Ltd. Abstract extraction method and apparatus for replies on a question-and-answer website
CN106777236A * 2016-12-27 2017-05-31 Beijing Baidu Netcom Science and Technology Co., Ltd. Method and device for displaying query results based on deep question answering
CN107329995A * 2017-06-08 2017-11-07 Beijing Shenzhou Taiyue Software Co., Ltd. Semantically controlled answer generation method, apparatus and system
CN107844533A * 2017-10-19 2018-03-27 Yunnan University Intelligent question answering system and analysis method
CN107870964A * 2017-07-28 2018-04-03 Beijing Zhongke Huilian Technology Co., Ltd. Sentence ranking method and system applied to an answer fusion system
CN108052588A * 2017-12-11 2018-05-18 Zhejiang University City College Method for constructing an automatic document question answering system based on convolutional neural networks

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11693854B2 (en) * 2018-04-23 2023-07-04 Nippon Telegraph And Telephone Corporation Question responding apparatus, question responding method and program
US20210256018A1 (en) * 2018-04-23 2021-08-19 Nippon Telegraph And Telephone Corporation Question responding apparatus, question responding method and program
CN110134967A * 2019-05-22 2019-08-16 Beijing Kingsoft Digital Entertainment Technology Co., Ltd. Text processing method and apparatus, computing device, and computer-readable storage medium
CN110297885B * 2019-05-27 2021-08-17 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences Method, device and equipment for generating real-time event summaries, and storage medium
CN110297885A * 2019-05-27 2019-10-01 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences Method, device, equipment and storage medium for generating real-time event summaries
CN110633359A * 2019-09-04 2019-12-31 Beijing Percent Information Technology Co., Ltd. Sentence equivalence judgment method and device
CN110633359B * 2019-09-04 2022-03-29 Beijing Percent Technology Group Co., Ltd. Sentence equivalence judgment method and device
CN110765244B * 2019-09-18 2023-06-06 Ping An Technology (Shenzhen) Co., Ltd. Method, device, computer equipment and storage medium for obtaining answer scripts
CN110765244A * 2019-09-18 2020-02-07 Ping An Technology (Shenzhen) Co., Ltd. Method and device for obtaining answer scripts, computer equipment and storage medium
CN112685543A * 2019-10-18 2021-04-20 Potevio Information Technology Co., Ltd. Method and device for answering questions based on text
CN112685543B * 2019-10-18 2024-01-26 Potevio Information Technology Co., Ltd. Method and device for answering questions based on text
CN111291549A * 2020-05-08 2020-06-16 Tencent Technology (Shenzhen) Co., Ltd. Text processing method and device, storage medium and electronic equipment
CN111683174A * 2020-06-01 2020-09-18 Sunyard System Engineering Co., Ltd. Incoming call processing method, device and system
WO2022048174A1 * 2020-09-03 2022-03-10 Ping An Technology (Shenzhen) Co., Ltd. Text matching method and apparatus, computer device, and storage medium
CN112685548A * 2020-12-31 2021-04-20 Zhongke Xunfei Internet (Beijing) Information Technology Co., Ltd. Question answering method, electronic device and storage device
CN112685548B * 2020-12-31 2023-09-08 iFlytek (Beijing) Co., Ltd. Question answering method, electronic device and storage device

Also Published As

Publication number Publication date
CN108763535B (en) 2020-02-07

Similar Documents

Publication Publication Date Title
CN108763535A (en) Information acquisition method and device
CN111159416B (en) Language task model training method and device, electronic equipment and storage medium
CN108959396B (en) Machine reading model training method and device and question and answer method and device
CN109522553B (en) Named entity identification method and device
CN110321419B (en) Question-answer matching method integrating depth representation and interaction model
CN108052588B (en) Method for constructing automatic document question-answering system based on convolutional neural network
CN106202010B (en) Method and apparatus for building legal text syntax trees based on deep neural networks
CN104598611B (en) Method and system for ranking search entries
CN107818164A (en) Intelligent question answering method and system
CN109582767A (en) Dialogue system processing method, apparatus, device, and readable storage medium
CN110096711A (en) Natural language semantic matching method with sequential global attention and local dynamic attention
CN109271493A (en) Language text processing method, apparatus, and storage medium
CN107679082A (en) Question and answer searching method, device and electronic equipment
CN111966812B (en) Automatic question answering method based on dynamic word vector and storage medium
CN106910497A (en) Chinese word pronunciation prediction method and device
CN110990555B (en) End-to-end retrieval type dialogue method and system and computer equipment
CN113239169A (en) Artificial intelligence-based answer generation method, device, equipment and storage medium
CN110019736A (en) Question-answer matching method, system, device, and storage medium based on a language model
CN108959388A (en) Information generation method and device
CN115062134B (en) Knowledge question-answering model training and knowledge question-answering method, device and computer equipment
CN113342958A (en) Question-answer matching method, text matching model training method and related equipment
CN108805260A (en) Image caption generation method and device
CN112307048A (en) Semantic matching model training method, matching device, equipment and storage medium
CN114490926A (en) Method and device for determining similar problems, storage medium and terminal
CN109948163A (en) Natural language semantic matching method with sequential dynamic reading

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant