CN106294505A - A kind of method and apparatus feeding back answer - Google Patents
A kind of method and apparatus feeding back answer

- Publication number: CN106294505A (application CN201510316013.4A)
- Authority: CN (China)
- Prior art keywords: answer, extraction, semantics, parameter, trained
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS; G06F—ELECTRIC DIGITAL DATA PROCESSING; G06F16/00—Information retrieval; Database structures therefor
- G06F16/3344 — Query execution using natural language analysis
- G06F16/3326 — Query reformulation based on results of a preceding query, using relevance feedback from the user
- G06F16/9535 — Search customisation based on user profiles and personalisation
Abstract
The invention discloses a method and apparatus for feeding back an answer, belonging to the field of computer technology. The method includes: according to the correspondences among the questions, optimum answers, and other answers stored in a training sample database, and based on the training condition that the semantic nearness between a question and its corresponding optimum answer is greater than the semantic nearness between the question and each corresponding other answer, training the semantic-extraction parameters in a preset semantic-extraction formula to obtain the trained values of the semantic-extraction parameters; when an answer request carrying a target question is received, determining the semantic nearness between each answer in an answer query database and the target question according to the target question, the answers in the answer query database, the semantic-extraction formula, and the trained values of the semantic-extraction parameters; and selecting a target answer from among the answers according to the semantic nearness between each answer and the target question, and feeding it back in response to the answer request. With the present invention, the accuracy with which a server feeds back answers can be improved.
Description
Technical field
The present invention relates to the field of computer technology, and in particular to a method and apparatus for feeding back an answer.
Background

With the development of computers and information retrieval technology, people increasingly turn to computers to seek the answers to their questions; accordingly, question answering systems are used more and more widely.
An existing community question answering system is typically implemented as follows: a user inputs a question through a terminal; the server obtains all prestored answers from an answer query database; for the input question and each answer, the server determines the vocabulary the two have in common and sums the number of times each common word occurs in that answer, taking this sum as the text nearness between the answer and the input question; in this way the text nearness between each answer in the answer query database and the input question is computed, and the answer with the greatest text nearness is pushed to the user.

In the course of making the present invention, the inventors found at least the following problem in the prior art:

In the implementation of the community question answering system described above, when pushing an answer to the user the server computes the text nearness between the question and each answer mainly from the degree of term matching between them. However, the answer the user actually needs may share no common vocabulary with the input question (i.e. a vocabulary gap exists), or the common words may occur only rarely. The probability that the pushed answer matches the user's need is therefore low, and the accuracy of the server's answer feedback is correspondingly low.
Summary of the invention
To solve this problem of the prior art, embodiments of the present invention provide a method and apparatus for feeding back an answer. The technical scheme is as follows:
In a first aspect, a method for feeding back an answer is provided, the method including:

according to the correspondences among the questions, optimum answers, and other answers stored in a training sample database, and based on the training condition that the semantic nearness between a question and its corresponding optimum answer is greater than the semantic nearness between the question and each corresponding other answer, training the semantic-extraction parameters in a preset semantic-extraction formula to obtain the trained values of the semantic-extraction parameters;

when an answer request carrying a target question is received, determining the semantic nearness between each answer in an answer query database and the target question according to the target question, the answers in the answer query database, the semantic-extraction formula, and the trained values of the semantic-extraction parameters; and

selecting a target answer from among the answers according to the semantic nearness between each answer and the target question, and feeding the target answer back in response to the answer request.
In a second aspect, an apparatus for feeding back an answer is provided, the apparatus including:

a training module, configured to train, according to the correspondences among the questions, optimum answers, and other answers stored in a training sample database, and based on the training condition that the semantic nearness between a question and its corresponding optimum answer is greater than the semantic nearness between the question and each corresponding other answer, the semantic-extraction parameters in a preset semantic-extraction formula, obtaining the trained values of the semantic-extraction parameters;

a determining module, configured to determine, when an answer request carrying a target question is received, the semantic nearness between each answer in an answer query database and the target question according to the target question, the answers in the answer query database, the semantic-extraction formula, and the trained values of the semantic-extraction parameters; and

a feedback module, configured to select a target answer from among the answers according to the semantic nearness between each answer and the target question, and to feed the target answer back in response to the answer request.
The technical scheme provided by the embodiments of the present invention has the following beneficial effect:

In the embodiments of the present invention, according to the correspondences among the questions, optimum answers, and other answers stored in the training sample database, and based on the training condition that the semantic nearness between a question and its corresponding optimum answer is greater than the semantic nearness between the question and each corresponding other answer, the semantic-extraction parameters in a preset semantic-extraction formula are trained to obtain their trained values. When an answer request carrying a target question is received, the semantic nearness between each answer in the answer query database and the target question is determined according to the target question, the answers in the answer query database, the semantic-extraction formula, and the trained values of the semantic-extraction parameters; a target answer is then selected from among the answers according to the semantic nearness between each answer and the target question and fed back in response to the answer request. In this way, answers are selected on the basis of semantic nearness, avoiding the vocabulary gap that may exist between a question and an answer, so the accuracy of the answers fed back for a question can be improved.
Brief description of the drawings

To describe the technical schemes in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for feeding back an answer according to an embodiment of the present invention;

Fig. 2 is a schematic diagram of a training process according to an embodiment of the present invention;

Fig. 3 is a schematic structural diagram of an apparatus for feeding back an answer according to an embodiment of the present invention;

Fig. 4 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed description of the embodiments

To make the objectives, technical schemes, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Embodiment one

An embodiment of the present invention provides a method for feeding back an answer. As shown in Fig. 1, the processing flow of the method may include the following steps.

Step 101: according to the correspondences among the questions, optimum answers, and other answers stored in the training sample database, and based on the training condition that the semantic nearness between a question and its corresponding optimum answer is greater than the semantic nearness between the question and each corresponding other answer, train the semantic-extraction parameters in a preset semantic-extraction formula to obtain the trained values of the semantic-extraction parameters.

Step 102: when an answer request carrying a target question is received, determine the semantic nearness between each answer in the answer query database and the target question according to the target question, the answers in the answer query database, the semantic-extraction formula, and the trained values of the semantic-extraction parameters.

Step 103: select a target answer from among the answers according to the semantic nearness between each answer and the target question, and feed it back in response to the answer request.
In the embodiment of the present invention, according to the correspondences among the questions, optimum answers, and other answers stored in the training sample database, and based on the training condition that the semantic nearness between a question and its corresponding optimum answer is greater than the semantic nearness between the question and each corresponding other answer, the semantic-extraction parameters in a preset semantic-extraction formula are trained to obtain their trained values. When an answer request carrying a target question is received, the semantic nearness between each answer in the answer query database and the target question is determined according to the target question, the answers in the answer query database, the semantic-extraction formula, and the trained values of the semantic-extraction parameters; a target answer is then selected from among the answers according to the semantic nearness between each answer and the target question and fed back in response to the answer request. In this way, answers are selected on the basis of semantic nearness, avoiding the vocabulary gap that may exist between a question and an answer, so the accuracy of the answers fed back for a question can be improved.
Embodiment two

An embodiment of the present invention provides a method for feeding back an answer. The execution body of the method may be a server, for example the server of a question-and-answer website, community, or application. The server may be provided with a processor, a memory, and a transceiver: the processor may be used for training the semantic-extraction parameters and for feeding back answers to questions; the memory may be used to store the data needed in, and generated by, the processing below; and the transceiver may be used to receive and send data. The processing flow shown in Fig. 1 is described in detail below with reference to specific embodiments; the content may be as follows.
Step 101: according to the correspondences among the questions, optimum answers, and other answers stored in the training sample database, and based on the training condition that the semantic nearness between a question and its corresponding optimum answer is greater than the semantic nearness between the question and each corresponding other answer, train the semantic-extraction parameters in a preset semantic-extraction formula to obtain the trained values of the semantic-extraction parameters.

Here, in big-data processing the semantics of a sentence (a question, an answer, etc.) can be quantified, and the semantic-extraction formula may be a formula used to extract the semantics of a question or an answer. The semantic-extraction parameters may be the constant coefficients in the semantic-extraction formula and can be determined through a training process. Semantic nearness may be the degree to which a question and an answer are close in semantics (i.e. in the meaning the sentences express).
In implementation, the server may obtain a number of questions and their corresponding answers from the Internet and store them in the training sample database; for example, questions and their corresponding answers may be obtained from community question answering systems. Each question in the training sample database corresponds to a certain number of answers, including an optimum answer corresponding to the question (typically the answer selected by the user who posed the question) and other answers. Each word in the dictionary corresponds to a word vector (which may be called a distribution vector). A word vector may be a d-dimensional vector (d may be 50), and each dimension may represent the value of a particular semantic attribute of the word. For example, the word vector of the word "BMW" might be [0.5; 0.8; ...], where the semantic attribute of the first dimension may be "the probability that this word denotes an animal", with 0.5 the value of that probability, and the attribute of the second dimension may be "the probability that this word denotes a vehicle", with 0.8 the value of that probability. The server can obtain the word vector of every word in each question and answer. For each question in the training sample database and its corresponding answers, the server can obtain a corresponding matrix containing word vectors (which may be called a word matrix); each column of the word matrix corresponds to one word of the dictionary. For example, if the current dictionary contains V words, the dimensions of the word matrix may be d × V. After the server obtains a question or answer from the training sample database, it can fetch the word vectors of the words occurring in that question or answer and place them in the corresponding columns of the word matrix, setting every other column of the word matrix to zero (i.e. the columns corresponding to words that do not occur in the question or answer are zero). In this way, each distinct question or answer has its own corresponding word matrix.
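As a concrete illustration of the word-matrix construction just described, the following Python sketch builds a d × V word matrix for a toy dictionary (the dictionary, the 4-dimensional word vectors, and all their values are hypothetical; the text suggests d may be 50 in practice):

```python
import numpy as np

# Toy dictionary of V = 4 words with d = 4-dimensional word vectors
# (hypothetical values for illustration).
dictionary = ["bmw", "vehicle", "dog", "fast"]
word_vectors = {
    "bmw":     np.array([0.5, 0.8, 0.1, 0.2]),
    "vehicle": np.array([0.1, 0.9, 0.2, 0.1]),
    "dog":     np.array([0.9, 0.1, 0.3, 0.4]),
    "fast":    np.array([0.2, 0.3, 0.8, 0.6]),
}

def word_matrix(sentence_words, d=4):
    """Build the d x V word matrix of one question or answer: the column of
    each dictionary word that occurs in the sentence holds its word vector;
    every other column is set to zero."""
    M = np.zeros((d, len(dictionary)))
    present = set(sentence_words)
    for j, w in enumerate(dictionary):
        if w in present:
            M[:, j] = word_vectors[w]
    return M

M = word_matrix(["bmw", "fast"])
print(M.shape)   # (4, 4)
```

Each distinct question or answer thus yields its own sparse d × V matrix, with zero columns for the absent words.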
The server can vectorize the word matrix corresponding to each question or answer in the training sample database to obtain a vector characterizing that question or answer, which may be denoted Ex, where the subscript x may be q (question) or a (answer); that is, Eq denotes the vector obtained by vectorizing the word matrix of a question, and Ea the vector obtained by vectorizing the word matrix of an answer. After the server obtains Eq and Ea for each question and answer in the training sample database, it damages them in a certain proportion, i.e. forces some of the values (which may be chosen at random) to zero, obtaining Ẽx. It can then extract the semantics expressed by the question or answer using the following semantic-extraction formula:

z = f(W·Ẽx + b) ……(1)

where z may be called the semantic vector and characterizes the semantics of the question or answer; W may be called the weighting matrix and is used to extract semantics from Ẽx and reduce its dimensionality; b may be called the bias vector and, together with W, is used to extract the semantics expressed by the question or answer. W and b may be called the semantic-extraction parameters. f(·) is a nonlinear function used to extract the semantics expressed by the question or answer; it may be chosen as a sigmoid function, a hyperbolic function, a rectifier function, etc. Here f(·) is taken to be the rectifier function; f(·), W, and b jointly act to extract the semantics expressed by the question or answer.
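A minimal sketch of formula (1), including the random damaging of Ex, might look as follows (the dimensions, the drop ratio, and the random parameter values are assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

d, k = 8, 3                        # input and semantic-vector dimensions (assumed)
W = rng.standard_normal((k, d))    # weighting matrix W (semantic-extraction parameter)
b = rng.standard_normal(k)         # bias vector b (semantic-extraction parameter)

def corrupt(E_x, drop_ratio):
    """Damage E_x in a certain proportion: randomly force values to zero."""
    mask = rng.random(E_x.shape[0]) >= drop_ratio
    return E_x * mask

def extract_semantics(E_x_tilde):
    """Formula (1): z = f(W * E~x + b), with f chosen as the rectifier."""
    return np.maximum(0.0, W @ E_x_tilde + b)

E_x = rng.random(d)                # stand-in for a vectorized word matrix
z = extract_semantics(corrupt(E_x, 0.3))
print(z.shape)
```

Training, described next, determines the values of W and b; the rectifier keeps every component of the semantic vector z non-negative.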
For each question in the training sample database and its corresponding answers, after the server obtains the vectors z it can compute the semantic nearness between the question and each answer; the cosine of the angle between the two vectors may be used to represent the semantic nearness between a question and an answer. Based on the condition that the semantic nearness between a question and its corresponding optimum answer is greater than the semantic nearness between the question and each corresponding other answer, the weighting matrix and bias vector in formula (1) are trained to obtain their final trained values.
Optionally, the semantic-extraction parameters in formula (1) may be obtained by increasing the sum, over the other answers, of the differences obtained by subtracting the semantic nearness between the question and each corresponding other answer from the semantic nearness between the question and its corresponding optimum answer. Accordingly, the processing of step 101 may be as follows: according to the correspondences among the questions, optimum answers, and other answers stored in the training sample database, and based on the training condition of increasing the sum of the differences obtained by subtracting the semantic nearness between the question and each corresponding other answer from the semantic nearness between the question and its corresponding optimum answer, train the semantic-extraction parameters in the preset semantic-extraction formula to obtain the trained values of the semantic-extraction parameters.
In implementation, a question in the training sample database is obtained together with its corresponding optimum answer and other answers. The optimum answer corresponding to the question may be denoted a+, and the other answers a j−, where j indexes the j-th other answer and may be any integer from 1 to the total number of other answers corresponding to the question; for example, if there are N other answers besides the optimum answer, then j = 1, 2, ..., N. With this question and its corresponding optimum answer and other answers as training data, an objective function is established and trained, yielding the trained values of the semantic-extraction parameters.
The training process that uses one question of the training sample database, together with its corresponding optimum answer and other answers, as training data is as follows. The question and each answer undergo semantic extraction according to formula (1), giving the corresponding semantic vectors, which may be denoted zq and za respectively. After the semantic vectors of the question and of all its corresponding answers have been obtained, the semantic nearness between each answer and the question can be computed according to formula (2):

sim(q, a) = (zq·za) / (||zq||·||za||) ……(2)

where sim(q, a) denotes the semantic nearness between the question and an answer; formula (2) uses the cosine of the angle between the semantic vectors of the question and the answer to represent the semantic nearness between the question and each of its corresponding answers. The server can then establish a loss function according to formula (3):

L(q, a) = Σj [sim(q, a+) − sim(q, aj−)] ……(3)

where L(q, a) denotes the sum, over the other answers, of the differences obtained by subtracting the semantic nearness between the question and each corresponding other answer from the semantic nearness between the question and its corresponding optimum answer; sim(q, a+) denotes the semantic nearness between the question and its corresponding optimum answer, and sim(q, aj−) the semantic nearness between the question and each corresponding other answer. Formula (3) is the first objective function to be trained. Initial values are assigned to the semantic-extraction parameters contained in formula (3), and the first objective function is trained by gradient descent to obtain the trained values of the semantic-extraction parameters W and b contained in formula (3). At this point, the training of the objective function established from this one question and its corresponding answers as training data is finished, and the trained values of the semantic-extraction parameters W and b have been obtained.
The server then obtains the next question in the training sample database, together with its corresponding optimum answer and other answers, as training data, establishes the first objective function according to the training process above, and uses the BP (Back Propagation) algorithm, taking the previously obtained trained values of the semantic-extraction parameters W and b as the initial values for training this first objective function. Training yields new trained values of W and b, which serve as the initial values for the next round of training. This recursion continues until every question in the training sample database, with its corresponding optimum answer and other answers, has been trained; the whole training process then ends, and the final trained values of the semantic-extraction parameters W and b are obtained and stored.
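The per-question training step described above, i.e. the formula (2) similarity and the formula (3) objective, can be sketched as follows; the toy semantic vectors are hypothetical, and the gradient-descent update itself is omitted:

```python
import numpy as np

def sim(z_q, z_a):
    """Formula (2): cosine of the angle between the two semantic vectors."""
    return float(z_q @ z_a / (np.linalg.norm(z_q) * np.linalg.norm(z_a)))

def objective(z_q, z_best, z_others):
    """Formula (3): sum over the other answers of sim(q, a+) - sim(q, aj-);
    training adjusts W and b so as to increase this sum."""
    return sum(sim(z_q, z_best) - sim(z_q, z_j) for z_j in z_others)

# Hypothetical semantic vectors for one question, its optimum answer,
# and one other answer.
z_q    = np.array([1.0, 0.0, 1.0])
z_best = np.array([0.9, 0.1, 1.1])
z_bad  = np.array([0.0, 1.0, 0.0])
print(objective(z_q, z_best, [z_bad]) > 0)  # True: the optimum answer is closer
```

A larger value of the objective means the optimum answer sits closer to the question in semantic space than the other answers do, which is exactly the training condition of step 101.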
In addition, to reduce the complexity of computing the gradient of the objective function, the objective function used in the training above may also take the form shown in formula (4). The physical meaning expressed by formula (4) is approximately the same as that of formula (3): the basis of training is, in both cases, to make the semantic nearness between a question and its corresponding optimum answer greater than the semantic nearness between the question and its corresponding other answers.
Optionally, the initial values for the training process above may themselves be determined by another training process. Accordingly, the training process may be as follows. Step one: according to each question and each answer stored in the training sample database, and based on the training condition of reducing the degree of difference between the sentence obtained after performing semantic extraction on a sentence and then the inverse processing of semantic extraction, and the sentence before semantic extraction, train the semantic-extraction parameters in the preset semantic-extraction formula to obtain intermediate trained values of the semantic-extraction parameters; here, a sentence is a question or an answer. Step two: according to the correspondences among the questions, optimum answers, and other answers stored in the training sample database, and based on the training condition that the semantic nearness between a question and its corresponding optimum answer is greater than the semantic nearness between the question and each corresponding other answer, take the intermediate trained values of the semantic-extraction parameters as initial input values and train the semantic-extraction parameters in the semantic-extraction formula to obtain the trained values of the semantic-extraction parameters.

Here, performing semantic extraction on a sentence may mean extracting its semantics according to formula (1), which may be called the encoding of the sentence. The inverse processing of semantic extraction may be the inverse of formula (1), which recovers a vector E′x of the same dimension as the pre-encoding Ex; this may be called the decoding process. The whole encoding and decoding process can be realized with a denoising autoencoder, which can be regarded as a special kind of neural network.
In implementation, as shown in Fig. 2, the processing of step one is as follows. For each question or each answer in the training sample database, obtain its corresponding Ex, then damage it in a certain proportion by forcing some of its values to zero, obtaining Ẽx. Perform semantic extraction on Ẽx according to formula (1) to obtain the semantic vector z of the question or answer, where z contains the semantic-extraction parameters W and b; that is, encoding Ẽx yields the corresponding semantic vector z. The server then applies the inverse transformation g(z) to z, i.e. inversely transforms the semantic vector z obtained after encoding to obtain g(f(·)); this process is the decoding process. Based on the training condition of reducing the degree of difference between the decoded E′x and the pre-encoding Ex, the following formula (5) is established as the second objective function:

L(g(f(·)), Ex) = ||g(f(·)) − Ex||² ……(5)

Formula (5) represents the squared modulus of the difference vector between E′x (obtained by encoding Ẽx with the chosen semantic-extraction parameters and then decoding) and the undamaged Ex. The smaller the value of formula (5), the more accurately the obtained semantic-extraction parameters can express the semantics of the sentence. Initial values are assigned to the semantic-extraction parameters contained in formula (5), and the second objective function is trained by gradient descent to obtain trained values of the semantic-extraction parameters.
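The encode-decode objective of formula (5) might be sketched as follows; the affine form of the decoder g, together with all dimensions and parameter values, is an assumption for illustration, since the source does not specify g beyond it being the inverse processing of formula (1):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 8, 3                              # assumed dimensions
W  = 0.1 * rng.standard_normal((k, d))   # encoder parameters, as in formula (1)
b  = np.zeros(k)
Wd = 0.1 * rng.standard_normal((d, k))   # decoder parameters (assumed form of g)
bd = np.zeros(d)

def f(x):                    # encoding: z = f(W * E~x + b), rectifier
    return np.maximum(0.0, x)

def g(z):                    # decoding: E'x = g(z), an assumed affine map
    return Wd @ z + bd

def second_objective(E_x, drop_ratio=0.3):
    """Formula (5): L = ||g(f(E~x)) - Ex||^2, the squared modulus of the
    difference between the decoded vector and the undamaged input."""
    E_x_tilde = E_x * (rng.random(d) >= drop_ratio)
    return float(np.sum((g(f(W @ E_x_tilde + b)) - E_x) ** 2))

loss = second_objective(rng.random(d))
print(loss)
```

Driving this reconstruction loss down over all questions and answers is what makes the resulting W and b a useful initial value for the step-two training.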
The server then obtains the next question or answer in the training sample database as training data, establishes the second objective function according to the training process above, and uses the BP algorithm, taking the previously obtained trained values of the semantic-extraction parameters W and b as the initial values for training this second objective function. This recursion continues until every question and answer in the training sample database has been trained; the whole training process then ends, and the final trained values of the semantic-extraction parameters W and b are obtained.

The trained values of the semantic-extraction parameters obtained in step one serve as the intermediate trained values of the whole training process and as the initial values for the training process of step two. Training then continues according to step two to obtain the final semantic-extraction parameters W and b, which are stored. The training process of step two may be the training process of step 101 above, based on the training condition of increasing the sum of the differences obtained by subtracting the semantic nearness between the question and each corresponding other answer from the semantic nearness between the question and its corresponding optimum answer; the corresponding processing may refer to the specific description in step 101 and is not repeated here.
Optionally, in training the semantic-extraction parameters, the questions and the answers in the training sample database may be trained separately, yielding separate semantic-extraction parameters for questions and for answers. Accordingly, the processing of step one above may be as follows: according to each question stored in the training sample database, and based on the training condition of reducing the degree of difference between the sentence obtained after performing semantic extraction on a question and then the inverse processing of semantic extraction, and the sentence before semantic extraction, train the question semantic-extraction parameters in a preset question semantic-extraction formula to obtain intermediate trained values of the question semantic-extraction parameters; and according to each answer stored in the training sample database, based on the corresponding training condition for answers, train the answer semantic-extraction parameters in a preset answer semantic-extraction formula to obtain intermediate trained values of the answer semantic-extraction parameters. The processing flow of step two above may then be as follows: according to the correspondences among the questions, optimum answers, and other answers stored in the training sample database, and based on the training condition that the semantic nearness between a question and its corresponding optimum answer is greater than the semantic nearness between the question and each corresponding other answer, take the intermediate trained values of the question semantic-extraction parameters and of the answer semantic-extraction parameters as initial input values, and train the question semantic-extraction parameters in the question semantic-extraction formula and the answer semantic-extraction parameters in the answer semantic-extraction formula, obtaining the trained values of the question semantic-extraction parameters and the trained values of the answer semantic-extraction parameters.
In implementation, during the training of step one above, the questions and the answers in the sample training database may use different W and b when semantic extraction is performed according to formula (1). That is, the questions in the sample training database may use one pair of W and b (which may be denoted W1 and b1) for semantic extraction, while the answers corresponding to the questions may use another pair (which may be denoted W2 and b2). Objective functions are established and trained separately in the manner described in step one, and the trained values of W1, b1 and W2, b2 so obtained serve as the intermediate trained values of the whole training process and as the initial values of step two. Training then continues according to the processing flow of step two to obtain the final semantic extraction parameters W1, b1, W2 and b2, which are stored. During the training of step two, when the semantic vectors of a question, its corresponding best answer and its other answers are computed and the semantic nearness is computed from the obtained semantic vectors according to formula (2), the semantic nearness involves all four semantic extraction parameters W1, b1, W2 and b2; the corresponding processing may refer to the detailed descriptions of steps one and two and is not repeated here.
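The reconstruction-based pretraining of step one can be pictured as a small autoencoder: extract a semantic vector, invert the extraction, and reduce the difference from the original sentence. The tanh encoder and decoder shape, the tied weights, the numerical gradient and all dimensions below are assumptions made for illustration only; the patent's formula (1) and its inverse processing are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract(x, W, b):
    # one plausible shape for formula (1): semantic vector of a sentence vector x
    return np.tanh(W @ x + b)

def invert(h, W, b_dec):
    # "inverse processing" of the extraction, sketched as a tied-weight decoder
    return np.tanh(W.T @ h + b_dec)

def reconstruction_error(x, W, b, b_dec):
    # step one's training condition: the difference between the original
    # sentence and the sentence rebuilt after extraction plus inverse processing
    return float(np.sum((invert(extract(x, W, b), W, b_dec) - x) ** 2))

def train_step(x, W, b, b_dec, lr=0.02, eps=1e-5):
    # crude numerical-gradient descent on W alone, purely for illustration
    base = reconstruction_error(x, W, b, b_dec)
    grad = np.zeros_like(W)
    for idx in np.ndindex(W.shape):
        Wp = W.copy()
        Wp[idx] += eps
        grad[idx] = (reconstruction_error(x, Wp, b, b_dec) - base) / eps
    return W - lr * grad

dim_in, dim_h = 8, 4
# questions use one parameter pair (W1, b1); answers would use another (W2, b2)
W1, b1 = 0.1 * rng.normal(size=(dim_h, dim_in)), np.zeros(dim_h)
W2, b2 = 0.1 * rng.normal(size=(dim_h, dim_in)), np.zeros(dim_h)
b_dec = np.zeros(dim_in)

question_vec = rng.normal(size=dim_in)
before = reconstruction_error(question_vec, W1, b1, b_dec)
for _ in range(30):
    W1 = train_step(question_vec, W1, b1, b_dec)
after = reconstruction_error(question_vec, W1, b1, b_dec)
print(after < before)
```

The W1 obtained this way plays the role of an intermediate trained value; W2 and b2 would be trained the same way on answers before both pairs are refined jointly in step two.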
Step 102: when an answer request carrying a target question is received, the semantic nearness between each answer and the target question is determined according to the target question, each answer in the answer query database, the semantic extraction formula and the trained values of the semantic extraction parameters.

Here, the target question may be a question, input by the user through a terminal, whose answer the user wants to know. The answer query database may be the above sample training database, or may be a database of answers obtained from the Internet and stored by the server, from which the server chooses answers matching the target question.
In implementation, after the user inputs a target question through the terminal and the terminal sends an answer request to the server, the server receives the answer request, parses it and obtains the target question carried in it. The server substitutes the stored trained values of the semantic extraction parameters into formula (1), and can then compute the semantic vectors of the target question and of each answer in the answer query database according to formula (1). After the semantic vectors of the target question and of each answer in the answer query database are obtained, the semantic nearness between each answer in the answer query database and the target question can be computed according to formula (2).
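The look-up at answer time can be sketched end to end as follows. The bag-of-words sentence vectors, the mini-vocabulary, the identity matrix standing in for the trained W, the tanh form of formula (1) and the cosine form of formula (2) are all assumptions made so that the example is deterministic; in the real system the parameters are the trained values obtained above.

```python
import numpy as np

VOCAB = ["how", "reset", "password", "account", "weather", "today", "email"]

def sent_vec(text):
    # toy bag-of-words sentence vector over an assumed mini-vocabulary
    words = text.lower().split()
    return np.array([float(words.count(w)) for w in VOCAB])

def semantic_vector(x, W, b):
    return np.tanh(W @ x + b)  # assumed shape of formula (1)

def nearness(u, v):
    # assumed shape of formula (2): cosine similarity of semantic vectors
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# identity stand-in for the trained W so the example is reproducible
W, b = np.eye(len(VOCAB)), np.zeros(len(VOCAB))

target_question = "how reset password"
answers = ["reset password account email", "weather today"]
q = semantic_vector(sent_vec(target_question), W, b)
scores = [nearness(q, semantic_vector(sent_vec(a), W, b)) for a in answers]
print(answers[scores.index(max(scores))])  # prints "reset password account email"
```

The answer sharing semantic content with the question gets the higher nearness, which is exactly what step 103 below ranks on.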
Optionally, for the case described above in which questions and answers are trained separately, the processing when the server receives an answer request sent by the terminal may be as follows: when an answer request carrying a target question is received, the semantic nearness between each answer and the target question is determined according to the target question, each answer in the answer query database, the question semantic extraction formula, the answer semantic extraction formula, the trained values of the question semantic extraction parameters and the trained values of the answer semantic extraction parameters.
In implementation, after the server obtains the question and answer semantic extraction parameters separately, when it receives an answer request carrying a target question, it can use the semantic extraction parameters corresponding respectively to questions and to answers: it computes the semantic vectors of the target question and of each answer in the answer query database according to formula (1), and, once the respective semantic vectors are determined, computes the semantic nearness between each answer in the answer query database and the target question according to formula (2).
Step 103: according to the semantic nearness between each answer and the target question, a target answer is chosen from the answers and fed back in response to the answer request.

Here, the target answer may be an answer in the answer query database that matches the target question; it may be a single answer, or several of the answers.
In implementation, after the server obtains the semantic nearness between each answer in the answer query database and the target question, it can sort the obtained semantic nearness values in descending order; the answer corresponding to the largest semantic nearness may be chosen as the target answer, or the answers corresponding to the first several semantic nearness values after sorting may be chosen as target answers. After the target answer is chosen, it is fed back to the user through the terminal.
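The selection just described is a sort in descending order of nearness followed by taking the top one or top few. A minimal sketch (the answers and scores are made up):

```python
def choose_target_answers(answers, nearness_scores, k=1):
    # sort answers by semantic nearness, largest first, and keep the top k
    ranked = sorted(zip(answers, nearness_scores),
                    key=lambda pair: pair[1], reverse=True)
    return [answer for answer, _ in ranked[:k]]

answers = ["answer one", "answer two", "answer three"]
scores = [0.42, 0.91, 0.65]
print(choose_target_answers(answers, scores))       # ['answer two']
print(choose_target_answers(answers, scores, k=2))  # ['answer two', 'answer three']
```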
Optionally, the obtained semantic nearness may also be combined with features based on term matching. Accordingly, the processing flow of step 103 may be as follows: according to the semantic nearness between each answer and the target question, and the text nearness between each answer and the target question, a target answer is chosen from the answers and fed back in response to the answer request.

Here, the text nearness may be the nearness between each answer and the target question based on term matching.
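Combining the semantic nearness with term-matching features amounts to scoring each answer with a weighted feature vector, where the weights may be hand-set or learned. The answer names, the use of only three features and the weight values below are invented for illustration; the patent's scheme uses one semantic feature plus eleven term-matching features.

```python
def combined_score(features, weights):
    # weighted sum of the semantic-nearness feature and text-nearness features
    return sum(w * f for w, f in zip(weights, features))

# per answer: [semantic nearness, text feature 1, text feature 2] (made up)
feature_vectors = {"answer_a": [0.9, 0.2, 0.1],
                   "answer_b": [0.3, 0.8, 0.7]}
weights = [0.6, 0.25, 0.15]  # hand-set empirical weights (an assumption)

scores = {a: combined_score(f, weights) for a, f in feature_vectors.items()}
target = max(scores, key=scores.get)
print(target)  # prints "answer_a": the strong semantic nearness dominates here
```

In the learned variant, these weights would come from training a ranking model, such as the ranking SVM mentioned below, on the sample training database.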
In implementation, after the server obtains the semantic nearness between each answer in the answer query database and the target question, it stores the values and computes, according to the formulas shown in formulas (6)-(16), the term-matching-based text nearness between each answer in the answer query database and the target question.

Here, c(qi, a) may be the number of times qi occurs in a; df(qi) may be the number of times qi occurs in the answers in the answer query database; |a| may be the number of words contained in answer a; |C| may be the number of words contained in the answers in the answer query database; C may be the answers in the answer query database; k1 ∈ [1.2, 2.0]; b = 0.75; and avg|C| may be the average number of words contained in the answers in the answer query database. After the text nearness between each answer and the target question is obtained, it is put, together with the semantic nearness between each answer and the target question determined above, into a learning-to-rank framework, such as an SVM ranking algorithm, to obtain a combined ranking of the answers in the answer query database against the target question; that is, the semantic nearness feature and the term-matching text nearness features given by the above eleven formulas are jointly used to obtain the nearness between each answer and the target question. The weights of these twelve features may be assigned manually based on empirical values, or may be obtained by training on the samples in the sample training database according to the SVM ranking algorithm, yielding the weight corresponding to each feature. The answer corresponding to the largest combined nearness is fed back to the user through the terminal, or the answers corresponding to the first several nearness values in the ranking are fed back to the user through the terminal.
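The quantities listed above (c(qi, a), df(qi), |a|, |C|, k1 ∈ [1.2, 2.0], b = 0.75, avg|C|) have the shape of a BM25-style term-matching score. The sketch below is a standard BM25 computation under that assumption; it is not a reproduction of formulas (6)-(16), and it uses the conventional document-frequency definition of df, whereas the text above describes df(qi) as an occurrence count.

```python
import math

def bm25_text_nearness(question_terms, answer, corpus, k1=1.2, b=0.75):
    """BM25-style text nearness of one answer to the question's terms;
    `corpus` is the list of all answers, each a list of words."""
    n_docs = len(corpus)
    avg_len = sum(len(doc) for doc in corpus) / n_docs  # plays the role of avg|C|
    score = 0.0
    for term in question_terms:
        df = sum(1 for doc in corpus if term in doc)  # answers containing the term
        if df == 0:
            continue
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)
        c = answer.count(term)                        # plays the role of c(q, a)
        norm = c + k1 * (1 - b + b * len(answer) / avg_len)
        score += idf * c * (k1 + 1) / norm
    return score

corpus = [["reset", "your", "password", "in", "settings"],
          ["the", "weather", "is", "sunny", "today"]]
question_terms = ["reset", "password"]
scores = [bm25_text_nearness(question_terms, doc, corpus) for doc in corpus]
print(scores[0] > scores[1])  # the answer sharing terms with the question wins
```

Each such score would be one of the term-matching features fed, alongside the semantic nearness, into the ranking step described above.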
In the embodiment of the present invention, according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic nearness between a question and its corresponding best answer is greater than the semantic nearness between the question and its other corresponding answers, the semantic extraction parameters in a preset semantic extraction formula are trained to obtain trained values of the semantic extraction parameters; when an answer request carrying a target question is received, the semantic nearness between each answer and the target question is determined according to the target question, each answer in the answer query database, the semantic extraction formula and the trained values of the semantic extraction parameters; and, according to the semantic nearness between each answer and the target question, a target answer is chosen from the answers and fed back in response to the answer request. In this way, answers are chosen based on semantic nearness, which avoids the lexical gap between questions and answers and can thus improve the accuracy of the answers fed back for questions.
Embodiment three

Based on the same technical concept, an embodiment of the present invention further provides an apparatus for feeding back answers. As shown in Fig. 3, the apparatus includes:

a training module 310, configured to train the semantic extraction parameters in a preset semantic extraction formula according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic nearness between a question and its corresponding best answer is greater than the semantic nearness between the question and its other corresponding answers, to obtain trained values of the semantic extraction parameters;

a determining module 320, configured to, when an answer request carrying a target question is received, determine the semantic nearness between each answer and the target question according to the target question, each answer in the answer query database, the semantic extraction formula and the trained values of the semantic extraction parameters; and

a feedback module 330, configured to choose a target answer from the answers according to the semantic nearness between each answer and the target question, and to feed back the target answer in response to the answer request.
Optionally, the training module 310 is configured to:

train the semantic extraction parameters in the preset semantic extraction formula according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition of increasing the sum of the differences obtained by subtracting the semantic nearness between each question and its other corresponding answers from the semantic nearness between that question and its corresponding best answer, to obtain the trained values of the semantic extraction parameters.
Optionally, the training module 310 is configured to:

train the semantic extraction parameters in the preset semantic extraction formula according to each question and each answer stored in the training sample database, based on the training condition of reducing the degree of difference between the sentence obtained after performing semantic extraction and the inverse processing of semantic extraction on a sentence, and the original sentence before semantic extraction, to obtain intermediate trained values of the semantic extraction parameters, where a sentence is a question or an answer; and

train the semantic extraction parameters in the semantic extraction formula according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic nearness between a question and its corresponding best answer is greater than the semantic nearness between the question and its other corresponding answers, taking the intermediate trained values of the semantic extraction parameters as initial input values, to obtain the trained values of the semantic extraction parameters.
Optionally, the training module 310 is configured to:

train the question semantic extraction parameters in a preset question semantic extraction formula according to each question stored in the training sample database, based on the training condition of reducing the degree of difference between the sentence obtained after performing semantic extraction and the inverse processing of semantic extraction on the question, and the original question before semantic extraction, to obtain intermediate trained values of the question semantic extraction parameters;

train the answer semantic extraction parameters in a preset answer semantic extraction formula according to each answer stored in the training sample database, based on the training condition of reducing the degree of difference between the sentence obtained after performing semantic extraction and the inverse processing of semantic extraction on the answer, and the original answer before semantic extraction, to obtain intermediate trained values of the answer semantic extraction parameters; and

train the question semantic extraction parameters in the question semantic extraction formula and the answer semantic extraction parameters in the answer semantic extraction formula according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic nearness between a question and its corresponding best answer is greater than the semantic nearness between the question and its other corresponding answers, taking the intermediate trained values of the question semantic extraction parameters and the intermediate trained values of the answer semantic extraction parameters as initial input values, to obtain the trained values of the question semantic extraction parameters and the trained values of the answer semantic extraction parameters.

The determining module 320 is configured to:

when an answer request carrying a target question is received, determine the semantic nearness between each answer and the target question according to the target question, each answer in the answer query database, the question semantic extraction formula, the answer semantic extraction formula, the trained values of the question semantic extraction parameters and the trained values of the answer semantic extraction parameters.
Optionally, the feedback module 330 is configured to:

choose a target answer from the answers according to the semantic nearness between each answer and the target question, and the text nearness between each answer and the target question, and feed back the target answer in response to the answer request.
In the embodiment of the present invention, according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic nearness between a question and its corresponding best answer is greater than the semantic nearness between the question and its other corresponding answers, the semantic extraction parameters in a preset semantic extraction formula are trained to obtain trained values of the semantic extraction parameters; when an answer request carrying a target question is received, the semantic nearness between each answer and the target question is determined according to the target question, each answer in the answer query database, the semantic extraction formula and the trained values of the semantic extraction parameters; and, according to the semantic nearness between each answer and the target question, a target answer is chosen from the answers and fed back in response to the answer request. In this way, answers are chosen based on semantic nearness, which avoids the lexical gap between questions and answers and can thus improve the accuracy of the answers fed back for questions.
It should be noted that, when the apparatus for feeding back answers provided by the above embodiment feeds back an answer, the division into the above functional modules is used only as an example; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for feeding back answers provided by the above embodiment belongs to the same concept as the method embodiment for feeding back answers; for its specific implementation process, refer to the method embodiment, which is not repeated here.
Embodiment four

Fig. 4 is a schematic structural diagram of a server provided by an embodiment of the present invention. The server 1900 may vary considerably with configuration or performance, and may include one or more central processing units (CPUs) 1922 (for example, one or more processors), memory 1932, and one or more storage media 1930 (for example, one or more mass storage devices) storing application programs 1942 or data 1944. The memory 1932 and the storage media 1930 may provide transient or persistent storage. A program stored in a storage medium 1930 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the server. Further, the central processing unit 1922 may be configured to communicate with the storage medium 1930 and to execute, on the server 1900, the series of instruction operations in the storage medium 1930.

The server 1900 may also include one or more power supplies 1926, one or more wired or wireless network interfaces 1950, one or more input/output interfaces 1958, one or more keyboards 1956, and/or one or more operating systems 1941, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and the like.
The server 1900 may include a memory and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs containing instructions for performing the following operations:

according to the correspondence among questions, best answers and other answers stored in a training sample database, based on the training condition that the semantic nearness between a question and its corresponding best answer is greater than the semantic nearness between the question and its other corresponding answers, training the semantic extraction parameters in a preset semantic extraction formula to obtain trained values of the semantic extraction parameters;

when an answer request carrying a target question is received, determining the semantic nearness between each answer and the target question according to the target question, each answer in an answer query database, the semantic extraction formula and the trained values of the semantic extraction parameters; and

according to the semantic nearness between each answer and the target question, choosing a target answer from the answers and feeding back the target answer in response to the answer request.
Optionally, training the semantic extraction parameters in the preset semantic extraction formula according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic nearness between a question and its corresponding best answer is greater than the semantic nearness between the question and its other corresponding answers, to obtain the trained values of the semantic extraction parameters, includes:

training the semantic extraction parameters in the preset semantic extraction formula according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition of increasing the sum of the differences obtained by subtracting the semantic nearness between each question and its other corresponding answers from the semantic nearness between that question and its corresponding best answer, to obtain the trained values of the semantic extraction parameters.
Optionally, training the semantic extraction parameters in the preset semantic extraction formula according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic nearness between a question and its corresponding best answer is greater than the semantic nearness between the question and its other corresponding answers, to obtain the trained values of the semantic extraction parameters, includes:

training the semantic extraction parameters in the preset semantic extraction formula according to each question and each answer stored in the training sample database, based on the training condition of reducing the degree of difference between the sentence obtained after performing semantic extraction and the inverse processing of semantic extraction on a sentence, and the original sentence before semantic extraction, to obtain intermediate trained values of the semantic extraction parameters, where a sentence is a question or an answer; and

training the semantic extraction parameters in the semantic extraction formula according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic nearness between a question and its corresponding best answer is greater than the semantic nearness between the question and its other corresponding answers, taking the intermediate trained values of the semantic extraction parameters as initial input values, to obtain the trained values of the semantic extraction parameters.
Optionally, training the semantic extraction parameters in the preset semantic extraction formula according to each question and each answer stored in the training sample database, based on the training condition of reducing the degree of difference between the sentence obtained after performing semantic extraction and the inverse processing of semantic extraction on a sentence, and the original sentence before semantic extraction, to obtain the intermediate trained values of the semantic extraction parameters, includes:

training the question semantic extraction parameters in a preset question semantic extraction formula according to each question stored in the training sample database, based on the training condition of reducing the degree of difference between the sentence obtained after performing semantic extraction and the inverse processing of semantic extraction on the question, and the original question before semantic extraction, to obtain intermediate trained values of the question semantic extraction parameters; and

training the answer semantic extraction parameters in a preset answer semantic extraction formula according to each answer stored in the training sample database, based on the training condition of reducing the degree of difference between the sentence obtained after performing semantic extraction and the inverse processing of semantic extraction on the answer, and the original answer before semantic extraction, to obtain intermediate trained values of the answer semantic extraction parameters.

Training the semantic extraction parameters in the semantic extraction formula according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic nearness between a question and its corresponding best answer is greater than the semantic nearness between the question and its other corresponding answers, taking the intermediate trained values of the semantic extraction parameters as initial input values, to obtain the trained values of the semantic extraction parameters, includes:

training the question semantic extraction parameters in the question semantic extraction formula and the answer semantic extraction parameters in the answer semantic extraction formula according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic nearness between a question and its corresponding best answer is greater than the semantic nearness between the question and its other corresponding answers, taking the intermediate trained values of the question semantic extraction parameters and the intermediate trained values of the answer semantic extraction parameters as initial input values, to obtain the trained values of the question semantic extraction parameters and the trained values of the answer semantic extraction parameters.

When an answer request carrying a target question is received, determining the semantic nearness between each answer and the target question according to the target question, each answer in the answer query database, the semantic extraction formula and the trained values of the semantic extraction parameters, includes:

when an answer request carrying a target question is received, determining the semantic nearness between each answer and the target question according to the target question, each answer in the answer query database, the question semantic extraction formula, the answer semantic extraction formula, the trained values of the question semantic extraction parameters and the trained values of the answer semantic extraction parameters.
Optionally, choosing a target answer from the answers according to the semantic nearness between each answer and the target question and feeding back the answer request, includes:

choosing a target answer from the answers according to the semantic nearness between each answer and the target question, and the text nearness between each answer and the target question, and feeding back the target answer in response to the answer request.
In the embodiment of the present invention, according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic nearness between a question and its corresponding best answer is greater than the semantic nearness between the question and its other corresponding answers, the semantic extraction parameters in a preset semantic extraction formula are trained to obtain trained values of the semantic extraction parameters; when an answer request carrying a target question is received, the semantic nearness between each answer and the target question is determined according to the target question, each answer in the answer query database, the semantic extraction formula and the trained values of the semantic extraction parameters; and, according to the semantic nearness between each answer and the target question, a target answer is chosen from the answers and fed back in response to the answer request. In this way, answers are chosen based on semantic nearness, which avoids the lexical gap between questions and answers and can thus improve the accuracy of the answers fed back for questions.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, and the storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like.

The foregoing are only preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall fall within the scope of protection of the present invention.
Claims (10)

1. A method for feeding back answers, characterized in that the method includes:

according to the correspondence among questions, best answers and other answers stored in a training sample database, based on the training condition that the semantic nearness between a question and its corresponding best answer is greater than the semantic nearness between the question and its other corresponding answers, training the semantic extraction parameters in a preset semantic extraction formula to obtain trained values of the semantic extraction parameters;

when an answer request carrying a target question is received, determining the semantic nearness between each answer and the target question according to the target question, each answer in an answer query database, the semantic extraction formula and the trained values of the semantic extraction parameters; and

according to the semantic nearness between each answer and the target question, choosing a target answer from the answers and feeding back the target answer in response to the answer request.
2. The method according to claim 1, characterized in that training the semantic extraction parameters in the preset semantic extraction formula according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic nearness between a question and its corresponding best answer is greater than the semantic nearness between the question and its other corresponding answers, to obtain the trained values of the semantic extraction parameters, includes:

training the semantic extraction parameters in the preset semantic extraction formula according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition of increasing the sum of the differences obtained by subtracting the semantic nearness between each question and its other corresponding answers from the semantic nearness between that question and its corresponding best answer, to obtain the trained values of the semantic extraction parameters.
3. The method according to claim 1, characterized in that training the semantic extraction parameters in the preset semantic extraction formula according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic nearness between a question and its corresponding best answer is greater than the semantic nearness between the question and its other corresponding answers, to obtain the trained values of the semantic extraction parameters, includes:

training the semantic extraction parameters in the preset semantic extraction formula according to each question and each answer stored in the training sample database, based on the training condition of reducing the degree of difference between the sentence obtained after performing semantic extraction and the inverse processing of semantic extraction on a sentence, and the original sentence before semantic extraction, to obtain intermediate trained values of the semantic extraction parameters, where a sentence is a question or an answer; and

training the semantic extraction parameters in the semantic extraction formula according to the correspondence among questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic nearness between a question and its corresponding best answer is greater than the semantic nearness between the question and its other corresponding answers, taking the intermediate trained values of the semantic extraction parameters as initial input values, to obtain the trained values of the semantic extraction parameters.
4. The method according to claim 3, wherein training the semantic extraction parameters in the preset semantic extraction formula according to each question and each answer stored in the training sample database, based on the training condition of reducing the degree of difference between a sentence and the sentence obtained after performing semantic extraction and the inverse processing of semantic extraction on the sentence sequence, to obtain the intermediate trained values of the semantic extraction parameters, comprises:
according to each question stored in the training sample database, training the question semantic extraction parameters in a preset question semantic extraction formula based on the training condition of reducing the degree of difference between a question and the question obtained after performing semantic extraction and the inverse processing of semantic extraction on the question sequence, to obtain intermediate trained values of the question semantic extraction parameters;
according to each answer stored in the training sample database, training the answer semantic extraction parameters in a preset answer semantic extraction formula based on the training condition of reducing the degree of difference between an answer and the answer obtained after performing semantic extraction and the inverse processing of semantic extraction on the answer sequence, to obtain intermediate trained values of the answer semantic extraction parameters;
wherein training the semantic extraction parameters in the semantic extraction formula according to the correspondence among the questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic closeness between a question and its corresponding best answer is greater than the semantic closeness between the question and the corresponding other answers, and taking the intermediate trained values of the semantic extraction parameters as initial input values, to obtain the trained values of the semantic extraction parameters, comprises:
according to the correspondence among the questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic closeness between a question and its corresponding best answer is greater than the semantic closeness between the question and the corresponding other answers, and taking the intermediate trained values of the question semantic extraction parameters and the intermediate trained values of the answer semantic extraction parameters as initial input values, training the question semantic extraction parameters in the question semantic extraction formula and the answer semantic extraction parameters in the answer semantic extraction formula, to obtain the trained values of the question semantic extraction parameters and the trained values of the answer semantic extraction parameters;
and wherein, upon receiving an answer request carrying a target question, determining the semantic closeness between each answer and the target question according to the target question, each answer in the answer query library, the semantic extraction formula and the trained values of the semantic extraction parameters, comprises:
upon receiving an answer request carrying a target question, determining the semantic closeness between each answer and the target question according to the target question, each answer in the answer query library, the question semantic extraction formula, the answer semantic extraction formula, the trained values of the question semantic extraction parameters and the trained values of the answer semantic extraction parameters.
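The claim above splits the extraction into a question formula and an answer formula and then uses both at query time. A sketch of the determining step, assuming two linear maps `Wq` and `Wa` stand in for the trained question and answer extraction parameters and cosine similarity stands in for semantic closeness (neither choice is specified by the patent):

```python
import numpy as np

def semantic_closeness(q_vec, a_vec, Wq, Wa):
    # Separate question and answer "semantic extraction formulas": two
    # linear maps whose entries play the role of the trained parameter
    # values. Cosine similarity is an assumed closeness measure.
    zq, za = Wq @ q_vec, Wa @ a_vec
    return float(zq @ za / (np.linalg.norm(zq) * np.linalg.norm(za) + 1e-9))

def handle_answer_request(target_q, answer_library, Wq, Wa):
    # Determining step: score every answer in the answer query library
    # against the target question, then return the best-scoring index.
    scores = [semantic_closeness(target_q, a, Wq, Wa) for a in answer_library]
    return int(np.argmax(scores)), scores
```

In a real system `target_q` and the library entries would be sentence representations produced upstream; here they are plain vectors for illustration.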
5. The method according to claim 1, wherein selecting a target answer from the answers according to the semantic closeness between each answer and the target question, and feeding back the answer request, comprises:
selecting a target answer from the answers according to the semantic closeness between each answer and the target question and the text closeness between each answer and the target question, and feeding back the answer request.
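The claim above combines semantic closeness with a text closeness between each answer and the target question, without fixing either measure or how they are combined. A sketch assuming Jaccard token overlap for the text closeness and a weighted sum with a hypothetical weight `alpha`:

```python
def text_closeness(question, answer):
    # Token-overlap (Jaccard) similarity as a hypothetical stand-in for
    # the claim's "text closeness"; the patent does not fix a measure.
    q, a = set(question.lower().split()), set(answer.lower().split())
    return len(q & a) / len(q | a) if (q | a) else 0.0

def combined_score(semantic_score, text_score, alpha=0.7):
    # Weighted combination of the two closeness values; alpha is an
    # assumed hyperparameter, not taken from the patent.
    return alpha * semantic_score + (1 - alpha) * text_score
```

The target answer would then be the candidate maximizing `combined_score` over the answer query library.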
6. A device for feeding back an answer, wherein the device comprises:
a training module, configured to train the semantic extraction parameters in a preset semantic extraction formula according to the correspondence among the questions, best answers and other answers stored in a training sample database, based on the training condition that the semantic closeness between a question and its corresponding best answer is greater than the semantic closeness between the question and the corresponding other answers, to obtain trained values of the semantic extraction parameters;
a determining module, configured to, upon receiving an answer request carrying a target question, determine the semantic closeness between each answer and the target question according to the target question, each answer in an answer query library, the semantic extraction formula and the trained values of the semantic extraction parameters;
a feedback module, configured to select a target answer from the answers according to the semantic closeness between each answer and the target question, and feed back the answer request.
7. The device according to claim 6, wherein the training module is configured to:
train the semantic extraction parameters in the preset semantic extraction formula according to the correspondence among the questions, best answers and other answers stored in the training sample database, based on the training condition of increasing the summation of the differences obtained by subtracting the semantic closeness between a question and the corresponding other answers from the semantic closeness between the question and its corresponding best answer, to obtain the trained values of the semantic extraction parameters.
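The training condition in the claim above amounts to maximizing a summation, over (question, best answer, other answer) triples, of closeness differences. A minimal sketch with a dot product as a stand-in for the patent's unspecified closeness measure:

```python
def closeness(u, v):
    # Dot product as an assumed stand-in for semantic closeness.
    return sum(x * y for x, y in zip(u, v))

def training_objective(triples):
    # Claim 7's condition: increase the summation, over all
    # (question, best_answer, other_answer) triples, of
    #   closeness(question, best) - closeness(question, other).
    return sum(closeness(q, best) - closeness(q, other)
               for q, best, other in triples)
```

Training would adjust the extraction parameters (which determine the vectors in each triple) so that this objective grows, pushing best answers closer to their questions than the alternatives.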
8. The device according to claim 6, wherein the training module is configured to:
according to each question and each answer stored in the training sample database, train the semantic extraction parameters in the preset semantic extraction formula based on the training condition of reducing the degree of difference between a sentence and the sentence obtained after performing semantic extraction and the inverse processing of semantic extraction on the sentence sequence, to obtain intermediate trained values of the semantic extraction parameters, wherein the sentence is a question or an answer;
according to the correspondence among the questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic closeness between a question and its corresponding best answer is greater than the semantic closeness between the question and the corresponding other answers, and taking the intermediate trained values of the semantic extraction parameters as initial input values, train the semantic extraction parameters in the semantic extraction formula, to obtain the trained values of the semantic extraction parameters.
9. The device according to claim 8, wherein the training module is configured to:
according to each question stored in the training sample database, train the question semantic extraction parameters in a preset question semantic extraction formula based on the training condition of reducing the degree of difference between a question and the question obtained after performing semantic extraction and the inverse processing of semantic extraction on the question sequence, to obtain intermediate trained values of the question semantic extraction parameters;
according to each answer stored in the training sample database, train the answer semantic extraction parameters in a preset answer semantic extraction formula based on the training condition of reducing the degree of difference between an answer and the answer obtained after performing semantic extraction and the inverse processing of semantic extraction on the answer sequence, to obtain intermediate trained values of the answer semantic extraction parameters;
according to the correspondence among the questions, best answers and other answers stored in the training sample database, based on the training condition that the semantic closeness between a question and its corresponding best answer is greater than the semantic closeness between the question and the corresponding other answers, and taking the intermediate trained values of the question semantic extraction parameters and the intermediate trained values of the answer semantic extraction parameters as initial input values, train the question semantic extraction parameters in the question semantic extraction formula and the answer semantic extraction parameters in the answer semantic extraction formula, to obtain the trained values of the question semantic extraction parameters and the trained values of the answer semantic extraction parameters;
and the determining module is configured to:
upon receiving an answer request carrying a target question, determine the semantic closeness between each answer and the target question according to the target question, each answer in the answer query library, the question semantic extraction formula, the answer semantic extraction formula, the trained values of the question semantic extraction parameters and the trained values of the answer semantic extraction parameters.
10. The device according to claim 6, wherein the feedback module is configured to:
select a target answer from the answers according to the semantic closeness between each answer and the target question and the text closeness between each answer and the target question, and feed back the answer request.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510316013.4A CN106294505B (en) | 2015-06-10 | 2015-06-10 | Answer feedback method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106294505A true CN106294505A (en) | 2017-01-04 |
CN106294505B CN106294505B (en) | 2020-07-07 |
Family
ID=57659324
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510316013.4A Active CN106294505B (en) | 2015-06-10 | 2015-06-10 | Answer feedback method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106294505B (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107122421A (en) * | 2017-04-05 | 2017-09-01 | 北京大学 | Information retrieval method and device |
CN110059174A (en) * | 2019-04-28 | 2019-07-26 | 科大讯飞股份有限公司 | Inquiry guidance method and device |
CN110110048A (en) * | 2019-05-10 | 2019-08-09 | 科大讯飞股份有限公司 | Inquiry guidance method and device |
CN110457440A (en) * | 2019-08-09 | 2019-11-15 | 宝宝树(北京)信息技术有限公司 | Answer feedback method, apparatus, device and medium |
CN111126862A (en) * | 2019-12-26 | 2020-05-08 | 中国银行股份有限公司 | Data processing method and device and electronic equipment |
CN111712836A (en) * | 2018-02-09 | 2020-09-25 | 易享信息技术有限公司 | Multitask learning as question and answer |
CN113505205A (en) * | 2017-01-17 | 2021-10-15 | 华为技术有限公司 | System and method for man-machine conversation |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101566998A (en) * | 2009-05-26 | 2009-10-28 | 华中师范大学 | Chinese question-answering system based on neural network |
US20120078826A1 (en) * | 2010-09-29 | 2012-03-29 | International Business Machines Corporation | Fact checking using and aiding probabilistic question answering |
CN103425635A (en) * | 2012-05-15 | 2013-12-04 | 北京百度网讯科技有限公司 | Method and device for recommending answers |
CN104572617A (en) * | 2014-12-30 | 2015-04-29 | 苏州驰声信息科技有限公司 | Oral test answer deviation detection method and device |
CN104636456A (en) * | 2015-02-03 | 2015-05-20 | 大连理工大学 | Question routing method based on word vectors |
Non-Patent Citations (2)
Title |
---|
Ye Xinghuo et al., "A Chinese Automatic Summarization Method Based on Feature Information Extraction", Computer Applications and Software * |
Ke Xiaohua et al., "Automatic Knowledge Acquisition from Discourse Corpora: Research and Application of Latent Semantic Analysis (LSA)", 2013 International Conference on Education and Teaching * |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113505205A (en) * | 2017-01-17 | 2021-10-15 | 华为技术有限公司 | System and method for man-machine conversation |
US11308405B2 (en) | 2017-01-17 | 2022-04-19 | Huawei Technologies Co., Ltd. | Human-computer dialogue method and apparatus |
CN107122421A (en) * | 2017-04-05 | 2017-09-01 | 北京大学 | Information retrieval method and device |
CN111712836A (en) * | 2018-02-09 | 2020-09-25 | 易享信息技术有限公司 | Multitask learning as question and answer |
CN111712836B (en) * | 2018-02-09 | 2023-09-19 | 硕动力公司 | Multitasking learning as question and answer |
CN110059174A (en) * | 2019-04-28 | 2019-07-26 | 科大讯飞股份有限公司 | Inquiry guidance method and device |
CN110059174B (en) * | 2019-04-28 | 2023-05-30 | 科大讯飞股份有限公司 | Query guiding method and device |
CN110110048A (en) * | 2019-05-10 | 2019-08-09 | 科大讯飞股份有限公司 | Inquiry guidance method and device |
CN110457440A (en) * | 2019-08-09 | 2019-11-15 | 宝宝树(北京)信息技术有限公司 | Answer feedback method, apparatus, device and medium |
CN111126862A (en) * | 2019-12-26 | 2020-05-08 | 中国银行股份有限公司 | Data processing method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN106294505B (en) | 2020-07-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106294505A (en) | Method and apparatus for feeding back an answer | |
CN111143536B (en) | Information extraction method based on artificial intelligence, storage medium and related device | |
CN110516085A (en) | The mutual search method of image text based on two-way attention | |
CN110532368B (en) | Question answering method, electronic equipment and computer readable storage medium | |
CN110245709A (en) | Based on deep learning and from the 3D point cloud data semantic dividing method of attention | |
CN104598611B (en) | The method and system being ranked up to search entry | |
CN105069143B (en) | Extract the method and device of keyword in document | |
CN111259130B (en) | Method and apparatus for providing reply sentence in dialog | |
CN108763535A (en) | Information acquisition method and device | |
CN101566998A (en) | Chinese question-answering system based on neural network | |
CN106407280A (en) | Query target matching method and device | |
CN107025228B (en) | Question recommendation method and equipment | |
CN111090735B (en) | Performance evaluation method of intelligent question-answering method based on knowledge graph | |
CN111309887B (en) | Method and system for training text key content extraction model | |
CN105989849A (en) | Speech enhancement method, speech recognition method, clustering method and devices | |
CN104484380A (en) | Personalized search method and personalized search device | |
CN106599194A (en) | Label determining method and device | |
CN112115716A (en) | Service discovery method, system and equipment based on multi-dimensional word vector context matching | |
CN111368096A (en) | Knowledge graph-based information analysis method, device, equipment and storage medium | |
CN110347833B (en) | Classification method for multi-round conversations | |
CN110276064A (en) | A kind of part-of-speech tagging method and device | |
Dokun et al. | Single-document summarization using latent semantic analysis | |
CN106909647A (en) | A kind of data retrieval method and device | |
CN110489740A (en) | Semantic analytic method and Related product | |
CN111666770B (en) | Semantic matching method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||