CN107688608A - Intelligent voice question answering method and apparatus, computer device, and readable storage medium - Google Patents


Info

Publication number
CN107688608A
CN107688608A
Authority
CN
China
Prior art keywords
sentence
matching degree
answer
word
answer statement
Prior art date
Legal status
Pending
Application number
CN201710628166.1A
Other languages
Chinese (zh)
Inventor
闫永刚
沈亮
Current Assignee
Hefei Midea Intelligent Technologies Co Ltd
Original Assignee
Hefei Midea Intelligent Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Hefei Midea Intelligent Technologies Co Ltd filed Critical Hefei Midea Intelligent Technologies Co Ltd
Priority to CN201710628166.1A
Publication of CN107688608A
Legal status: Pending

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 — Information retrieval; database structures therefor; file system structures therefor
    • G06F16/90 — Details of database functions independent of the retrieved data types
    • G06F16/903 — Querying
    • G06F16/9032 — Query formulation
    • G06F16/90332 — Natural language query formulation or dialogue systems
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 — Handling natural language data
    • G06F40/30 — Semantic analysis

Abstract

The invention provides an intelligent voice question answering method and apparatus, a computer device, and a readable storage medium. The intelligent voice question answering method includes: acquiring a sentence to be answered; determining at least one label of the sentence to be answered; determining, in a preset database, a matching set for the sentence to be answered according to the at least one label; determining, based on a K-nearest-neighbor classification model, the k sentences in the matching set corresponding to the sentence to be answered; calculating, according to preset rules, the matching degree between the sentence to be answered and each of the k sentences; and sorting the k sentences by matching degree and outputting, in that order, the answer information corresponding to the k sentences as the answer information for the sentence to be answered. This technical scheme narrows the scope of answer extraction, reduces the amount of computation in the extraction process, saves computing resources, improves the accuracy of answer extraction, and raises the intelligence level of the voice response.

Description

Intelligent voice question answering method and apparatus, computer device, and readable storage medium
Technical field
The present invention relates to the technical field of human-computer interaction, and in particular to an intelligent voice question answering method, an intelligent voice question answering apparatus, a computer device, and a computer-readable storage medium.
Background technology
An intelligent question answering system is a new type of information service system that integrates natural language processing, information retrieval, semantic analysis, and artificial intelligence techniques.
In the related art, intelligent question answering systems are mostly based on open-domain and traditional question answering frameworks, which have the following technical defects:
(1) An open-domain question answering system needs an ultra-large-scale knowledge base as its foundation. Applied directly to a restricted domain, it suffers from cold-start problems and failures to match correct answers. Moreover, users' queries are mostly colloquial statements that differ completely from the models generated from restricted-domain data, so it is difficult to obtain the answer the user needs directly.
(2) Most current question answering systems cannot precisely understand the intent of the user's natural-language question, and the user's emotional tendency is not fully captured.
(3) Answer extraction is mostly based on word-frequency similarity or other complicated machine-learning models. This kind of answer extraction is not only computationally intensive, but its precision also drops sharply when handling highly sparse questions.
Summary of the invention
The present invention aims to solve at least one of the technical problems existing in the prior art or the related art.
Therefore, one object of the present invention is to provide an intelligent voice question answering method.
Another object of the present invention is to provide an intelligent voice question answering apparatus.
Yet another object of the present invention is to provide a computer device.
A further object of the present invention is to provide a computer-readable storage medium.
To achieve these goals, the technical scheme of the first aspect of the present invention provides an intelligent voice question answering method, including: acquiring a sentence to be answered; determining at least one label of the sentence to be answered; determining, in a preset database, a matching set for the sentence to be answered according to the at least one label; determining, based on a K-nearest-neighbor classification model, the k sentences in the matching set corresponding to the sentence to be answered; calculating, according to preset rules, the matching degree between the sentence to be answered and each of the k sentences; and sorting the k sentences by matching degree and outputting, in that order, the answer information corresponding to the k sentences as the answer information for the sentence to be answered.
In this technical scheme, determining at least one label of the acquired sentence to be answered, and then determining the matching set in the preset database according to the at least one label, improves the accuracy of recognizing the user's query intent, narrows the extraction scope of the answer information, and improves the accuracy of answer extraction. Determining, based on the K-nearest-neighbor classification model, the k sentences in the matching set corresponding to the sentence to be answered further locates the k nearest matching sentences and helps save back-end computing resources. Calculating, according to preset rules, the matching degree between the sentence to be answered and each of the k sentences improves the accuracy of the matching degree. Sorting the k sentences by matching degree and outputting the corresponding answer information in that order further improves the accuracy of answer extraction and thus raises the intelligence level of the voice response.
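As an illustrative sketch only (not the patent's implementation), the retrieval step can be pictured as picking the k most similar sentences from the label-filtered matching set. Here a simple bag-of-words cosine similarity stands in for the K-nearest-neighbor model, and all names and data are hypothetical:

```python
from collections import Counter
import math

def bow_cosine(a, b):
    """Cosine similarity between two token lists (bag-of-words)."""
    ca, cb = Counter(a), Counter(b)
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def k_nearest(query_tokens, matching_set, k=3):
    """Return the k sentences of the label-filtered matching set most
    similar to the query (a stand-in for the KNN classification step)."""
    scored = [(bow_cosine(query_tokens, s.split()), s) for s in matching_set]
    scored.sort(key=lambda p: p[0], reverse=True)
    return [s for _, s in scored[:k]]

candidates = ["how do I steam rice", "how do I fry rice", "weather today"]
print(k_nearest("how to steam rice".split(), candidates, k=2))
```

The answers attached to the returned sentences would then be ranked by the full matching degree described below.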
When acquiring the sentence to be answered, the system can be set to receive voice input for a limited time, for example 30 s, so that the sentence to be answered is a short-text sentence, which helps to extract the answer information accurately.
In addition, the determined label of the sentence to be answered can be one of a syntactic label, an emotional label, and a field label, or any combination of the three.
The syntactic label is predicted by a syntactic-label prediction model built and trained in advance. The emotional label (positive or negative) can be determined from the matching degree between the words of the sentence to be answered and a preset emotion dictionary, or by an emotional-label prediction model built and trained in advance, or by combining the two approaches with preset weights. For example, if the first approach gives positive 0.8 and negative 0.2, the second gives positive 0.6 and negative 0.4, and each approach has weight 0.5, then the combined result is positive 0.7 and negative 0.3, so the emotional label of the sentence to be answered is positive. The field label is determined by a long short-term memory recurrent neural network model, a convolutional neural network model, and a softmax regression model, making full use of the LSTM's strength in capturing word-order information and the CNN's strength in feature extraction and abstraction.
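The weighted fusion of the two sentiment determinations can be sketched as follows; this is a minimal illustration of the worked example above (0.8/0.2 and 0.6/0.4 fused with weights 0.5/0.5), not the patent's code:

```python
def fuse_sentiment(scores, weights):
    """Combine (positive, negative) score pairs from several predictors
    using per-predictor weights; weights are assumed to sum to 1."""
    pos = sum(w * p for (p, _), w in zip(scores, weights))
    neg = sum(w * n for (_, n), w in zip(scores, weights))
    return pos, neg

# Dictionary-based predictor: positive 0.8 / negative 0.2;
# model-based predictor: positive 0.6 / negative 0.4; both weighted 0.5.
pos, neg = fuse_sentiment([(0.8, 0.2), (0.6, 0.4)], [0.5, 0.5])
label = "positive" if pos > neg else "negative"
print(pos, neg, label)  # 0.7 0.3 positive
```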
When determining the matching set for the sentence to be answered in the preset database according to the at least one label, a weight can be preset for each kind of label according to the actual application, for example 0.3 for the syntactic label, 0.2 for the emotional label, and 0.5 for the field label; after weighting, the sentences that satisfy a certain condition form the matching set.
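One plausible reading of this weighting, shown only as an assumption-laden sketch: each candidate sentence earns the weight of every label type on which it agrees with the query, and candidates above a threshold form the matching set. The threshold value and label names are hypothetical:

```python
def label_match_score(query_labels, candidate_labels, weights):
    """Weighted agreement between the query's labels and a candidate's
    labels; one weight per label type (e.g. syntax 0.3, emotion 0.2,
    field 0.5, as in the example above)."""
    return sum(w for kind, w in weights.items()
               if query_labels.get(kind) == candidate_labels.get(kind))

weights = {"syntax": 0.3, "emotion": 0.2, "field": 0.5}
q  = {"syntax": "question",  "emotion": "positive", "field": "cooking"}
c1 = {"syntax": "question",  "emotion": "positive", "field": "cooking"}
c2 = {"syntax": "statement", "emotion": "positive", "field": "cooking"}
threshold = 0.6  # hypothetical cut-off for entering the matching set
matching_set = [c for c in (c1, c2) if label_match_score(q, c, weights) >= threshold]
print(len(matching_set))  # both candidates clear the threshold
```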
In addition, the output answer information can be text or voice; some polite filler phrases can be blended in to increase readability before output through speech synthesis. The answer information can also be a recipe, a picture, and so on, and can be pushed in a form that matches its type.
In the above technical scheme, preferably, the preset rules include: determining the word-form matching degree between the sentence to be answered and each of the k sentences; determining the word-order matching degree between the standard word order and the sentence to be answered or each of the k sentences; and determining the matching degree according to the word-form matching degree and the word-order matching degree.
In this technical scheme, the matching degree is determined from the word-form matching degree between the sentence to be answered and each of the k sentences, together with the word-order matching degree between the standard word order and the sentence to be answered or each of the k sentences. This combines the statistical features and the semantic information of the sentence to be answered and weighs the matching degree more comprehensively and accurately.
In any of the above technical schemes, preferably, determining the matching degree according to the word-form matching degree and the word-order matching degree includes: presetting the weight of the word-form matching degree as a first weight value and the weight of the word-order matching degree as a second weight value, where the first weight value and the second weight value sum to 1;
and determining the matching degree according to a first calculation formula: SenSim(x) = λ1 × TermSim(x, y) + λ2 × Order_sim(baseline, y), computed for each of the k sentences y, where TermSim(x, y) is the word-form matching degree, λ1 is the first weight value, Order_sim(baseline, y) is the word-order matching degree, λ2 is the second weight value, and SenSim(x) is the matching degree.
In this technical scheme, the matching degree is determined by the first calculation formula, which further improves the comprehensiveness and accuracy of the determination. The weight of the word-form matching degree is the first weight value and the weight of the word-order matching degree is the second weight value, so the two weights can be adjusted conveniently according to actual needs.
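The first calculation formula is a plain weighted sum; a minimal sketch, assuming λ1 + λ2 = 1 as stated:

```python
def sen_sim(term_sim, order_sim, lambda1=0.5, lambda2=0.5):
    """First calculation formula: SenSim = λ1·TermSim + λ2·Order_sim,
    with λ1 + λ2 = 1 (default weights here are illustrative)."""
    assert abs(lambda1 + lambda2 - 1.0) < 1e-9
    return lambda1 * term_sim + lambda2 * order_sim

print(sen_sim(0.8, 0.4, lambda1=0.6, lambda2=0.4))  # 0.64
```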
To improve the accuracy with which the first calculation formula determines the matching degree, the first weight value and the second weight value can be determined from a large number of samples using a machine-learning approach, as follows:
According to the collected sample set, the sample set is iteratively updated using a machine-learning approach, starting the iteration with λ1 = 0.01 and λ2 = 0.99 for 100 iterations. During the iteration, the input and output values of the iterative model are compared, and the λ1 and λ2 with the smallest difference are taken as the first weight value of the word-form matching degree and the second weight value of the word-order matching degree. On the one hand, this realizes an optimized selection of the weight values; on the other hand, the weights can be adjusted conveniently according to actual needs, making the method easier to use.
Specifically, the matching degree is at least a linear combination of the word-form matching degree and the word-order matching degree, and λ1 (the first weight value) and λ2 (the second weight value) are determined by machine learning, mainly as follows: (1) collect a text data set and preprocess it; the preprocessing includes Chinese word segmentation, stop-word removal, text feature extraction, text deduplication, and the configuration of a custom text dictionary; (2) store the preprocessed text, label each text sample to determine the most similar text, train a most-similar-text model, and split the text into a training set and a test set according to a preset ratio, such as 7:3; (3) model and analyze the training set with a machine-learning algorithm such as KNN (k-Nearest Neighbor classification), sweep λ1′ from a starting point of 0.01 with a step of 0.01 over 100 iterations (while keeping λ1′ + λ2′ = 1), observe the range of variation of the sample accuracy, and take the λ1′ and λ2′ values at the highest accuracy as the optimal λ1 and λ2.
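The λ sweep above amounts to a one-dimensional grid search. The sketch below illustrates it under stated assumptions: labelled samples are (TermSim, Order_sim, is_match) triples, and a pair counts as a match when the combined similarity crosses 0.5 (the 0.5 threshold is an assumption, not from the patent):

```python
def grid_search_lambda(samples, step=0.01):
    """Sweep λ1 from `step` to 1 − `step` (λ2 = 1 − λ1) and keep the
    pair giving the highest accuracy on labelled
    (term_sim, order_sim, is_match) samples."""
    best = (0.0, None)
    steps = int(round(1 / step)) - 1          # 99 candidate values of λ1
    for i in range(1, steps + 1):
        lam1 = i * step
        lam2 = 1.0 - lam1
        correct = sum(((lam1 * t + lam2 * o) >= 0.5) == m
                      for t, o, m in samples)
        acc = correct / len(samples)
        if acc > best[0]:                     # keep the first best λ pair
            best = (acc, (round(lam1, 2), round(lam2, 2)))
    return best

samples = [(0.9, 0.2, True), (0.1, 0.9, False),
           (0.8, 0.3, True), (0.2, 0.4, False)]
print(grid_search_lambda(samples))
```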
In any of the above technical schemes, preferably, determining the word-form matching degree between the sentence to be answered and each of the k sentences includes: determining the word-form matching degree according to a second calculation formula: TermSim(x, y) = 2 × ts(x_t, y_t) / (s(x_t) + s(y_t)), where x is the sentence to be answered, y is any one of the k sentences, x_t is the sentence to be answered after stop-word removal, y_t is the sentence y after stop-word removal, s(x_t) is the number of effective words in x after stop-word removal, s(y_t) is the number of effective words in y after stop-word removal, ts(x_t, y_t) is the number of identical words shared by x and y after stop-word removal and deduplication, and TermSim(x, y) is the word-form matching degree.
In this technical scheme, the word-form matching degree is determined by the second calculation formula: after removing stop words and duplicate words, the effective word counts of the sentence to be answered and of each of the k sentences determine the word-form matching degree, i.e. the co-occurrence degree of the two sentences. On the one hand, the calculation is fairly simple; on the other hand, its accuracy is improved.
Specifically, stop words are words or characters that are automatically filtered out before or after processing natural-language data in information retrieval, in order to save storage space and improve retrieval efficiency. These words are entered manually rather than generated automatically, and together they form a stop-word list.
Co-occurrence degree is one of the external cohesion devices that link sentences: synonyms and related words appearing together connect the preceding and following parts of a sentence and form coherent language. Occasional connections, causal relations, and the like in external cohesion are all accompanied by term co-occurrence.
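One plausible Dice-style reading of the second calculation formula, as a hedged sketch: twice the count of distinct shared content words over the sum of the two sentences' effective word counts. The stop-word list here is an assumption chosen for illustration:

```python
def term_sim(x_tokens, y_tokens, stopwords=frozenset({"the", "a", "is"})):
    """Word-form matching degree (co-occurrence): remove stop words,
    deduplicate, then take 2·|shared| / (|x effective| + |y effective|)."""
    xs = {t for t in x_tokens if t not in stopwords}  # stop words out, then dedup
    ys = {t for t in y_tokens if t not in stopwords}
    if not xs or not ys:
        return 0.0
    return 2 * len(xs & ys) / (len(xs) + len(ys))

print(term_sim("how is the rice cooked".split(),
               "the rice is cooked well".split()))
```

With the example inputs, the shared content words are "rice" and "cooked", giving 2 × 2 / (3 + 3) = 2/3.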
In any of the above technical schemes, preferably, determining the word-order matching degree between the standard word order and the sentence to be answered or each of the k sentences includes: determining the word-order matching degree according to a third calculation formula: Order_sim(baseline, y) = 0 when y is empty, 1 when y contains only one word, and 1 − invCount(y) / maxInvCount(baseline) when y contains n words, where baseline is the standard word order of the given scenario, y is any one of the k sentences, invCount(y) is the inversion count of y relative to baseline, maxInvCount(baseline) is the maximum inversion count of baseline, and n is the word count of the sentence y.
In this technical scheme, the word-order matching degree is detected by matching the sentence to be answered or each of the k sentences against the standard word order. With a standard word order set, each sentence can be compared with it: when the sentence is empty, the word-order matching degree is 0; when it contains only one word, the matching degree is 1; and when it contains n words, the matching degree is determined from the ratio of the relative inversion count to the maximum inversion count. The determination is simple and highly reliable.
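The three cases above can be sketched with a straightforward inversion count; how unmatched words are handled and the baseline itself are assumptions for illustration:

```python
def inversion_count(order):
    """Number of index pairs appearing in the opposite order to the baseline."""
    return sum(1 for i in range(len(order))
                 for j in range(i + 1, len(order))
                 if order[i] > order[j])

def order_sim(baseline, y_tokens):
    """Word-order matching degree: 0 for an empty sentence, 1 for a
    single word, else 1 − invCount(y)/maxInvCount over the n baseline
    words present in y, with maxInvCount = n(n−1)/2."""
    positions = [baseline.index(t) for t in y_tokens if t in baseline]
    n = len(positions)
    if n == 0:
        return 0.0
    if n == 1:
        return 1.0
    max_inv = n * (n - 1) // 2
    return 1.0 - inversion_count(positions) / max_inv

base = ["please", "steam", "the", "rice"]
print(order_sim(base, ["please", "steam", "rice"]))   # in baseline order -> 1.0
print(order_sim(base, ["rice", "steam", "please"]))   # fully reversed -> 0.0
```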
In any of the above technical schemes, preferably, before determining the word-order matching degree between the standard word order and the sentence to be answered or each of the k sentences, the method also includes: determining the standard word order of the given scenario according to the attributes of the scenario; and generating the thesaurus of the given scenario according to the standard word order.
In this technical scheme, determining the standard word order of each scenario from its attributes realizes the determination of standard word orders for multiple scenarios. In vertical-domain natural-language-processing applications, a stop-word dictionary and a thesaurus can be built from the standard word order, which is highly practicable.
In any of the above technical schemes, preferably, before determining the at least one label of the sentence to be answered, the method also includes: determining whether the sentence to be answered contains a word that matches a preset scene lexicon; if it does, replacing the corresponding word in the sentence to be answered with the matching word from the preset scene lexicon; if it does not, issuing a prompt and terminating.
In this technical scheme, determining whether the sentence to be answered contains a word matching the preset scene lexicon provides an advance judgment of the scene, reduces the chance that unintentional user talk is processed, and effectively saves back-end computing resources. When the sentence does contain a matching word, replacing the corresponding word with the canonical word from the preset scene lexicon improves the standardization of the sentence, reduces the influence of colloquial statements on matching, lowers the matching difficulty, and further improves the accuracy of answer extraction.
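This pre-check and replacement can be sketched as follows; the lexicon contents and the `None` signal for "no scene word found" are illustrative assumptions:

```python
def normalize_for_scene(tokens, scene_lexicon):
    """Replace colloquial words with their canonical scene-lexicon form;
    return None when nothing matches the scene, signalling that a
    prompt should be issued and processing should stop."""
    if not any(t in scene_lexicon for t in tokens):
        return None  # no scene word found: emit a cue and terminate
    return [scene_lexicon.get(t, t) for t in tokens]

lexicon = {"nuke": "microwave", "zap": "microwave"}
print(normalize_for_scene(["nuke", "the", "rice"], lexicon))  # ['microwave', 'the', 'rice']
print(normalize_for_scene(["sing", "a", "song"], lexicon))    # None
```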
The technical scheme of the second aspect of the present invention provides an intelligent voice question answering apparatus, including: an acquiring unit for acquiring a sentence to be answered; a determining unit for determining at least one label of the sentence to be answered, for determining, in a preset database, a matching set for the sentence to be answered according to the at least one label, and for determining, based on a K-nearest-neighbor classification model, the k sentences in the matching set corresponding to the sentence to be answered; a computing unit for calculating, according to preset rules, the matching degree between the sentence to be answered and each of the k sentences; and an output unit for sorting the k sentences by matching degree and outputting, in that order, the answer information corresponding to the k sentences as the answer information for the sentence to be answered.
In the above technical scheme, preferably, the determining unit is also configured to: determine the word-form matching degree between the sentence to be answered and each of the k sentences; determine the word-order matching degree between the standard word order and the sentence to be answered or each of the k sentences; and determine the matching degree according to the word-form matching degree and the word-order matching degree.
In any of the above technical schemes, preferably, the apparatus also includes a presetting unit for presetting the weight of the word-form matching degree as a first weight value and the weight of the word-order matching degree as a second weight value, where the first weight value and the second weight value sum to 1; the determining unit is also configured to determine the matching degree according to the first calculation formula SenSim(x) = λ1 × TermSim(x, y) + λ2 × Order_sim(baseline, y), computed for each of the k sentences y, where TermSim(x, y) is the word-form matching degree, λ1 is the first weight value, Order_sim(baseline, y) is the word-order matching degree, λ2 is the second weight value, and SenSim(x) is the matching degree.
In any of the above technical schemes, preferably, the determining unit is also configured to determine the word-form matching degree according to the second calculation formula TermSim(x, y) = 2 × ts(x_t, y_t) / (s(x_t) + s(y_t)), where x is the sentence to be answered, y is any one of the k sentences, x_t is the sentence to be answered after stop-word removal, y_t is the sentence y after stop-word removal, s(x_t) is the number of effective words in x after stop-word removal, s(y_t) is the number of effective words in y after stop-word removal, ts(x_t, y_t) is the number of identical words shared by x and y after stop-word removal and deduplication, and TermSim(x, y) is the word-form matching degree.
In any of the above technical solutions, preferably, the determining unit is further configured to determine the word-order matching degree according to a third calculation formula, where the third calculation formula is Order_sim(baseline, y) = 0 when y is empty, 1 when y contains only one word, and 1 − invCount(y) / maxInvCount(baseline) when y contains n words; baseline is the standard word order of the given scene, y is any sentence among the k sentences, invCount(y) is the inversion count of y relative to baseline, maxInvCount(baseline) is the maximum inversion count of baseline, and n is the number of words in any sentence y among the k sentences.
In this technical solution, the sentence to be answered or each of the k sentences is matched against the standard word order, realizing the detection of the word-order matching degree. By setting a standard word order, the sentence to be answered or each of the k sentences can be compared with it: when the sentence is empty, the word-order matching degree is 0; when it contains only one word, the degree is 1; when it contains n words, the degree is determined from the ratio of the relative inversion count to the maximum inversion count. This realizes the determination of the word-order matching degree in a manner that is simple and highly reliable.
In any of the above technical solutions, preferably, the determining unit is further configured to determine the standard word order of a given scene according to the attributes of the scene, and the intelligent voice question-answering device further includes a generation unit configured to generate a synonym table for the given scene according to the standard word order.
In this technical solution, the standard word order of each scene is determined according to the attributes of that scene, realizing the determination of standard word orders for multiple scenes. In natural-language-processing applications in vertical domains, a stop-word dictionary and a synonym table can be built according to the standard word order, so the solution is highly practicable.
In any of the above technical solutions, preferably, the determining unit is further configured to determine whether the sentence to be answered includes a word matching a preset scene lexicon. The intelligent voice question-answering device further includes: a replacement unit configured to, when it is determined that the sentence to be answered includes a word matching the preset scene lexicon, replace the corresponding word in the sentence to be answered with the matching word in the preset scene lexicon; and a prompt unit configured to, when it is determined that the sentence to be answered includes no word matching the preset scene lexicon, issue a prompt and terminate.
In this technical solution, determining whether the sentence to be answered includes a word matching the preset scene lexicon realizes a pre-judgment of the scene, reduces the chance that unintended user talk is taken in for processing, and effectively saves back-end computing resources. When the sentence to be answered is determined to include vocabulary matching the preset scene lexicon, the corresponding vocabulary in the sentence is replaced with the matching vocabulary from the lexicon, which improves the standardization of the sentence to be answered, reduces the influence of colloquial expression on the matching process, lowers the matching difficulty, and further improves the accuracy of answer-information extraction.
A technical solution of the third aspect of the present invention provides a computer device. The computer device includes a processor, and when executing a computer program stored in a memory, the processor implements the steps of the intelligent voice answering method of any one of the technical solutions of the first aspect of the present invention.
In this technical solution, the computer device includes a processor that, when executing the computer program stored in the memory, implements the steps of the intelligent voice answering method of any one of the technical solutions of the first aspect of the present invention, and therefore has all the beneficial effects of that method, which are not repeated here.
A technical solution of the fourth aspect of the present invention provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the intelligent voice answering method of any one of the technical solutions of the first aspect of the present invention are implemented.
In this technical solution, a computer program is stored on the computer-readable storage medium, and when executed by a processor it implements the steps of the intelligent voice answering method of any one of the technical solutions of the first aspect of the present invention, and therefore has all the beneficial effects of that method, which are not repeated here.
Additional aspects and advantages of the present invention will be given in the following description, will in part become apparent from the description, or will be learned through practice of the present invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 shows a schematic flow chart of an intelligent voice answering method according to an embodiment of the present invention;
Fig. 2 shows a schematic block diagram of an intelligent voice question-answering device according to an embodiment of the present invention;
Fig. 3 shows a schematic flow chart of an intelligent voice answering method according to another embodiment of the present invention;
Fig. 4 shows a schematic flow chart of an intelligent voice answering method according to still another embodiment of the present invention.
Detailed description of the embodiments
In order that the above objects, features and advantages of the present invention can be understood more clearly, the present invention is further described in detail below in conjunction with the accompanying drawings and specific embodiments. It should be noted that, where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with one another.
Many specific details are set forth in the following description to facilitate a thorough understanding of the present invention; however, the present invention may also be implemented in other ways different from those described here, and therefore the protection scope of the present invention is not limited by the specific embodiments disclosed below.
Embodiment 1
As shown in Fig. 1, an intelligent voice answering method according to an embodiment of the present invention includes: step S102, obtaining a sentence to be answered; step S104, determining at least one label of the sentence to be answered; step S106, determining, in a preset database, a matching set of the sentence to be answered according to the at least one label; step S108, determining, based on a K-nearest-neighbor classification model, k sentences in the matching set corresponding to the sentence to be answered; step S110, calculating the matching degree between the sentence to be answered and each of the k sentences according to a preset rule; step S112, sorting the k sentences according to the magnitude of the matching degree, and outputting the answer information corresponding to the k sentences in the sorted order as the answer information of the sentence to be answered.
In this embodiment, at least one label of the obtained sentence to be answered is determined, and the matching set of the sentence is then determined in the preset database according to the at least one label, which improves the accuracy of recognizing the user's query intention, narrows the extraction scope of the answer information, and improves the accuracy of answer-information extraction. Determining, based on the K-nearest-neighbor classification model, the k sentences in the matching set corresponding to the sentence to be answered further locates the k nearest matches of the sentence, which helps save back-end computing resources. Calculating the matching degree between the sentence to be answered and each of the k sentences according to the preset rule improves the accuracy of the matching degree, and sorting the k sentences by matching degree and outputting the corresponding answer information in order as the answer information of the sentence to be answered further improves the accuracy of answer-information extraction and thus the intelligence level of the voice response.
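The flow of steps S102–S112 can be illustrated with a minimal sketch. Every name here (`extract_labels`, `matching_degree`, the toy database layout) is a hypothetical stand-in: the keyword rules replace the trained label models, and the overlap score replaces the preset matching rule and the K-nearest-neighbor model described in the embodiment.

```python
def extract_labels(sentence):
    """Placeholder for step S104: tag a sentence by simple keyword rules.
    The patent uses syntax/sentiment/domain label models instead."""
    labels = set()
    if "recipe" in sentence or "cook" in sentence:
        labels.add("cooking")
    if "?" in sentence:
        labels.add("question")
    return labels

def matching_degree(query, candidate):
    """Placeholder for step S110: word-overlap ratio standing in for the
    combined word-form / word-order matching degree of the patent."""
    q, c = set(query.lower().split()), set(candidate.lower().split())
    return len(q & c) / max(len(q | c), 1)

def answer(query, database, k=3):
    labels = extract_labels(query)                                   # S104
    candidates = [s for s in database if labels & set(s["labels"])]  # S106
    neighbors = sorted(candidates,                                   # S108
                       key=lambda s: matching_degree(query, s["text"]),
                       reverse=True)[:k]
    return [s["answer"] for s in neighbors]                          # S110-S112
```

Each database entry is assumed to carry its pre-computed labels and answer information, so the label filter (S106) prunes candidates before the more expensive per-sentence scoring (S110).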
When obtaining the sentence to be answered, a limited-time voice input can be set, for example limiting the time to 30 s, so that the sentence to be answered is a short-text sentence, which is conducive to the accurate extraction of answer information.
In addition, the determined label of the sentence to be answered may be one of a syntax label, a sentiment label and a domain label, or any combination of the three.
The syntax label is determined by prediction with a syntax-label prediction model built and trained in advance. The sentiment label may be determined as positive or negative according to the matching degree between words in the sentence to be answered and words in a preset sentiment dictionary, or by a sentiment-label prediction model built and trained in advance; the two determination modes may also be given preset weights and combined to decide whether the label is positive or negative. For example, if the first mode yields positive 0.8 and negative 0.2, the second mode yields positive 0.6 and negative 0.4, and each mode has weight 0.5, the combined result is positive 0.7 and negative 0.3, so the sentiment label of the sentence to be answered is positive. The domain label is determined by a long short-term memory recurrent neural network model, a convolutional neural network model and a softmax regression model, making full use of the strength of the long short-term memory recurrent neural network model at capturing word-order information and of the convolutional neural network model at feature extraction and abstraction.
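The weighted blending of the two sentiment-determination modes works out as in this short sketch; `combine_sentiment` and its (positive, negative) tuple interface are illustrative, not part of the patent.

```python
def combine_sentiment(scores_a, scores_b, w_a=0.5, w_b=0.5):
    """Blend two sentiment predictors' (positive, negative) scores by
    preset weights and pick the larger side, as in the worked example."""
    pos = w_a * scores_a[0] + w_b * scores_b[0]
    neg = w_a * scores_a[1] + w_b * scores_b[1]
    return ("positive" if pos >= neg else "negative"), pos, neg

# With the example numbers: dictionary mode (0.8, 0.2), model mode (0.6, 0.4),
# both weighted 0.5 -> combined positive ≈ 0.7, negative ≈ 0.3.
label, pos, neg = combine_sentiment((0.8, 0.2), (0.6, 0.4))
```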
When the matching set of the sentence to be answered is determined in the preset database according to the at least one label, a weight can be preset for each kind of label according to the actual application, for example syntax-label weight 0.3, sentiment-label weight 0.2 and domain-label weight 0.5, so that the corresponding sentences whose weighted scores meet a certain condition form the matching set.
In addition, the output answer information may be text or voice; certain softening words can be blended in to increase readability before output through speech synthesis. The answer information may also be a recipe, a picture and so on, and can be pushed accordingly depending on its type.
In the above embodiment, preferably, the preset rule includes: determining the word-form matching degree between the sentence to be answered and each of the k sentences; determining the word-order matching degree between each sentence among the sentence to be answered or the k sentences and the standard word order; and determining the matching degree according to the word-form matching degree and the word-order matching degree.
In this embodiment, the matching degree is determined from the word-form matching degree between the sentence to be answered and each of the k sentences, and from the word-order matching degree between each of those sentences and the standard word order; this combines the statistical features and the semantic information of the sentence to be answered and can measure the matching degree more comprehensively and accurately.
In any of the above embodiments, preferably, determining the matching degree according to the word-form matching degree and the word-order matching degree includes: presetting the weight of the word-form matching degree as a first weight value and the weight of the word-order matching degree as a second weight value, the first weight value and the second weight value summing to 1;
and determining the matching degree according to a first calculation formula, where the first calculation formula is SenSim(x) = λ1 × TermSim(x, y) + λ2 × Order_sim(baseline, y), evaluated for each of the k sentences y; TermSim(x, y) is the word-form matching degree, λ1 is the first weight value, Order_sim(baseline, y) is the word-order matching degree, and λ2 is the second weight value.
In this embodiment, the matching degree is determined by the first calculation formula, which further improves the comprehensiveness and accuracy of the determination. The weight of the word-form matching degree is preset as the first weight value and the weight of the word-order matching degree as the second weight value, and both weight values can be conveniently adjusted according to actual needs, making the method more convenient to use.
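A minimal sketch of the first calculation formula follows, assuming the weighted sum is evaluated per candidate sentence. The default weight values are illustrative only; the patent leaves them to be tuned.

```python
def sen_sim(term_sim, order_sim, lam1=0.6, lam2=0.4):
    """First calculation formula: weighted sum of the word-form matching
    degree (TermSim) and the word-order matching degree (Order_sim),
    with lam1 + lam2 required to equal 1."""
    assert abs(lam1 + lam2 - 1.0) < 1e-9, "weights must sum to 1"
    return lam1 * term_sim + lam2 * order_sim
```

The candidate with the highest `sen_sim` value would be ranked first when the k sentences are sorted in step S112.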
In order to improve the accuracy with which the first calculation formula determines the matching degree, the first weight value and the second weight value can be determined from a large number of samples in a machine-learning manner, specifically as follows:
According to the collected sample set, the sample set is iteratively updated in a machine-learning manner, starting the iteration with λ1 = 0.01 and λ2 = 0.99 for 100 iterations. During the iteration, the input and output values of the iterative model are compared, and the λ1 and λ2 with the smallest difference in the comparison results are taken as the first weight value of the word-form matching degree and the second weight value of the word-order matching degree. On the one hand, this realizes an optimized selection of the weight values; on the other hand, the weights can still be conveniently adjusted according to actual needs, making the method more convenient to use.
Specifically, the matching degree comprises at least a linear combination of the word-form matching degree and the word-order matching degree, and λ1 (the first weight value) and λ2 (the second weight value) are determined based on machine learning, mainly including: (1) collecting a text data set and preprocessing the text, the processing including steps such as Chinese word segmentation, stop-word removal, text feature extraction, text deduplication and text custom-dictionary configuration; (2) storing the preprocessed text, labeling each text sample to determine the most similar texts, training the most-similar-text model, and splitting the text into a training set and a test set according to a preset splitting ratio, for example 7:3; (3) modeling and analyzing the training set with a machine-learning algorithm such as KNN (k-nearest-neighbor classification), setting λ1' to start at 0.01 with a step of 0.01 over 100 iterations (while satisfying λ1' + λ2' = 1), observing the variation range of the sample precision, and taking the λ1' and λ2' values at the highest precision as the optimal λ1 and λ2.
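The grid search over λ1' described in step (3) might be sketched as follows; `evaluate` is a hypothetical stand-in for training and scoring the KNN model at a given weight pair, which the patent does not spell out in code.

```python
def pick_weights(evaluate, steps=100):
    """Sweep lam1 from 0.01 in steps of 0.01 with lam1 + lam2 = 1 and
    keep the weight pair whose evaluation score (precision) is highest."""
    best = (None, None, -float("inf"))
    for i in range(1, steps + 1):
        lam1 = round(0.01 * i, 2)
        lam2 = round(1.0 - lam1, 2)
        score = evaluate(lam1, lam2)   # e.g. KNN precision on the test set
        if score > best[2]:
            best = (lam1, lam2, score)
    return best
```

A design note: because the two weights are constrained to sum to 1, the search is one-dimensional, so 100 evaluations cover the whole grid.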
In any of the above embodiments, preferably, determining the word-form matching degree between the sentence to be answered and each of the k sentences includes: determining the word-form matching degree according to a second calculation formula, where the second calculation formula is TermSim(x, y) = 2·ts(xt, yt) / (s(xt) + s(yt)), x is the sentence to be answered, y is any sentence among the k sentences, xt is the sentence to be answered after stop words are removed, yt is any sentence among the k sentences after stop words are removed, s(xt) is the number of effective words in the sentence to be answered after stop words are removed, s(yt) is the number of effective words in the corresponding one of the k sentences after stop words are removed, ts(xt, yt) is the number of words shared by the sentence to be answered and any sentence among the k sentences after stop words and duplicate words are removed, and TermSim(x, y) is the word-form matching degree.
In this embodiment, the word-form matching degree is determined by the second calculation formula: after stop words and duplicate words are removed, the degree is determined from the numbers of effective words in the sentence to be answered and in any sentence among the k sentences, i.e., the co-occurrence degree of the two sentences. On the one hand, the calculation process is simple; on the other hand, the accuracy of the calculation is improved.
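Assuming the second calculation formula is the Dice-style overlap 2·ts(xt, yt) / (s(xt) + s(yt)) suggested by the variable definitions (the formula image itself is not reproduced in the text), a Python sketch might look like this; the stop-word list is a toy placeholder for the per-scene stop-word table.

```python
STOP_WORDS = {"the", "a", "an", "is", "of"}  # toy list; the patent builds
                                             # a stop-word table per scene

def term_sim(x, y):
    """Word-form matching degree: remove stop words from both sentences,
    count effective words s(xt), s(yt), count shared words ts(xt, yt)
    after also removing duplicates, and return 2*ts / (s(xt) + s(yt))."""
    xt = [w for w in x.lower().split() if w not in STOP_WORDS]
    yt = [w for w in y.lower().split() if w not in STOP_WORDS]
    if not xt or not yt:
        return 0.0
    ts = len(set(xt) & set(yt))          # shared words, duplicates removed
    return 2 * ts / (len(xt) + len(yt))
```

The score is 1.0 for identical effective-word lists and 0.0 when nothing is shared, matching the "co-occurrence degree" reading above.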
Specifically, stop words are words or characters that are automatically filtered out before or after natural-language data is processed in information retrieval, in order to save storage space and improve search efficiency. These words or characters are entered manually rather than generated automatically, and once generated they form a stop-word table.
Here, co-occurrence degree refers to one of the devices of external cohesion that establish links across sentence boundaries: synonyms and related words often appear together, connecting the preceding and following parts of a sentence into coherent language. Situational links, causal relations and the like in external cohesion are all accompanied by term co-occurrence.
In any of the above embodiments, preferably, determining the word-order matching degree between each sentence among the sentence to be answered or the k sentences and the standard word order includes: determining the word-order matching degree according to a third calculation formula, where the third calculation formula is Order_sim(baseline, y) = 0 when y is empty, 1 when y contains only one word, and 1 − invCount(y) / maxInvCount(baseline) when y contains n words; baseline is the standard word order of the given scene, y is any sentence among the k sentences, invCount(y) is the inversion count of y relative to baseline, maxInvCount(baseline) is the maximum inversion count of baseline, and n is the number of words in any sentence y among the k sentences.
In this embodiment, the sentence to be answered or each of the k sentences is matched against the standard word order, realizing the detection of the word-order matching degree. By setting a standard word order, the sentence to be answered or each of the k sentences can be compared with it: when the sentence is empty, the word-order matching degree is 0; when it contains only one word, the degree is 1; when it contains n words, the degree is determined from the ratio of the relative inversion count to the maximum inversion count. This realizes the determination of the word-order matching degree in a manner that is simple and highly reliable.
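Under the reading above (0 for an empty sentence, 1 for a single word, otherwise a ratio built from inversion counts), the third calculation formula can be sketched as follows. Two assumptions are labeled in the code: only words that occur in the baseline are compared, each at most once, and the maximum inversion count is taken as n(n−1)/2 for the n matched words.

```python
def order_sim(baseline, y):
    """Word-order matching degree relative to a standard word order.
    Assumptions: words absent from the baseline are ignored, each word
    occurs at most once, and maxInvCount = n*(n-1)/2 for n matched words."""
    pos = {w: i for i, w in enumerate(baseline)}   # standard positions
    words = [w for w in y if w in pos]
    n = len(words)
    if n == 0:
        return 0.0                                 # empty sentence
    if n == 1:
        return 1.0                                 # single word
    inv = sum(1 for i in range(n) for j in range(i + 1, n)
              if pos[words[i]] > pos[words[j]])    # inversion count
    max_inv = n * (n - 1) // 2
    return 1.0 - inv / max_inv
```

A sentence in the exact standard order scores 1.0, and a fully reversed sentence scores 0.0, consistent with the piecewise description.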
In any of the above embodiments, preferably, before determining the word-order matching degree between each sentence among the sentence to be answered or the k sentences and the standard word order, the method further includes: determining the standard word order of the given scene according to the attributes of the scene; and generating a synonym table for the given scene according to the standard word order.
In this embodiment, the standard word order of each scene is determined according to the attributes of that scene, realizing the determination of standard word orders for multiple scenes. In natural-language-processing applications in vertical domains, a stop-word dictionary and a synonym table can be built according to the standard word order, so the solution is highly practicable.
In any of the above embodiments, preferably, before determining the at least one label of the sentence to be answered, the method further includes: determining whether the sentence to be answered includes a word matching a preset scene lexicon; if it is determined that the sentence to be answered includes a word matching the preset scene lexicon, replacing the corresponding word in the sentence to be answered with the matching word in the preset scene lexicon; if it is determined that the sentence to be answered includes no word matching the preset scene lexicon, issuing a prompt and terminating.
In this embodiment, determining whether the sentence to be answered includes a word matching the preset scene lexicon realizes a pre-judgment of the scene, reduces the chance that unintended user talk is taken in for processing, and effectively saves back-end computing resources. When the sentence to be answered is determined to include vocabulary matching the preset scene lexicon, the corresponding vocabulary in the sentence is replaced with the matching vocabulary from the lexicon, which improves the standardization of the sentence to be answered, reduces the influence of colloquial expression on the matching process, lowers the matching difficulty, and further improves the accuracy of answer-information extraction.
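The lexicon check-and-replace step can be sketched as below; `SCENE_LEXICON` and its word-for-word mapping are illustrative assumptions about how a scene lexicon might be stored, not the patent's data structure.

```python
SCENE_LEXICON = {"fridge": "refrigerator", "veggies": "vegetables"}
# Toy mapping from colloquial terms to standardized scene vocabulary.

def normalize(sentence):
    """If any word matches the preset scene lexicon, replace the matching
    words with their standardized forms; otherwise return None so the
    caller can issue a prompt and terminate, as described above."""
    words = sentence.split()
    if not any(w in SCENE_LEXICON for w in words):
        return None                      # no scene match: prompt and stop
    return " ".join(SCENE_LEXICON.get(w, w) for w in words)
```

Running the check before label determination means off-topic input is rejected cheaply, before any model inference is spent on it.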
Embodiment 2
As shown in Fig. 2, an intelligent voice question-answering device 200 according to an embodiment of the present invention includes: an acquiring unit 202 configured to obtain a sentence to be answered; a determining unit 204 configured to determine at least one label of the sentence to be answered, further configured to determine, in a preset database, a matching set of the sentence to be answered according to the at least one label, and further configured to determine, based on a K-nearest-neighbor classification model, k sentences in the matching set corresponding to the sentence to be answered; a calculating unit 206 configured to calculate the matching degree between the sentence to be answered and each of the k sentences according to a preset rule; and an output unit 208 configured to sort the k sentences according to the magnitude of the matching degree and output the answer information corresponding to the k sentences in the sorted order as the answer information of the sentence to be answered.
In this embodiment, at least one label of the obtained sentence to be answered is determined, and the matching set of the sentence is then determined in the preset database according to the at least one label, which improves the accuracy of recognizing the user's query intention, narrows the extraction scope of the answer information, and improves the accuracy of answer-information extraction. Determining, based on the K-nearest-neighbor classification model, the k sentences in the matching set corresponding to the sentence to be answered further locates the k nearest matches of the sentence, which helps save back-end computing resources. Calculating the matching degree between the sentence to be answered and each of the k sentences according to the preset rule improves the accuracy of the matching degree, and sorting the k sentences by matching degree and outputting the corresponding answer information in order as the answer information of the sentence to be answered further improves the accuracy of answer-information extraction and thus the intelligence level of the voice response.
When obtaining the sentence to be answered, a limited-time voice input can be set, for example limiting the time to 30 s, so that the sentence to be answered is a short-text sentence, which is conducive to the accurate extraction of answer information.
In addition, the determined label of the sentence to be answered may be one of a syntax label, a sentiment label and a domain label, or any combination of the three.
The syntax label is determined by prediction with a syntax-label prediction model built and trained in advance. The sentiment label may be determined as positive or negative according to the matching degree between words in the sentence to be answered and words in a preset sentiment dictionary, or by a sentiment-label prediction model built and trained in advance; the two determination modes may also be given preset weights and combined to decide whether the label is positive or negative. For example, if the first mode yields positive 0.8 and negative 0.2, the second mode yields positive 0.6 and negative 0.4, and each mode has weight 0.5, the combined result is positive 0.7 and negative 0.3, so the sentiment label of the sentence to be answered is positive. The domain label is determined by a long short-term memory recurrent neural network model, a convolutional neural network model and a softmax regression model, making full use of the strength of the long short-term memory recurrent neural network model at capturing word-order information and of the convolutional neural network model at feature extraction and abstraction.
When the matching set of the sentence to be answered is determined in the preset database according to the at least one label, a weight can be preset for each kind of label according to the actual application, for example syntax-label weight 0.3, sentiment-label weight 0.2 and domain-label weight 0.5, so that the corresponding sentences whose weighted scores meet a certain condition form the matching set.
In addition, the output answer information may be text or voice; certain softening words can be blended in to increase readability before output through speech synthesis. The answer information may also be a recipe, a picture and so on, and can be pushed accordingly depending on its type.
In the above embodiment, preferably, the determining unit 204 is further configured to: determine the word-form matching degree between the sentence to be answered and each of the k sentences; determine the word-order matching degree between each sentence among the sentence to be answered or the k sentences and the standard word order; and determine the matching degree according to the word-form matching degree and the word-order matching degree.
In this embodiment, the matching degree is determined from the word-form matching degree between the sentence to be answered and each of the k sentences, and from the word-order matching degree between each of those sentences and the standard word order; this combines the statistical features and the semantic information of the sentence to be answered and can measure the matching degree more comprehensively and accurately.
In any of the above embodiments, preferably, the device further includes a presetting unit 210 configured to preset the weight of the word-form matching degree as a first weight value and the weight of the word-order matching degree as a second weight value, the first weight value and the second weight value summing to 1; the determining unit 204 is further configured to determine the matching degree according to a first calculation formula, where the first calculation formula is SenSim(x) = λ1 × TermSim(x, y) + λ2 × Order_sim(baseline, y), evaluated for each of the k sentences y; TermSim(x, y) is the word-form matching degree, λ1 is the first weight value, Order_sim(baseline, y) is the word-order matching degree, λ2 is the second weight value, and SenSim(x) is the matching degree.
In this embodiment, the matching degree is determined by the first calculation formula, which further improves the comprehensiveness and accuracy of the determination. The weight of the word-form matching degree is preset as the first weight value and the weight of the word-order matching degree as the second weight value, and both weight values can be conveniently adjusted according to actual needs, making the device more convenient to use.
In order to improve the accuracy with which the first calculation formula determines the matching degree, the first weight value and the second weight value can be determined from a large number of samples in a machine-learning manner, specifically as follows:
According to the collected sample set, the sample set is iteratively updated in a machine-learning manner, starting the iteration with λ1 = 0.01 and λ2 = 0.99 for 100 iterations. During the iteration, the input and output values of the iterative model are compared, and the λ1 and λ2 with the smallest difference in the comparison results are taken as the first weight value of the word-form matching degree and the second weight value of the word-order matching degree. On the one hand, this realizes an optimized selection of the weight values; on the other hand, the weights can still be conveniently adjusted according to actual needs, making the device more convenient to use.
Specifically, the matching degree comprises at least a linear combination of the word-form matching degree and the word-order matching degree, and λ1 (the first weight value) and λ2 (the second weight value) are determined based on machine learning, mainly including: (1) collecting a text data set and preprocessing the text, the processing including steps such as Chinese word segmentation, stop-word removal, text feature extraction, text deduplication and text custom-dictionary configuration; (2) storing the preprocessed text, labeling each text sample to determine the most similar texts, training the most-similar-text model, and splitting the text into a training set and a test set according to a preset splitting ratio, for example 7:3; (3) modeling and analyzing the training set with a machine-learning algorithm such as KNN (k-nearest-neighbor classification), setting λ1' to start at 0.01 with a step of 0.01 over 100 iterations (while satisfying λ1' + λ2' = 1), observing the variation range of the sample precision, and taking the λ1' and λ2' values at the highest precision as the optimal λ1 and λ2.
In any of the above embodiments, preferably, the determining unit 204 is further configured to determine the word-form matching degree according to a second calculation formula, where the second calculation formula is TermSim(x, y) = 2·ts(xt, yt) / (s(xt) + s(yt)), x is the sentence to be answered, y is any sentence among the k sentences, xt is the sentence to be answered after stop words are removed, yt is any sentence among the k sentences after stop words are removed, s(xt) is the number of effective words in the sentence to be answered after stop words are removed, s(yt) is the number of effective words in the corresponding one of the k sentences after stop words are removed, ts(xt, yt) is the number of words shared by the sentence to be answered and any sentence among the k sentences after stop words and duplicate words are removed, and TermSim(x, y) is the word-form matching degree.
In this embodiment, morphology matching degree is determined by the second calculation formula, is removing stop words and repetitor respectively Afterwards, and according to the effective word quantity for treating answer statement and any sentence in k sentence, morphology matching degree, i.e. two languages are determined The co-occurrence degree of sentence, on the one hand, calculating process is fairly simple, on the other hand improves the accuracy of calculating.
Specifically, regarding stop words: in information retrieval, to save storage space and improve retrieval efficiency, certain characters or words are automatically filtered out before or after natural-language data is processed; these characters or words are called stop words. Stop words are entered manually rather than generated automatically, and once compiled they form a stop-word list.
The co-occurrence degree refers to one of the external cohesion devices that establish links between sentences: synonyms and related words appearing together, connecting successive sentences into coherent discourse. Situational links, causal relations and the other forms of external cohesion are all accompanied by term co-occurrence.
In any of the above embodiments, preferably, the determining unit 204 is further configured to determine the word order matching degree according to a third calculation formula, where the third calculation formula is Order_sim(baseline, y) = 0 when y is empty; 1 when y contains only one word; 1 − invCount(y)/maxInvCount(baseline) when y contains n words. Here baseline is the standard word order of the given scenario, y is any sentence among the k sentences, invCount(y) is the inversion count of y relative to baseline, maxInvCount(baseline) is the maximum inversion count of baseline, and n is the word count of any sentence y among the k sentences.
In this embodiment, word order matching is detected by comparing each of the sentence to be answered and the k sentences against the standard word order. With a standard word order defined, each sentence can be compared against it: when a sentence is empty, its word order matching degree is 0; when it contains only one word, the degree is 1; when it contains n words, the degree is determined by the ratio of its relative inversion count to the maximum inversion count. This realizes the determination of the word order matching degree in a way that is simple and highly reliable.
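The piecewise rule above can be sketched as follows. One detail is an assumption: words of the candidate sentence that do not occur in the standard word order are simply dropped before counting inversions, and maxInvCount is taken as n(n−1)/2, the inversion count of a fully reversed sequence.

```python
def inv_count(seq):
    """Number of inversions (pairs out of order) in a sequence of ranks."""
    return sum(
        1
        for i in range(len(seq))
        for j in range(i + 1, len(seq))
        if seq[i] > seq[j]
    )

def order_sim(baseline, y):
    """Word order matching degree of sentence y against the standard order."""
    # map each word of y to its position in the standard word order
    ranks = [baseline.index(w) for w in y if w in baseline]
    n = len(ranks)
    if n == 0:
        return 0.0        # empty sentence
    if n == 1:
        return 1.0        # a single word is trivially in order
    max_inv = n * (n - 1) // 2   # assumed maxInvCount: fully reversed order
    return 1.0 - inv_count(ranks) / max_inv
```

A sentence that follows the standard order scores 1, a fully reversed one scores 0, and partial disorder falls in between.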
In any of the above embodiments, preferably, the determining unit 204 is further configured to determine the standard word order of the given scenario according to attributes of the given scenario; the intelligent voice question answering device 200 further includes a generation unit 212 configured to generate a thesaurus of the given scenario according to the standard word order.
In this embodiment, determining the standard word order of each scene according to its attributes enables the standard word order of multiple scenes to be determined. In natural-language-processing applications for vertical domains, a stop-word dictionary and a thesaurus can be built from the standard word order, making the scheme highly practicable.
In any of the above embodiments, preferably, the determining unit 204 is further configured to determine whether the sentence to be answered includes a word matching the preset scene lexicon; the intelligent voice question answering device 200 further includes:
a replacement unit 214 configured to, when it is determined that the sentence to be answered includes a word matching the preset scene lexicon, replace the corresponding word in the sentence to be answered with the matching word from the preset scene lexicon; and a tip unit 216 configured to, when it is determined that the sentence to be answered includes no word matching the preset scene lexicon, send a cue signal and terminate.
In this embodiment, determining whether the sentence to be answered includes a word matching the preset scene lexicon realizes a pre-judgment of the scene, reducing the chance that a user's unintentional chatter is processed and effectively saving back-end computing resources. By replacing the corresponding word in the sentence to be answered with the matching term from the preset scene lexicon, the standardization of the sentence to be answered is improved, the influence of colloquial expression on the matching process is reduced, the matching difficulty is lowered, and the accuracy of answer-information extraction is further improved.
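The pre-judgment-then-replace step can be sketched as below. The lexicon shape used here (colloquial variant mapped to a canonical scene term) is an assumption for illustration; the patent only specifies match-then-replace, or prompt-and-terminate when nothing matches.

```python
def preprocess_question(words, scene_lexicon):
    """Scene pre-judgment for a tokenized question.

    scene_lexicon: assumed mapping of colloquial variants -> standard terms.
    Returns the normalized word list, or None when no word matches the
    scene lexicon (the caller then sends the cue signal and terminates).
    """
    if not any(w in scene_lexicon for w in words):
        return None  # out-of-scene chatter: skip further processing
    # replace matching words with their canonical lexicon terms
    return [scene_lexicon.get(w, w) for w in words]
```

Filtering before any matching work is done is what saves the back-end computation: out-of-scene input never reaches the answer-extraction layers.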
Embodiment 3
A computer device according to an embodiment of the invention includes a processor, and when the processor executes a computer program stored in a memory, the steps of the intelligent voice question answering method of any of the above embodiments of the invention are realized.
In this embodiment, since the computer device includes a processor that executes the stored computer program to realize the steps of the intelligent voice question answering method of any of the above embodiments, it has all the beneficial effects of that method, which will not be repeated here.
Embodiment 4
A computer-readable storage medium according to an embodiment of the invention has a computer program stored thereon, and when the computer program is executed by a processor, the steps of the intelligent voice question answering method of any of the above embodiments of the invention are realized.
In this embodiment, since the computer-readable storage medium stores a computer program that, when executed by a processor, realizes the steps of the intelligent voice question answering method of any of the above embodiments, it has all the beneficial effects of that method, which will not be repeated here.
Embodiment 5
As shown in Fig. 3, an intelligent voice question answering method according to another embodiment of the invention obtains the user's voice input, performs speech recognition, and converts it into a Chinese character string. The text parsing layer carries out the pre-processing of Chinese word segmentation, stop-word removal, text featurization and text de-duplication. The syntactic analysis layer then performs instruction recognition and verification: if verification fails, text synthesis and then speech synthesis are performed to output a verification-failure prompt to the user by voice. If verification passes, the sentence enters the semantic analysis layer, where, according to the models built by the model application layer, scene classification, sentiment-orientation classification and synonymous keyword conversion are carried out; the sentence is then mapped to a specific scene in the scene mapping layer. The answer extraction layer retrieves the matching set for the specific scene, specific sentiment orientation and specific syntax, calculates the lexical matching degree and the word order matching degree, determines the comprehensive matching degree according to the weights λ1 and λ2, and sorts and filters by matching degree to achieve the effect of filtering answer information. Finally, the answer information is output according to its category, such as true/false answers, video, picture or text output; on output, softening words are added, text synthesis and then speech synthesis are carried out, and the voice is output to the user. After answer extraction, the user's preferences can also be determined from the answer information and stored in a user preference library as data for building models, so that the output answer information better meets the user's needs.
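The scoring-and-ranking step of this pipeline can be sketched as follows, assuming per-candidate TermSim and Order_sim values have already been computed by the earlier layers; the default λ values here are placeholders, not values from the patent.

```python
def sen_sim(term, order, l1, l2):
    """Comprehensive matching degree: SenSim = λ1·TermSim + λ2·Order_sim."""
    return l1 * term + l2 * order

def rank_answers(candidates, l1=0.6, l2=0.4):
    """candidates: list of (answer, term_sim, order_sim) triples.
    Returns the answers sorted by descending comprehensive matching degree,
    realizing the sort-and-filter step of the answer extraction layer."""
    scored = [(sen_sim(t, o, l1, l2), a) for a, t, o in candidates]
    scored.sort(key=lambda p: p[0], reverse=True)
    return [a for _, a in scored]
```

A threshold or top-n cut on the sorted list would implement the filtering of low-scoring answer information.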
Embodiment 6
As shown in Fig. 4, an intelligent voice question answering method according to yet another embodiment recognizes the voice input after obtaining it, then performs portal verification through a Chinese recognition model. If the scene condition is not met, an irrelevant scene is determined, a non-question warm-tip sentence or a question-but-out-of-scene warm-tip sentence is output, and the analysis terminates. If the scene condition is met, sentiment recognition is performed through a sentiment-orientation model, topic classification through a subject-scene classification model, and answer extraction through an answer-extraction computation model. If an exception occurs, the analysis terminates; otherwise the answer information, after softening words are synthesized according to the scene's softening-word collection, is output to the user by voice.
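The control flow of this embodiment can be sketched as a small dispatcher. All the callback names are illustrative stand-ins for the Chinese recognition, sentiment-orientation, subject-scene and answer-extraction models; the warm-tip text is likewise invented for the example.

```python
def answer_pipeline(text, in_scene, sentiment, topic, extract):
    """Fig. 4 control flow: portal verification, then classification,
    then answer extraction with a termination path on any exception."""
    # portal verification: out-of-scene input gets a warm tip and stops
    if not in_scene(text):
        return "warm tip: please ask a question about this scene"
    mood = sentiment(text)   # sentiment-orientation model
    scene = topic(text)      # subject-scene classification model
    try:
        return extract(text, mood, scene)   # answer-extraction model
    except Exception:
        return None          # extraction exception: terminate analysis
```

Returning `None` leaves the terminate-on-exception decision to the caller, mirroring the "termination analysis" branch of the figure.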
The technical scheme of the invention has been described in detail above with reference to the drawings. The invention proposes an intelligent voice question answering method, device, computer equipment and readable storage medium. By determining the matching set of the sentence to be answered according to its at least one label, and then, within the matching set, combining the lexical matching degree and the word order matching degree to extract the answer information for the sentence to be answered, the scope of answer-information extraction is narrowed, the computation of the answer extraction process is reduced, and computing resources are saved; moreover, the accuracy of answer-information extraction and the intelligence of the voice response are improved.
Steps in the method of the invention can be reordered, merged and deleted according to actual needs.
Units in the device of the invention can be combined, divided and deleted according to actual needs.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium. Such storage media include read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.
The foregoing describes only preferred embodiments of the invention and is not intended to limit the invention; for those skilled in the art, the invention may have various modifications and variations. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the invention shall be included within the scope of protection.

Claims (16)

  1. An intelligent voice question answering method, characterized by comprising:
    obtaining a sentence to be answered;
    determining at least one label of the sentence to be answered;
    in a preset database, determining a matching set of the sentence to be answered according to the at least one label;
    based on a K-nearest-neighbor classification model, determining k sentences in the matching set corresponding to the sentence to be answered;
    calculating the matching degree between the sentence to be answered and each of the k sentences according to preset rules;
    sorting the k sentences according to the magnitude of the matching degree, and outputting the answer information corresponding to the k sentences in order as the answer information of the sentence to be answered.
  2. The intelligent voice question answering method according to claim 1, characterized in that the preset rules comprise:
    determining the lexical matching degree between the sentence to be answered and each of the k sentences;
    determining the word order matching degree between the standard word order and each of the sentence to be answered and the k sentences;
    determining the matching degree according to the lexical matching degree and the word order matching degree.
  3. The intelligent voice question answering method according to claim 2, characterized in that determining the matching degree according to the lexical matching degree and the word order matching degree comprises:
    presetting the weight of the lexical matching degree as a first weighted value;
    presetting the weight of the word order matching degree as a second weighted value, wherein the first weighted value and the second weighted value sum to 1;
    determining the matching degree according to a first calculation formula,
    wherein the first calculation formula is SenSim(x) = λ1 × TermSim(x, y) + λ2 × {Order_sim(baseline, y)}k, TermSim(x, y) is the lexical matching degree, λ1 is the first weighted value, Order_sim(baseline, y) is the word order matching degree, λ2 is the second weighted value, k denotes the k sentences, and SenSim(x) is the matching degree.
  4. The intelligent voice question answering method according to claim 2, characterized in that determining the lexical matching degree between the sentence to be answered and each of the k sentences comprises:
    determining the lexical matching degree according to a second calculation formula,
    wherein the second calculation formula is TermSim(x, y) = 2 × ts(xt, yt) / (s(xt) + s(yt)), x is the sentence to be answered, y is any sentence among the k sentences, xt is the sentence to be answered after removal of stop words, yt is any sentence among the k sentences after removal of stop words, s(xt) is the number of effective words of the sentence to be answered after removal of the stop words, s(yt) is the number of effective words of the sentence y after removal of the stop words, ts(xt, yt) is the number of identical words between the sentence to be answered and any sentence among the k sentences after the stop words and repeated words are removed, and TermSim(x, y) is the lexical matching degree.
  5. The intelligent voice question answering method according to claim 2, characterized in that determining the word order matching degree between the standard word order and each of the sentence to be answered and the k sentences comprises:
    determining the word order matching degree according to a third calculation formula,
    wherein the third calculation formula is Order_sim(baseline, y) = 0 when y is empty, 1 when y contains only one word, and 1 − invCount(y)/maxInvCount(baseline) when y contains n words; the baseline is the standard word order of the given scenario, y is any sentence among the k sentences, invCount(y) is the inversion count of y relative to the baseline, maxInvCount(baseline) is the maximum inversion count of the baseline, and n is the word count of any sentence y among the k sentences.
  6. The intelligent voice question answering method according to claim 2, characterized in that, before determining the word order matching degree between the standard word order and each of the sentence to be answered and the k sentences, the method further comprises:
    determining the standard word order of the given scenario according to attributes of the given scenario;
    generating a thesaurus of the given scenario according to the standard word order.
  7. The intelligent voice question answering method according to claim 1, characterized in that, before determining the at least one label of the sentence to be answered, the method further comprises:
    determining whether the sentence to be answered includes a word matching a preset scene lexicon;
    if it is determined that the sentence to be answered includes a word matching the preset scene lexicon, replacing the corresponding word in the sentence to be answered with the matching word from the preset scene lexicon;
    if it is determined that the sentence to be answered includes no word matching the preset scene lexicon, sending a cue signal and terminating.
  8. An intelligent voice question answering device, characterized by comprising:
    an acquiring unit for obtaining a sentence to be answered;
    a determining unit for determining at least one label of the sentence to be answered;
    the determining unit being further configured to determine, in a preset database, a matching set of the sentence to be answered according to the at least one label;
    the determining unit being further configured to determine, based on a K-nearest-neighbor classification model, k sentences in the matching set corresponding to the sentence to be answered;
    a computing unit for calculating the matching degree between the sentence to be answered and each of the k sentences according to preset rules;
    an output unit for sorting the k sentences according to the magnitude of the matching degree and outputting the answer information corresponding to the k sentences in order as the answer information of the sentence to be answered.
  9. The intelligent voice question answering device according to claim 8, characterized in that:
    the determining unit is further configured to determine the lexical matching degree between the sentence to be answered and each of the k sentences;
    the determining unit is further configured to determine the word order matching degree between the standard word order and each of the sentence to be answered and the k sentences;
    the determining unit is further configured to determine the matching degree according to the lexical matching degree and the word order matching degree.
  10. The intelligent voice question answering device according to claim 9, characterized by further comprising:
    a preset unit for presetting the weight of the lexical matching degree as a first weighted value;
    the preset unit being further configured to preset the weight of the word order matching degree as a second weighted value, wherein the first weighted value and the second weighted value sum to 1;
    the determining unit being further configured to determine the matching degree according to a first calculation formula,
    wherein the first calculation formula is SenSim(x) = λ1 × TermSim(x, y) + λ2 × {Order_sim(baseline, y)}k, TermSim(x, y) is the lexical matching degree, λ1 is the first weighted value, Order_sim(baseline, y) is the word order matching degree, λ2 is the second weighted value, k denotes the k sentences, and SenSim(x) is the matching degree.
  11. The intelligent voice question answering device according to claim 9, characterized in that:
    the determining unit is further configured to determine the lexical matching degree according to a second calculation formula, wherein the second calculation formula is TermSim(x, y) = 2 × ts(xt, yt) / (s(xt) + s(yt)), x is the sentence to be answered, y is any sentence among the k sentences, xt is the sentence to be answered after removal of stop words, yt is any sentence among the k sentences after removal of stop words, s(xt) is the number of effective words of the sentence to be answered after removal of the stop words, s(yt) is the number of effective words of the sentence y after removal of the stop words, ts(xt, yt) is the number of identical words between the sentence to be answered and any sentence among the k sentences after the stop words and repeated words are removed, and TermSim(x, y) is the lexical matching degree.
  12. The intelligent voice question answering device according to claim 9, characterized in that:
    the determining unit is further configured to determine the word order matching degree according to a third calculation formula,
    wherein the third calculation formula is Order_sim(baseline, y) = 0 when y is empty, 1 when y contains only one word, and 1 − invCount(y)/maxInvCount(baseline) when y contains n words; the baseline is the standard word order of the given scenario, y is any sentence among the k sentences, invCount(y) is the inversion count of y relative to the baseline, maxInvCount(baseline) is the maximum inversion count of the baseline, and n is the word count of any sentence y among the k sentences.
  13. The intelligent voice question answering device according to claim 9, characterized in that:
    the determining unit is further configured to determine the standard word order of the given scenario according to attributes of the given scenario;
    the intelligent voice question answering device further comprises:
    a generation unit for generating a thesaurus of the given scenario according to the standard word order.
  14. The intelligent voice question answering device according to claim 8, characterized in that:
    the determining unit is further configured to determine whether the sentence to be answered includes a word matching a preset scene lexicon;
    the intelligent voice question answering device further comprises:
    a replacement unit for replacing, when it is determined that the sentence to be answered includes a word matching the preset scene lexicon, the corresponding word in the sentence to be answered with the matching word from the preset scene lexicon; and
    a tip unit for sending a cue signal and terminating when it is determined that the sentence to be answered includes no word matching the preset scene lexicon.
  15. A computer device, characterized in that the computer device includes a processor, and the processor is configured to execute a computer program stored in a memory to realize the steps of the intelligent voice question answering method according to any one of claims 1 to 7.
  16. A computer-readable storage medium having a computer program stored thereon, characterized in that the computer program, when executed by a processor, realizes the steps of the intelligent voice question answering method according to any one of claims 1 to 7.
CN201710628166.1A 2017-07-28 2017-07-28 Intelligent sound answering method, device, computer equipment and readable storage medium storing program for executing Pending CN107688608A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710628166.1A CN107688608A (en) 2017-07-28 2017-07-28 Intelligent sound answering method, device, computer equipment and readable storage medium storing program for executing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710628166.1A CN107688608A (en) 2017-07-28 2017-07-28 Intelligent sound answering method, device, computer equipment and readable storage medium storing program for executing

Publications (1)

Publication Number Publication Date
CN107688608A true CN107688608A (en) 2018-02-13

Family

ID=61153060

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710628166.1A Pending CN107688608A (en) 2017-07-28 2017-07-28 Intelligent sound answering method, device, computer equipment and readable storage medium storing program for executing

Country Status (1)

Country Link
CN (1) CN107688608A (en)

Cited By (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108897872A (en) * 2018-06-29 2018-11-27 北京百度网讯科技有限公司 Dialog process method, apparatus, computer equipment and storage medium
CN108920560A (en) * 2018-06-20 2018-11-30 腾讯科技(深圳)有限公司 Generation method, training method, device, computer-readable medium and electronic equipment
CN108932289A (en) * 2018-05-23 2018-12-04 北京华建蓝海科技有限责任公司 One kind being based on the problem of information extraction and deep learning answer treatment method and system
CN108959387A (en) * 2018-05-31 2018-12-07 科大讯飞股份有限公司 Information acquisition method and device
CN108986910A (en) * 2018-07-04 2018-12-11 平安科技(深圳)有限公司 Answering method, device, computer equipment and storage medium on line
CN109065015A (en) * 2018-07-27 2018-12-21 清华大学 A kind of collecting method, device, equipment and readable storage medium storing program for executing
CN109102809A (en) * 2018-06-22 2018-12-28 北京光年无限科技有限公司 A kind of dialogue method and system for intelligent robot
CN109189894A (en) * 2018-09-20 2019-01-11 科大讯飞股份有限公司 A kind of answer extracting method and device
CN109218843A (en) * 2018-09-27 2019-01-15 四川长虹电器股份有限公司 Individualized intelligent phonetic prompt method based on television equipment
CN109583744A (en) * 2018-11-26 2019-04-05 安徽继远软件有限公司 A kind of cross-system account matching system and method based on Chinese word segmentation
CN109710799A (en) * 2019-01-03 2019-05-03 杭州网易云音乐科技有限公司 Voice interactive method, medium, device and calculating equipment
CN110096580A (en) * 2019-04-24 2019-08-06 北京百度网讯科技有限公司 A kind of FAQ dialogue method, device and electronic equipment
CN110162176A (en) * 2019-05-20 2019-08-23 北京百度网讯科技有限公司 The method for digging and device terminal, computer-readable medium of phonetic order
CN110267051A (en) * 2019-05-16 2019-09-20 北京奇艺世纪科技有限公司 A kind of method and device of data processing
CN110334331A (en) * 2019-05-30 2019-10-15 重庆金融资产交易所有限责任公司 Method, apparatus and computer equipment based on order models screening table
CN110489740A (en) * 2019-07-12 2019-11-22 深圳追一科技有限公司 Semantic analytic method and Related product
CN110619041A (en) * 2019-09-16 2019-12-27 出门问问信息科技有限公司 Intelligent dialogue method and device and computer readable storage medium
CN110706536A (en) * 2019-10-25 2020-01-17 北京猿力未来科技有限公司 Voice answering method and device
CN110852110A (en) * 2018-07-25 2020-02-28 富士通株式会社 Target sentence extraction method, question generation method, and information processing apparatus
CN111078972A (en) * 2019-11-29 2020-04-28 支付宝(杭州)信息技术有限公司 Method and device for acquiring questioning behavior data and server
CN111144098A (en) * 2019-12-26 2020-05-12 支付宝(杭州)信息技术有限公司 Recall method and device for expanded question sentence
WO2020133360A1 (en) * 2018-12-29 2020-07-02 深圳市优必选科技有限公司 Question text matching method and apparatus, computer device and storage medium
CN111563150A (en) * 2020-04-30 2020-08-21 广东美的制冷设备有限公司 Air conditioner knowledge base updating method, air conditioner, server and system
CN111597313A (en) * 2020-04-07 2020-08-28 深圳追一科技有限公司 Question answering method, device, computer equipment and storage medium
CN111866610A (en) * 2019-04-08 2020-10-30 百度时代网络技术(北京)有限公司 Method and apparatus for generating information
CN111881695A (en) * 2020-06-12 2020-11-03 国家电网有限公司 Audit knowledge retrieval method and device
CN112668664A (en) * 2021-01-06 2021-04-16 安徽迪科数金科技有限公司 Intelligent voice-based talk training method
CN111563150B (en) * 2020-04-30 2024-04-16 广东美的制冷设备有限公司 Air conditioner knowledge base updating method, air conditioner, server and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120096029A1 (en) * 2009-06-26 2012-04-19 Nec Corporation Information analysis apparatus, information analysis method, and computer readable storage medium
CN103020295A (en) * 2012-12-28 2013-04-03 新浪网技术(中国)有限公司 Problem label marking method and device
CN103218436A (en) * 2013-04-17 2013-07-24 中国科学院自动化研究所 Similar problem retrieving method fusing user category labels and device thereof
CN104516986A (en) * 2015-01-16 2015-04-15 青岛理工大学 Method and device for recognizing sentence
CN104598445A (en) * 2013-11-01 2015-05-06 腾讯科技(深圳)有限公司 Automatic question-answering system and method
CN105760359A (en) * 2014-11-21 2016-07-13 财团法人工业技术研究院 Question processing system and method thereof
CN105843897A (en) * 2016-03-23 2016-08-10 青岛海尔软件有限公司 Vertical domain-oriented intelligent question and answer system
CN106547734A (en) * 2016-10-21 2017-03-29 上海智臻智能网络科技股份有限公司 A kind of question sentence information processing method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
唐朝霞 (Tang Zhaoxia): "Answer Extraction Algorithm for a Chinese Question Answering System with Multi-Feature Fusion", Journal of Guizhou University (Natural Science Edition) *

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108932289A (en) * 2018-05-23 2018-12-04 北京华建蓝海科技有限责任公司 One kind being based on the problem of information extraction and deep learning answer treatment method and system
CN108959387A (en) * 2018-05-31 2018-12-07 科大讯飞股份有限公司 Information acquisition method and device
CN108920560B (en) * 2018-06-20 2022-10-04 腾讯科技(深圳)有限公司 Generation method, training method, device, computer readable medium and electronic equipment
CN108920560A (en) * 2018-06-20 2018-11-30 腾讯科技(深圳)有限公司 Generation method, training method, device, computer-readable medium and electronic equipment
CN109102809A (en) * 2018-06-22 2018-12-28 北京光年无限科技有限公司 A kind of dialogue method and system for intelligent robot
CN108897872A (en) * 2018-06-29 2018-11-27 北京百度网讯科技有限公司 Dialog process method, apparatus, computer equipment and storage medium
CN108897872B (en) * 2018-06-29 2022-09-27 北京百度网讯科技有限公司 Dialogue processing method, device, computer equipment and storage medium
CN108986910A (en) * 2018-07-04 2018-12-11 平安科技(深圳)有限公司 Answering method, device, computer equipment and storage medium on line
CN108986910B (en) * 2018-07-04 2023-09-05 平安科技(深圳)有限公司 On-line question and answer method, device, computer equipment and storage medium
CN110852110B (en) * 2018-07-25 2023-08-04 富士通株式会社 Target sentence extraction method, question generation method, and information processing apparatus
CN110852110A (en) * 2018-07-25 2020-02-28 富士通株式会社 Target sentence extraction method, question generation method, and information processing apparatus
CN109065015A (en) * 2018-07-27 2018-12-21 清华大学 A kind of collecting method, device, equipment and readable storage medium storing program for executing
CN109189894A (en) * 2018-09-20 2019-01-11 科大讯飞股份有限公司 A kind of answer extracting method and device
CN109189894B (en) * 2018-09-20 2021-03-23 科大讯飞股份有限公司 Answer extraction method and device
CN109218843A (en) * 2018-09-27 2019-01-15 四川长虹电器股份有限公司 Individualized intelligent phonetic prompt method based on television equipment
CN109218843B (en) * 2018-09-27 2020-10-23 四川长虹电器股份有限公司 Personalized intelligent voice prompt method based on television equipment
CN109583744A (en) * 2018-11-26 2019-04-05 安徽继远软件有限公司 A kind of cross-system account matching system and method based on Chinese word segmentation
WO2020133360A1 (en) * 2018-12-29 2020-07-02 深圳市优必选科技有限公司 Question text matching method and apparatus, computer device and storage medium
CN109710799B (en) * 2019-01-03 2021-08-27 杭州网易云音乐科技有限公司 Voice interaction method, medium, device and computing equipment
CN109710799A (en) * 2019-01-03 2019-05-03 杭州网易云音乐科技有限公司 Voice interactive method, medium, device and calculating equipment
CN111866610A (en) * 2019-04-08 2020-10-30 百度时代网络技术(北京)有限公司 Method and apparatus for generating information
CN111866610B (en) * 2019-04-08 2022-09-30 百度时代网络技术(北京)有限公司 Method and apparatus for generating information
CN110096580A (en) * 2019-04-24 2019-08-06 北京百度网讯科技有限公司 FAQ dialogue method, device and electronic equipment
CN110096580B (en) * 2019-04-24 2022-05-24 北京百度网讯科技有限公司 FAQ conversation method and device and electronic equipment
CN110267051A (en) * 2019-05-16 2019-09-20 北京奇艺世纪科技有限公司 Data processing method and device
CN110162176B (en) * 2019-05-20 2022-04-26 北京百度网讯科技有限公司 Voice instruction mining method and device, terminal and computer readable medium
CN110162176A (en) * 2019-05-20 2019-08-23 北京百度网讯科技有限公司 Voice instruction mining method and device, terminal, and computer-readable medium
CN110334331A (en) * 2019-05-30 2019-10-15 重庆金融资产交易所有限责任公司 Method, device and computer equipment for table screening based on a ranking model
CN110489740B (en) * 2019-07-12 2023-10-24 深圳追一科技有限公司 Semantic analysis method and related product
CN110489740A (en) * 2019-07-12 2019-11-22 深圳追一科技有限公司 Semantic analysis method and related product
CN110619041A (en) * 2019-09-16 2019-12-27 出门问问信息科技有限公司 Intelligent dialogue method, device, and computer-readable storage medium
CN110706536A (en) * 2019-10-25 2020-01-17 北京猿力未来科技有限公司 Voice answering method and device
CN111078972B (en) * 2019-11-29 2023-06-16 支付宝(杭州)信息技术有限公司 Questioning behavior data acquisition method, questioning behavior data acquisition device and server
CN111078972A (en) * 2019-11-29 2020-04-28 支付宝(杭州)信息技术有限公司 Method and device for acquiring questioning behavior data and server
CN111144098B (en) * 2019-12-26 2023-05-30 支付宝(杭州)信息技术有限公司 Recall method and device for extended question
CN111144098A (en) * 2019-12-26 2020-05-12 支付宝(杭州)信息技术有限公司 Recall method and device for expanded question sentence
CN111597313A (en) * 2020-04-07 2020-08-28 深圳追一科技有限公司 Question answering method, device, computer equipment and storage medium
CN111597313B (en) * 2020-04-07 2021-03-16 深圳追一科技有限公司 Question answering method, device, computer equipment and storage medium
CN111563150A (en) * 2020-04-30 2020-08-21 广东美的制冷设备有限公司 Air conditioner knowledge base updating method, air conditioner, server and system
CN111563150B (en) * 2020-04-30 2024-04-16 广东美的制冷设备有限公司 Air conditioner knowledge base updating method, air conditioner, server and system
CN111881695A (en) * 2020-06-12 2020-11-03 国家电网有限公司 Audit knowledge retrieval method and device
CN112668664B (en) * 2021-01-06 2022-11-15 安徽迪科数金科技有限公司 Intelligent voice-based conversational training method
CN112668664A (en) * 2021-01-06 2021-04-16 安徽迪科数金科技有限公司 Intelligent voice-based conversation training method

Similar Documents

Publication Publication Date Title
CN107688608A (en) Intelligent voice question answering method, device, computer equipment and readable storage medium
CN110096570B (en) Intention identification method and device applied to intelligent customer service robot
CN108304372B (en) Entity extraction method and device, computer equipment and storage medium
CN108304375B (en) Information identification method and equipment, storage medium and terminal thereof
CN113011533A (en) Text classification method and device, computer equipment and storage medium
CN112069298A (en) Human-computer interaction method, device and medium based on semantic web and intention recognition
CN106446018B (en) Query information processing method and device based on artificial intelligence
CN112016313B (en) Spoken language element recognition method and device and warning analysis system
CN111274365A (en) Intelligent inquiry method and device based on semantic understanding, storage medium and server
CN111611814B (en) Neural machine translation method based on similarity perception
CN111414513B (en) Music genre classification method, device and storage medium
CN109992775A (en) Text summary generation method based on high-level semantics
CN111382260A (en) Method, device and storage medium for correcting retrieved text
CN111985228A (en) Text keyword extraction method and device, computer equipment and storage medium
CN111026884A (en) Dialog corpus generation method for improving quality and diversity of human-computer interaction dialog corpus
CN116628186B (en) Text abstract generation method and system
CN110717021A (en) Method and related device for obtaining input text for an artificial intelligence interview
CN112579752A (en) Entity relationship extraction method and device, storage medium and electronic equipment
CN112395891A (en) Chinese-Mongolian translation method combining Bert language model and fine-grained compression
CN110377753B (en) Relation extraction method and device based on relation trigger word and GRU model
Whittaker et al. A statistical classification approach to question answering using web data
CN115238705A (en) Semantic analysis result reordering method and system
CN111125299B (en) Dynamic word stock updating method based on user behavior analysis
CN109815490B (en) Text analysis method, device, equipment and storage medium
CN115171870A (en) Diagnosis guiding and prompting method and system based on m-BERT pre-training model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 230088 Building No. 198, Mingzhu Avenue, Anhui high tech Zone, Anhui

Applicant after: Hefei Hualing Co.,Ltd.

Address before: 230601 R & D building, No. 176, Jinxiu Road, Hefei economic and Technological Development Zone, Anhui 501

Applicant before: Hefei Hualing Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20180213