CN105893351A - Speech recognition method and device - Google Patents


Info

Publication number
CN105893351A
Authority
CN
China
Prior art keywords
word
target word
association relation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610203599.8A
Other languages
Chinese (zh)
Other versions
CN105893351B (en)
Inventor
陈晓敏
陈仲帅
李霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Group Co Ltd
Original Assignee
Hisense Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Group Co Ltd filed Critical Hisense Group Co Ltd
Priority to CN201610203599.8A priority Critical patent/CN105893351B/en
Publication of CN105893351A publication Critical patent/CN105893351A/en
Application granted granted Critical
Publication of CN105893351B publication Critical patent/CN105893351B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/205Parsing
    • G06F40/211Syntactic parsing, e.g. based on context-free grammar [CFG] or unification grammars
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00Handling natural language data
    • G06F40/20Natural language analysis
    • G06F40/279Recognition of textual entities
    • G06F40/284Lexical analysis, e.g. tokenisation or collocates

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

Embodiments of the invention provide a speech recognition method and device. The method comprises: acquiring multiple word segments of a target text corresponding to speech information; determining a target word among the segments; determining, according to the service attribute of the target word, an association relation between each non-target word among the segments and the target word; determining a relevance degree between each non-target word and the target word according to that association relation; and determining the user's service requirement according to the relevance degrees. Because the relevance degree reflects how strongly the user desires the service content corresponding to a given service attribute of the target word, and differs across service attributes, the user's real service requirement can be determined from the relevance degrees. Compared with determining the target service corresponding to the speech information solely from a core word, or solely from the number of word segments each service can recognize, this improves the accuracy of analyzing the user's requirement in the speech information.

Description

Speech recognition method and device
Technical field
Embodiments of the present invention relate to the technical field of speech recognition, and in particular to a speech recognition method and device.
Background technology
Speech recognition is a very important human-computer interaction technology, and speech recognition technology has become a research focus.
In the prior art, a core word in the speech information is obtained through lexical analysis, syntactic analysis and semantic analysis, and the target service corresponding to the speech information is determined from the core word; alternatively, multiple word segments in the speech information are obtained through the same analyses, the number of segments recognizable by each preset service is counted, and the target service corresponding to the speech information is determined by comparing those counts.
Determining the target service solely from a core word may yield multiple candidate target services, so the user must check each one in turn to decide whether it is the service actually desired. Determining the target service solely from the number of segments recognizable by each service ignores the user's intention in uttering the speech, so the selected target service may not be the one the user really wants. The prior art therefore cannot accurately extract the user's requirement from the speech information.
Summary of the invention
Embodiments of the present invention provide a speech recognition method and device for accurately analyzing the user's requirement in speech information.
One aspect of the embodiments of the present invention provides a speech recognition method, comprising:
acquiring multiple word segments of a target text corresponding to speech information;
determining a target word among the multiple segments;
determining, according to the service attribute of the target word, an association relation between each non-target word among the segments and the target word;
determining a relevance degree between each non-target word and the target word according to the association relation;
and determining the user's service requirement according to the relevance degrees.
Another aspect of the embodiments of the present invention provides a speech recognition device, comprising:
a segmentation module, configured to acquire multiple word segments of a target text corresponding to speech information;
a target-word determination module, configured to determine a target word among the multiple segments;
an association-relation determination module, configured to determine, according to the service attribute of the target word, an association relation between each non-target word among the segments and the target word;
a relevance-degree calculation module, configured to determine a relevance degree between each non-target word and the target word according to the association relation;
and a service-requirement determination module, configured to determine the user's service requirement according to the relevance degrees.
In the speech recognition method and device provided by the embodiments, the segments corresponding to the speech information are analyzed to determine the target word and the non-target words; the association relation between each non-target word and the target word is determined according to the service attribute of the target word; and the relevance degree between each non-target word and the target word is determined from that association relation, thereby establishing the relevance between a service attribute and the user's requirement. The relevance degree reflects how strongly the user desires the service content corresponding to that service attribute of the target word, and differs across service attributes, so the user's real service requirement can be determined from the relevance degrees. Compared with determining the target service corresponding to the speech information solely from a core word, or solely from the number of segments recognizable by each service, this improves the accuracy of analyzing the user's requirement in the speech information.
Brief description of the drawings
Fig. 1 is a flowchart of the speech recognition method provided by an embodiment of the present invention;
Fig. 2 is a flowchart of the speech recognition method provided by another embodiment of the present invention;
Fig. 3 is a flowchart of the speech recognition method provided by another embodiment of the present invention;
Fig. 4 is a flowchart of the speech recognition method provided by another embodiment of the present invention;
Fig. 5 is a structural diagram of the speech recognition device provided by an embodiment of the present invention;
Fig. 6 is a structural diagram of the speech recognition device provided by another embodiment of the present invention.
Detailed description of embodiments
Fig. 1 is a flowchart of the speech recognition method provided by an embodiment of the present invention. In the prior art, the keywords of the speech information are analyzed and expanded search keywords are derived from them; retrieval using the keywords or the expanded keywords may return multiple results, and the user must check each result in turn to judge whether it is the query result actually wanted, which reduces the precision of recognizing the user's requirement. To address this technical problem, an embodiment of the present invention provides a speech recognition method. The executive entity of the method may be a speech recognition device, or a terminal device or server that performs the method. Specifically, the terminal device may be a smart TV, a smartphone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), or a similar terminal device. The speech recognition device may be the central processing unit (CPU) of the terminal device or server, or a control unit or functional module within the terminal device or server.
The speech recognition method provided by the embodiment comprises the following steps:
Step S101: acquire multiple word segments of the target text corresponding to the speech information.
The executive entity of the embodiment may be a device with sufficient processing capability, such as a server. The server communicates wirelessly with devices that have a communication function, such as a mobile terminal. The mobile terminal records the natural language input by the user and sends the corresponding speech information to the server; the server converts the speech information into the target text, using any conversion method in the prior art, and then performs segmentation on the target text to obtain multiple word segments. The embodiment does not limit the segmentation method; for example, forward maximum matching may be used. For instance, if the target text corresponding to the user's natural language input is "I want to listen to Little Apple", forward maximum matching decomposes it into "I", "want", "listen", "Little Apple".
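The forward-maximum-matching segmentation mentioned above can be sketched as follows, using the patent's own example sentence 我想听小苹果 ("I want to listen to Little Apple"); the dictionary contents and window size here are illustrative assumptions, not part of the patent:

```python
def forward_max_match(text, dictionary, max_len=3):
    """Greedy forward maximum matching over characters: at each position,
    take the longest dictionary entry starting there, else one character."""
    segments = []
    i = 0
    while i < len(text):
        match = text[i]  # fall back to a single character
        for length in range(min(max_len, len(text) - i), 1, -1):
            if text[i:i + length] in dictionary:
                match = text[i:i + length]
                break
        segments.append(match)
        i += len(match)
    return segments

# "Little Apple" (小苹果) is the only multi-character dictionary entry here.
print(forward_max_match("我想听小苹果", {"小苹果"}))
# → ['我', '想', '听', '小苹果']
```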
Step S102: determine the target word among the multiple segments.
The server stores multiple services, for example a video service, a music service and a novel service, and determines the target words supported by each service. For example, the video service includes film and television programs such as "Langya Bang" and "Little Apple"; the music service includes songs such as "Infatuation Water" and "Little Apple"; and the novel service includes novels such as "Langya Bang" and "Perfect World". Then "Langya Bang" and "Little Apple" are target words supported by the video service, "Infatuation Water" and "Little Apple" are target words supported by the music service, and "Langya Bang" and "Perfect World" are target words supported by the novel service. The server stores the target words supported by each service, together with the service attributes corresponding to each target word, in a knowledge base. The knowledge base may therefore hold different service attributes for the same target word: for example, "Little Apple" has both a video attribute and a music attribute, and "Langya Bang" has both a video attribute and a novel attribute.
After the server obtains the multiple segments of the target text, it queries the knowledge base with each segment. If a segment matches a target word in the knowledge base, that segment is taken as a target word of the target text. For example, the target text "I want to listen to Little Apple" is decomposed into the four segments "I", "want", "listen", "Little Apple"; the segment "Little Apple" is a target word in the knowledge base, while "I", "want" and "listen" are not. Thus "Little Apple" is the target word of the target text "I want to listen to Little Apple", and "I", "want" and "listen" are its non-target words.
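The knowledge-base lookup that splits the segments into target and non-target words, as described above, might look like this; the knowledge-base contents and function name are hypothetical:

```python
# Hypothetical knowledge base: target word -> set of service attributes.
knowledge_base = {
    "Little Apple": {"music", "video"},
    "Langya Bang": {"video", "novel"},
    "Perfect World": {"novel"},
}

def split_words(segments, knowledge_base):
    """Partition the segments: those matching a knowledge-base target
    word become target words, the rest become non-target words."""
    targets = [s for s in segments if s in knowledge_base]
    non_targets = [s for s in segments if s not in knowledge_base]
    return targets, non_targets

targets, non_targets = split_words(
    ["I", "want", "listen", "Little Apple"], knowledge_base)
print(targets)      # ['Little Apple']
print(non_targets)  # ['I', 'want', 'listen']
```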
Step S103: determine, according to the service attribute of the target word, the association relation between each non-target word among the segments and the target word.
In the knowledge base, associated words are stored for each service attribute of each target word. The associated words include qualifiers, verbs, measure words, interrogatives, intention words, personal pronouns and so on. For example, for the video attribute of "Langya Bang": "Hu Ge" is a qualifier of "Langya Bang", "watch" is a verb of "Langya Bang", "episode 8" is a measure word of "Langya Bang", "is" and "whether" are interrogatives of "Langya Bang", "want" and "hope" are intention words of "Langya Bang", and "I" is a personal pronoun of "Langya Bang".
Moreover, the type of an associated word determines its association relation with the target word. For example, "Hu Ge" is a qualifier of "Langya Bang", so "Hu Ge" and "Langya Bang" are in a qualifier relation; "watch" is a verb of "Langya Bang", so "watch" and "Langya Bang" are in a verb relation; "episode 8" is a measure word of "Langya Bang", so "episode 8" and "Langya Bang" are in a measure-word relation.
Similarly, for the target word "Little Apple", the knowledge base stores the associated words corresponding to its video attribute and the associated words corresponding to its music attribute, and the two sets may differ. For example, under the music attribute of "Little Apple", "listen" is a verb of "Little Apple" and "watch" is not; under the video attribute, "watch" is a verb of "Little Apple" and "listen" is not. The type of an associated word of "Little Apple" thus determines its association relation with "Little Apple" under each attribute.
For example, under the music attribute of "Little Apple", the knowledge base determines that the non-target word "I" is a personal pronoun of "Little Apple", so the association relation between "I" and "Little Apple" is a personal-pronoun relation; the non-target word "want" is an intention word of "Little Apple", so the relation between "want" and "Little Apple" is an intention-word relation; and the non-target word "listen" is a verb of "Little Apple", so the relation between "listen" and "Little Apple" is a verb relation.
Under the video attribute of "Little Apple", "I" is again a personal pronoun of "Little Apple" and "want" is again an intention word, but "listen" is not a verb of "Little Apple", so there is no association relation between "listen" and "Little Apple".
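The attribute-dependent association relations just described can be sketched as follows; the associated-word lists and relation labels are illustrative assumptions, not the patent's actual knowledge base:

```python
# Hypothetical per-attribute associated words for "Little Apple";
# word classes follow the description (personal pronoun, intention, verb).
associated = {
    "music": {"I": "personal_pronoun", "want": "intention", "listen": "verb"},
    "video": {"I": "personal_pronoun", "want": "intention", "watch": "verb"},
}

def relations_per_attribute(non_targets, associated):
    """For each service attribute of the target word, map every
    non-target word to its relation (None when no relation exists)."""
    return {attr: {w: table.get(w) for w in non_targets}
            for attr, table in associated.items()}

rels = relations_per_attribute(["I", "want", "listen"], associated)
print(rels["music"]["listen"])  # 'verb'
print(rels["video"]["listen"])  # None
```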
Step S104: determine the relevance degree between each non-target word and the target word according to their association relation.
For the music attribute of "Little Apple", relevance degree 1 between "I", "want", "listen" and "Little Apple" is determined from the association relations obtained in step S103; for the video attribute of "Little Apple", relevance degree 2 is determined in the same way. The concrete method for computing the relevance degree is described in detail in the following embodiments.
Step S105: determine the user's service requirement according to the relevance degree between each non-target word and the target word.
For different service attributes of the same target word, the non-target words yield different relevance degrees to the target word. For example, under the music attribute of "Little Apple" the relation between "listen" and "Little Apple" is a verb relation, whereas under the video attribute there is no relation between them, so relevance degree 1 and relevance degree 2 determined in step S104 differ. The user's service requirement can be determined by comparing their sizes: if relevance degree 1 exceeds relevance degree 2, the user's requirement is the song "Little Apple", and the server directly returns the song file of "Little Apple" to the mobile terminal.
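Comparing the per-attribute relevance degrees to pick the service, as in step S105, reduces to an argmax; the numeric scores below are purely illustrative stand-ins for the step S104 computation:

```python
def choose_service(relevance_by_attribute):
    """Pick the service attribute whose relevance degree is highest;
    the scores themselves would come from the step S104 computation."""
    return max(relevance_by_attribute, key=relevance_by_attribute.get)

# Hypothetical relevance degrees for "I want to listen to Little Apple":
scores = {"music": 3.0, "video": 2.0}  # illustrative values only
print(choose_service(scores))  # 'music' -> return the song file
```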
By contrast, in the prior art, semantic and syntactic analysis of the target text "I want to listen to Little Apple" would determine the keyword "Little Apple", or expand it into additional search keywords such as "video" and "music"; searching with "Little Apple", "video" and "music" then returns multiple results, including both the song file and the video file of "Little Apple", and the user must select the query result actually wanted from the multiple retrieval results. The embodiment of the present invention instead accurately recognizes that the user most desires the song "Little Apple", and the server directly returns the song file of "Little Apple" to the mobile terminal.
By analyzing the segments corresponding to the speech information, the embodiment determines the target word and the non-target words, determines the association relation between each non-target word and the target word according to the service attribute of the target word, and determines the relevance degree between each non-target word and the target word from that relation, thereby establishing the relevance between a service attribute and the user's requirement. The relevance degree reflects how strongly the user desires the service content corresponding to that service attribute of the target word, and differs across service attributes, so the user's real service requirement can be determined by comparing relevance degrees. Compared with determining the target service corresponding to the speech information solely from a core word, or solely from the number of segments recognizable by each service, this improves the accuracy of analyzing the user's requirement in the speech information.
Fig. 2 is a flowchart of the speech recognition method provided by another embodiment of the present invention. On the basis of the above embodiment, the method comprises the following steps:
Step S201: acquire multiple word segments of the target text corresponding to the speech information.
Step S202: determine the target word among the multiple segments.
Step S201 is identical in method to step S101, and step S202 to step S102; details are not repeated here.
Step S203: according to the service attribute of the target word, determine the association relation between each non-target word on the left of the target word and the target word, and/or the association relation between each non-target word on the right of the target word and the target word.
In the above embodiment, the target text is "I want to listen to Little Apple", the target word is "Little Apple", and the non-target words are "I", "want", "listen". The association relations between the non-target words and the target word are determined in order starting from the left side of "Little Apple": first the relation between "listen" and "Little Apple", then between "want" and "Little Apple", then between "I" and "Little Apple".
As another example, if the target text is "Langya Bang episode 1", the target word is "Langya Bang" and the non-target word is "episode 1"; the relation between "episode 1" and "Langya Bang" is determined starting from the right side of "Langya Bang".
As a further example, if the target text is "Hu Ge's Langya Bang episode 1", the target word is "Langya Bang" and the non-target words are "Hu Ge" and "episode 1"; centered on "Langya Bang", the relation of each non-target word to the target word is determined from the left and the right respectively. The method for determining each association relation is the same as in step S103 and is not repeated here.
Step S204: determine the relevance degree between each non-target word and the target word according to their association relation.
Step S205: determine the user's service requirement according to the relevance degrees.
Step S204 is identical in method to step S104, and step S205 to step S105.
In this embodiment, the association relations between the non-target words and the target word are determined starting from the left side of the target word, and/or starting from its right side, which guarantees an orderly determination of the relation of each non-target word to the target word.
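The left-then-right traversal order of step S203 can be sketched as follows; the function name and list representation are assumptions:

```python
def relations_around_target(segments, target):
    """Walk outward from the target word: first the non-target words on
    its left (nearest first), then those on its right, preserving the
    order in which the association relations are examined."""
    idx = segments.index(target)
    left = segments[:idx][::-1]   # nearest-to-target first
    right = segments[idx + 1:]
    return left, right

left, right = relations_around_target(
    ["I", "want", "listen", "Little Apple"], "Little Apple")
print(left)   # ['listen', 'want', 'I']
print(right)  # []
```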
Fig. 3 is a flowchart of the speech recognition method provided by another embodiment of the present invention. On the basis of the above embodiments, the method comprises the following steps:
Step S301: acquire multiple word segments of the target text corresponding to the speech information.
Step S302: determine the target word among the multiple segments.
Step S301 is identical in method to step S101, and step S302 to step S102; details are not repeated here.
Step S303: query a relation table according to each non-target word among the segments and the target word, the relation table containing the dependency relations between reference non-target words and the target word.
In the embodiment of the present invention, the relation table is shown in Table 1:
Table 1
The relation table contains target words, the service attributes corresponding to each target word, and, for each service attribute of a target word, multiple reference non-target words. The reference non-target words include qualifiers, verbs, measure words, interrogatives and intention words, and each class may contain multiple words. The relation table may be the knowledge base of the above embodiments. The embodiment does not limit the classes of reference non-target words; for example, they may also include coordinate words and phrases, adjectives, adverbs and so on. Moreover, the class of a reference non-target word determines its dependency relation with the target word: for example, the dependency relation between a qualifier and "Little Apple" is a qualifier relation, and that between a verb and "Little Apple" is a verb relation.
For the target text "I want to listen to Little Apple", the relation table is queried with the non-target words "I", "want", "listen" and the target word "Little Apple", to determine whether each service attribute of "Little Apple" in the table contains a reference non-target word matching "I", "want" or "listen".
Step S304: if a non-target word matches a reference non-target word in the relation table, the association relation between the non-target word and the target word is the dependency relation between that reference non-target word and the target word.
For example, under the music attribute of "Little Apple", the personal pronouns in the relation table include "I", so the association relation between "I" and "Little Apple" is a personal-pronoun relation; the intention words include "want", so the relation between "want" and "Little Apple" is an intention-word relation; and the verbs include "listen", so the relation between "listen" and "Little Apple" is a verb relation.
Under the video attribute of "Little Apple", the personal pronouns again include "I" and the intention words include "want", but the verbs do not include "listen", so there is no association relation between "listen" and "Little Apple".
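A minimal sketch of the Table 1 lookup of steps S303 and S304 follows; the table contents are hypothetical, since Table 1 itself is not reproduced in this text:

```python
# A sketch of Table 1: reference non-target words grouped by class for
# each (target word, service attribute) pair; all entries hypothetical.
table1 = {
    ("Little Apple", "music"): {
        "qualifier": set(), "verb": {"listen", "play"}, "measure": set(),
        "interrogative": {"whether"}, "intention": {"want"},
        "personal_pronoun": {"I"},
    },
}

def dependency(non_target, target, attribute, table):
    """Step S304: a non-target word that matches a reference non-target
    word inherits that word's dependency relation with the target word."""
    for rel, words in table.get((target, attribute), {}).items():
        if non_target in words:
            return rel
    return None

print(dependency("listen", "Little Apple", "music", table1))  # 'verb'
print(dependency("watch", "Little Apple", "music", table1))   # None
```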
Step S305: determine the relevance degree between each non-target word and the target word according to their association relation.
Step S306: determine the user's service requirement according to the relevance degrees.
Step S305 is identical in method to step S104, and step S306 to step S105.
By determining the association relation between each non-target word and the target word through a relation-table lookup, this embodiment improves the efficiency of finding the association relations.
Fig. 4 is a flowchart of the speech recognition method provided by another embodiment of the present invention. On the basis of any of the above embodiments, determining the relevance degree between each non-target word and the target word according to their association relation specifically comprises the following steps:
Step S401: determine, according to the association relation between each non-target word and the target word, a first score corresponding to the association relation.
The first score comprises a first factor and a second factor; the first factor is an influence-factor score, and the second factor is a matching-target-probability score.
Step S402: calculate the relevance degree between each non-target word and the target word according to the first score.
Calculating the relevance degree between each non-target word and the target word according to the first score comprises the following steps:
(1) multiply the first factor corresponding to the first non-target word in the target text by the second score of the target word to obtain a first intermediate value;
Assume the set of non-target words corresponding to the target word in the target text is {segment 1, segment 2, ..., segment N}, where N is the total number of non-target words. The influence-factor score P1 of segment 1 is multiplied by the second score of the target word. The second score of the target word is determined by the length of the target word and its degree of dependence on the non-target words. Specifically, the embodiment provides self-rules for each target word, comprising rule 1, rule 2 and rule 3: rule 1 is whether the target word needs to appear alone; rule 2 is whether the target word needs to appear together with preset non-target words; rule 3 is whether the length of the target word exceeds a threshold.
The knowledge base stores in advance the first-class target words that need to appear alone and the second-class target words that need to appear together with preset non-target words, as well as the preset non-target words required by each second-class target word. Preferably, the first-class and second-class target words do not intersect. The priorities of the three rules decrease in order, and the self score of each target word, i.e. its second score, is determined by these three rules.
Taking the target word "griggles" as an example, the second score of "griggles" has an initial value. The specific method for determining the second score of "griggles" in the target text is as follows:
Step 1: search the knowledge base according to rule 1 and determine that "griggles" belongs to the first-class target words; then judge whether the target text corresponding to the voice information input by the user contains only the word "griggles". If so, perform step 2; otherwise jump to step 3.
Step 2: the second score of "griggles" in the target text keeps its initial value; jump to step 4.
Step 3: the second score of "griggles" in the target text is set to a preset value.
For example, if the target text is "I want to listen to griggles", the target text includes words other than "griggles", so the second score of "griggles" in the target text is set to the preset value, which is lower than the initial value, and no further steps are performed.
Step 4: search the knowledge base according to rule 2 and determine that "griggles" does not belong to the second-class target words; perform step 5.
Step 5: judge whether the length of "griggles" exceeds the threshold. If so, the second score of "griggles" in the target text keeps its initial value; otherwise it is set to the preset value.
It can be seen that as long as the second score of the target word in the target text keeps its initial value, the subsequent steps continue; once the second score is set to the preset value, no further steps are performed.
Taking the target word "today" as an example, the second score of "today" has an initial value. The specific method for determining the second score of "today" in the target text is as follows:
Step 1: search the knowledge base according to rule 1 and determine that "today" does not belong to the first-class target words; go to step 2.
Step 2: search the knowledge base according to rule 2 and determine that "today" belongs to the second-class target words; further determine whether the target text contains a preset non-target word that must co-occur with the target word "today". Assume the knowledge base stores preset non-target words that co-occur with "today", including "Liu Dehua" and "Song Huiqiao". If at least one of "Liu Dehua" and "Song Huiqiao" appears in the target text, perform step 3; otherwise perform step 4.
Step 3: the second score of "today" in the target text keeps its initial value; go to step 5.
For example, if the target text is "today of Liu Dehua", the second score of "today" in the target text keeps its initial value.
Step 4: the second score of "today" in the target text is set to the preset value, which is lower than the initial value.
For example, if the target text is "tomorrow today", neither "Liu Dehua" nor "Song Huiqiao" appears in the target text, so the second score of the target word "today" in the target text "tomorrow today" is set to the preset value, and no further steps are performed.
Step 5: judge whether the length of "today" exceeds the threshold. If so, the second score of "today" in the target text keeps its initial value; otherwise it is set to the preset value.
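The two walk-throughs above follow the same three-rule flow for the second score. A minimal sketch of that flow, assuming a dict-backed knowledge base; the constants (initial value, preset value, length threshold) and the knowledge-base contents are illustrative assumptions, not values from this embodiment:

```python
# Sketch of the three-rule second-score flow described above.
# All constants and knowledge-base contents are illustrative assumptions.

INITIAL = 1.0          # initial second score
PRESET = 0.5           # preset value, lower than the initial value
LENGTH_THRESHOLD = 2   # rule 3 threshold on the target word's length

FIRST_CLASS = {"griggles"}                                # rule 1: must appear alone
SECOND_CLASS = {"today": {"Liu Dehua", "Song Huiqiao"}}   # rule 2: must co-occur

def second_score(target_word, segments):
    """Second score of target_word within the segmented target text."""
    # Rule 1 (highest priority): a first-class target word must be the only
    # word in the target text, otherwise the preset value is assigned and
    # no further rules are checked.
    if target_word in FIRST_CLASS and segments != [target_word]:
        return PRESET
    # Rule 2: a second-class target word must co-occur with at least one
    # of its preset non-target words.
    if target_word in SECOND_CLASS:
        if not SECOND_CLASS[target_word] & set(segments):
            return PRESET
    # Rule 3 (lowest priority): length check.
    return INITIAL if len(target_word) > LENGTH_THRESHOLD else PRESET
```

With these assumed values, "griggles" keeps its initial value only when it is the whole target text, and "today" keeps it only when "Liu Dehua" or "Song Huiqiao" also appears.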
After the second score of the target word is determined by the above method, the influence-factor score P1 of word 1 is multiplied by the second score of the target word to obtain the first intermediate value tempscore1. If the first intermediate value is greater than 1, it is set to 1; if it is less than 1, it is kept unchanged.
(2) Multiply the first factor corresponding to the second non-target word in the target text by the first intermediate value to obtain a second intermediate value.
The influence-factor score P2 of word 2 is multiplied by the first intermediate value tempscore1 to obtain the second intermediate value tempscore2. If the second intermediate value is greater than 1, it is set to 1; if it is less than 1, it is kept unchanged.
(3) Traverse each i-th non-target word after the second non-target word, 3≤i≤N, and multiply the first factor corresponding to the i-th non-target word by the (i-1)-th intermediate value to obtain the i-th intermediate value, where N is the sequence number of the last non-target word in the target text.
Traverse the words after word 2 in turn until word N. For the currently traversed word i, 3≤i≤N, multiply the influence-factor score Pi of word i by the (i-1)-th intermediate value tempscore(i-1) to obtain the i-th intermediate value tempscorei. If the i-th intermediate value is greater than 1, set it to 1; if it is less than 1, keep it unchanged. Continue in this way until the N-th intermediate value tempscoreN is calculated.
(4) Add the second factors corresponding to each non-target word and take the average to obtain an average value.
The match-target-probability scores of all the words in the word set are added and averaged to obtain the average value relscore.
(5) The degree of association between each non-target word and the target word equals the product of the N-th intermediate value and the average value.
The degree of association between the non-target words and the target word is finalscore = relscore * tempscoreN.
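Steps (1) through (5) above can be sketched as follows, assuming the influence-factor scores P and match-target-probability scores M of words 1..N are given as lists; the function and variable names are illustrative:

```python
# Sketch of steps (1)-(5): chain the influence-factor scores through the
# target word's second score, clipping every intermediate value at 1, then
# multiply by the average match-target-probability score.

def association_degree(P, M, target_second_score):
    """P: influence-factor scores of words 1..N; M: their
    match-target-probability scores; returns finalscore."""
    temp = min(P[0] * target_second_score, 1.0)   # tempscore1, clipped at 1
    for p in P[1:]:                               # tempscore2 .. tempscoreN
        temp = min(p * temp, 1.0)
    relscore = sum(M) / len(M)                    # average of the second factors
    return relscore * temp                        # finalscore = relscore * tempscoreN
```

For example, with P = [1.0, 1.0, 1.0], M = [0.5, 0.7, 0.9] and a second score of 1.0, every intermediate value stays at 1 and finalscore equals relscore = 0.7.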
For example, for the target text "I want to listen to griggles", the first score of the non-target word "I" includes the influence-factor score P1 and the match-target-probability score M1; the first score of the non-target word "want" includes P2 and M2; and the first score of "listen" includes P3 and M3. Multiply P1 of "I" by the second score of "griggles" to obtain the first intermediate value tempscore1 (set to 1 if greater than 1, otherwise kept unchanged); multiply P2 of "want" by tempscore1 to obtain the second intermediate value tempscore2 (clipped at 1 in the same way); and multiply P3 of "listen" by the second intermediate value tempscore2 to obtain the third intermediate value tempscore3 (clipped at 1 in the same way). Add the match-target-probability scores M1 of "I", M2 of "want" and M3 of "listen" and take the average to obtain relscore, i.e. relscore = (M1 + M2 + M3) / 3.
For the music attribute of "griggles", degree of association 1 between the non-target words and the target word "griggles" in "I want to listen to griggles" is finalscore = relscore * tempscoreN; similarly, degree of association 2 can be calculated for the film-and-television attribute of "griggles", which is not repeated here.
The target word corresponds to multiple service attributes, and each service attribute has its own degree of association. For the music attribute of "griggles", there is a verb-object relation between "listen" and "griggles"; for the film-and-television attribute of "griggles", there is no association relation between "listen" and "griggles". Therefore, degree of association 1 and degree of association 2 determined by the above method differ.
In the above embodiment, determining the business demand of the user according to the degree of association between each non-target word and the target word specifically includes: according to the degree of association corresponding to each service attribute, determining the target service attribute corresponding to the maximum degree of association, and taking the business belonging to the target service attribute as the business demanded by the user.
For the target text "I want to listen to griggles", the target word "griggles" has both a music attribute and a film-and-television attribute. For each attribute, the degree of association between the non-target words in the target text and the target word can be obtained, so the target text corresponds to two degrees of association. If both degrees of association exceed a first threshold, they are put into a list and sorted in descending order. If the degree of association corresponding to the music attribute is greater than that corresponding to the film-and-television attribute, the music attribute ranks first, and the music file of "griggles" serves as the business demand of the user. If the second-ranked degree of association in the list exceeds a second threshold, the business belonging to the corresponding service attribute serves as an alternative business demand, i.e. the video file of "griggles" serves as the alternative business demand of the user, so that when the determined business demand cannot meet the user's actual need, the system falls back to the alternative business demand and pushes it to the user.
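The selection logic described here (first threshold for entering the list, descending sort, second threshold for the alternative business) can be sketched as follows; the threshold values are illustrative assumptions:

```python
# Sketch of choosing the target service attribute and an alternative
# business demand from per-attribute degrees of association.
# Threshold values are illustrative assumptions.

FIRST_THRESHOLD = 0.3    # minimum degree of association to enter the list
SECOND_THRESHOLD = 0.5   # minimum for the second-ranked entry to be kept as alternative

def pick_business(attr_scores):
    """attr_scores maps service attribute -> degree of association."""
    # keep attributes above the first threshold, sorted in descending order
    ranked = sorted(
        ((s, a) for a, s in attr_scores.items() if s > FIRST_THRESHOLD),
        reverse=True,
    )
    if not ranked:
        return None, None
    primary = ranked[0][1]                 # maximum degree of association
    alternative = None
    if len(ranked) > 1 and ranked[1][0] > SECOND_THRESHOLD:
        alternative = ranked[1][1]         # pushed if the primary misses the need
    return primary, alternative
```

For instance, scores of 0.8 for the music attribute and 0.6 for the film-and-television attribute would yield the music business as the demand and the video business as the alternative.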
The embodiment of the present invention determines the calculation method of the degree of association between each non-target word and the target word, which improves the calculation accuracy of the degree of association and further improves the recognition accuracy of the user demand in voice information recognition.
Fig. 5 is a structural diagram of a speech recognition apparatus provided by an embodiment of the present invention. The speech recognition apparatus provided by the embodiment of the present invention can perform the processing flow provided by the speech recognition method embodiments. As shown in Fig. 5, the speech recognition apparatus 40 includes a word segmentation module 41, a target word determination module 42, an association relation determination module 43, a degree-of-association calculation module 44 and a business demand determination module 45, where the word segmentation module 41 is configured to obtain multiple segmented words of the target text corresponding to the voice information; the target word determination module 42 is configured to determine the target word among the multiple segmented words; the association relation determination module 43 is configured to determine, according to the service attribute of the target word, the association relation between each non-target word among the multiple segmented words and the target word; the degree-of-association calculation module 44 is configured to determine the degree of association between each non-target word and the target word according to the association relation between each non-target word and the target word; and the business demand determination module 45 is configured to determine the business demand of the user according to the degree of association between each non-target word and the target word.
By analyzing the segmented words corresponding to the voice information, the embodiment of the present invention determines the target word and the non-target words among the segmented words, determines the association relation between each non-target word and the target word according to the service attribute of the target word, and determines the degree of association between each non-target word and the target word according to that association relation, thereby determining the degree of association between each service attribute and the user demand. This degree of association reflects how strongly the user expects the business content corresponding to that service attribute of the target word; the degrees of association between different service attributes and the user demand differ, so the real business demand of the user can be determined by comparing their magnitudes. Compared with determining the target business corresponding to the voice information merely by core words, or merely by the number of segmented words recognizable by each business, this improves the accuracy of analyzing the user demand in the voice information.
Fig. 6 is a structural diagram of a speech recognition apparatus provided by another embodiment of the present invention. On the basis of the above embodiment, the association relation determination module 43 is specifically configured to determine the association relation between the non-target words on the left side of the target word and the target word, and/or determine the association relation between the non-target words on the right side of the target word and the target word.
Alternatively, the association relation determination module 43 includes a query unit 431 and an association relation determination unit 432, where the query unit 431 is configured to query a relation table according to each non-target word among the multiple segmented words and the target word, the relation table including dependence relations between reference non-target words and the target word; the association relation determination unit 432 is configured to, when a non-target word matches a reference non-target word in the relation table, determine that the association relation between the non-target word and the target word is the dependence relation between the reference non-target word and the target word.
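The lookup performed by query unit 431 and association relation determination unit 432 can be sketched as a keyed table; the table contents and relation labels here are invented for illustration:

```python
# Sketch of the relation-table lookup by query unit 431 and the association
# relation determination unit 432. Table contents are invented for
# illustration.

RELATION_TABLE = {
    # (reference non-target word, target word) -> dependence relation
    ("listen", "griggles"): "verb-object",
    ("Liu Dehua", "today"): "attributive",
}

def association_relation(non_target_word, target_word):
    """Return the dependence relation when the non-target word matches a
    reference non-target word for this target word, else None."""
    return RELATION_TABLE.get((non_target_word, target_word))
```

A dict lookup makes the match against the reference non-target words a constant-time operation, which is consistent with the search-efficiency point made below.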
Optionally, the degree-of-association calculation module 44 includes a first score determination unit 441 and a calculation unit 442, where the first score determination unit 441 is configured to determine the first score corresponding to the association relation according to the association relation between each non-target word and the target word; the calculation unit 442 is configured to calculate the degree of association between each non-target word and the target word according to the first score.
Preferably, the first score includes a first factor and a second factor. The calculation unit 442 is specifically configured to: multiply the first factor corresponding to the first non-target word in the target text by the second score of the target word to obtain a first intermediate value, the second score of the target word being determined by the length of the target word and the degree of dependence between the target word and the non-target words; multiply the first factor corresponding to the second non-target word in the target text by the first intermediate value to obtain a second intermediate value; traverse each i-th non-target word after the second non-target word, 3≤i≤N, and multiply the first factor corresponding to the i-th non-target word by the (i-1)-th intermediate value to obtain the i-th intermediate value, where N is the sequence number of the last non-target word in the target text; add the second factors corresponding to each non-target word and take the average to obtain an average value; and take the product of the N-th intermediate value and the average value as the degree of association between each non-target word and the target word.
The speech recognition apparatus provided by the embodiment of the present invention can specifically perform the method embodiments provided by Figs. 1, 2, 3 and 4 above; the specific functions are not repeated here.
The embodiment of the present invention determines the association relation between the non-target words and the target word starting from the left side of the target word, and/or starting from the right side of the target word, which guarantees the order of the association relations between each non-target word and the target word; determining the association relation between each non-target word and the target word by querying the relation table improves the search efficiency of the association relations; and determining the calculation method of the degree of association between each non-target word and the target word improves the calculation accuracy of the degree of association, further improving the recognition accuracy of the user demand in voice information recognition.
In summary, by analyzing the segmented words corresponding to the voice information, the embodiment of the present invention determines the target word and the non-target words among the segmented words, determines the association relation between each non-target word and the target word according to the service attribute of the target word, and determines the degree of association between each non-target word and the target word according to that association relation, thereby determining the degree of association between each service attribute and the user demand. This degree of association reflects how strongly the user expects the business content corresponding to that service attribute of the target word; the degrees of association between different service attributes and the user demand differ, so the real business demand of the user can be determined by comparing their magnitudes. Compared with determining the target business corresponding to the voice information merely by core words, or merely by the number of segmented words recognizable by each business, this improves the accuracy of analyzing the user demand in the voice information. Determining the association relation between the non-target words and the target word starting from the left side and/or the right side of the target word guarantees the order of the association relations; querying the relation table improves the search efficiency of the association relations; and the calculation method of the degree of association improves its calculation accuracy, further improving the recognition accuracy of the user demand in voice information recognition.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely schematic: the division of the units is only a logical functional division, and there may be other division modes in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically on its own, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform part of the steps of the methods described in each embodiment of the present invention. The aforementioned storage media include various media capable of storing program code, such as a USB flash disk, a portable hard drive, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the description above is illustrated only with the division of the above functional modules; in actual application, the above functions may be allocated to different functional modules as needed, i.e. the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. For the specific working process of the apparatus described above, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced, and these modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A speech recognition method, characterized by comprising:
obtaining multiple segmented words of a target text corresponding to voice information;
determining a target word among the multiple segmented words;
determining, according to a service attribute of the target word, an association relation between each non-target word among the multiple segmented words and the target word;
determining a degree of association between each non-target word and the target word according to the association relation between each non-target word and the target word;
determining a business demand of a user according to the degree of association between each non-target word and the target word.
2. The method according to claim 1, characterized in that determining the association relation between each non-target word among the multiple segmented words and the target word comprises:
determining the association relation between the non-target words on the left side of the target word and the target word, and/or determining the association relation between the non-target words on the right side of the target word and the target word.
3. The method according to claim 1, characterized in that determining the association relation between each non-target word among the multiple segmented words and the target word comprises:
querying a relation table according to each non-target word among the multiple segmented words and the target word, the relation table including dependence relations between reference non-target words and the target word;
if a non-target word matches a reference non-target word in the relation table, the association relation between the non-target word and the target word being the dependence relation between the reference non-target word and the target word.
4. The method according to any one of claims 1-3, characterized in that determining the degree of association between each non-target word and the target word according to the association relation between each non-target word and the target word comprises:
determining a first score corresponding to the association relation according to the association relation between each non-target word and the target word;
calculating the degree of association between each non-target word and the target word according to the first score.
5. The method according to claim 4, characterized in that the first score includes a first factor and a second factor;
calculating the degree of association between each non-target word and the target word according to the first score comprises:
multiplying the first factor corresponding to the first non-target word in the target text by a second score of the target word to obtain a first intermediate value, the second score of the target word being determined by the length of the target word and the degree of dependence between the target word and the non-target words;
multiplying the first factor corresponding to the second non-target word in the target text by the first intermediate value to obtain a second intermediate value;
traversing each i-th non-target word after the second non-target word, 3≤i≤N, and multiplying the first factor corresponding to the i-th non-target word by the (i-1)-th intermediate value to obtain the i-th intermediate value, where N is the sequence number of the last non-target word in the target text;
adding the second factors corresponding to each non-target word and taking the average to obtain an average value;
the degree of association between each non-target word and the target word being equal to the product of the N-th intermediate value and the average value.
6. A speech recognition apparatus, characterized by comprising:
a word segmentation module, configured to obtain multiple segmented words of a target text corresponding to voice information;
a target word determination module, configured to determine a target word among the multiple segmented words;
an association relation determination module, configured to determine, according to a service attribute of the target word, an association relation between each non-target word among the multiple segmented words and the target word;
a degree-of-association calculation module, configured to determine a degree of association between each non-target word and the target word according to the association relation between each non-target word and the target word;
a business demand determination module, configured to determine a business demand of a user according to the degree of association between each non-target word and the target word.
7. The speech recognition apparatus according to claim 6, characterized in that the association relation determination module is specifically configured to determine the association relation between the non-target words on the left side of the target word and the target word, and/or determine the association relation between the non-target words on the right side of the target word and the target word.
8. The speech recognition apparatus according to claim 7, characterized in that the association relation determination module includes a query unit and an association relation determination unit, wherein the query unit is configured to query a relation table according to each non-target word among the multiple segmented words and the target word, the relation table including dependence relations between reference non-target words and the target word; the association relation determination unit is configured to, when a non-target word matches a reference non-target word in the relation table, determine that the association relation between the non-target word and the target word is the dependence relation between the reference non-target word and the target word.
9. The speech recognition apparatus according to any one of claims 6-8, characterized in that the degree-of-association calculation module includes a first score determination unit and a calculation unit, wherein the first score determination unit is configured to determine the first score corresponding to the association relation according to the association relation between each non-target word and the target word; the calculation unit is configured to calculate the degree of association between each non-target word and the target word according to the first score.
10. The speech recognition apparatus according to claim 9, characterized in that the first score includes a first factor and a second factor; the calculation unit is specifically configured to multiply the first factor corresponding to the first non-target word in the target text by a second score of the target word to obtain a first intermediate value, the second score of the target word being determined by the length of the target word and the degree of dependence between the target word and the non-target words;
multiply the first factor corresponding to the second non-target word in the target text by the first intermediate value to obtain a second intermediate value;
traverse each i-th non-target word after the second non-target word, 3≤i≤N, and multiply the first factor corresponding to the i-th non-target word by the (i-1)-th intermediate value to obtain the i-th intermediate value, where N is the sequence number of the last non-target word in the target text;
add the second factors corresponding to each non-target word and take the average to obtain an average value;
and take the product of the N-th intermediate value and the average value as the degree of association between each non-target word and the target word.
CN201610203599.8A 2016-03-31 2016-03-31 Speech recognition method and device Active CN105893351B (en)


Publications (2)

Publication Number Publication Date
CN105893351A (en) 2016-08-24
CN105893351B (en) 2019-08-20

Family

ID=57012754

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610203599.8A Active CN105893351B (en) 2016-03-31 2016-03-31 Audio recognition method and device

Country Status (1)

Country Link
CN (1) CN105893351B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120078631A1 (en) * 2010-09-26 2012-03-29 Alibaba Group Holding Limited Recognition of target words using designated characteristic values
CN104102658A (en) * 2013-04-09 2014-10-15 腾讯科技(深圳)有限公司 Method and device for mining text contents
CN104317783A (en) * 2014-09-16 2015-01-28 北京航空航天大学 SRC calculation method
CN104866511A (en) * 2014-02-26 2015-08-26 华为技术有限公司 Method and equipment for adding multi-media files


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
P. VENKETESH 等: "Graph based Prediction Model to Improve Web Prefetching", 《INTERNATIONAL JOURNAL OF COMPUTER APPLICATIONS》 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108121721A (en) * 2016-11-28 2018-06-05 渡鸦科技(北京)有限责任公司 Intension recognizing method and device
CN108536414A (en) * 2017-03-06 2018-09-14 腾讯科技(深圳)有限公司 Method of speech processing, device and system, mobile terminal
CN108536414B (en) * 2017-03-06 2021-10-22 腾讯科技(深圳)有限公司 Voice processing method, device and system and mobile terminal
CN106911706A (en) * 2017-03-13 2017-06-30 北京小米移动软件有限公司 call background adding method and device
CN107180027A (en) * 2017-05-17 2017-09-19 海信集团有限公司 Voice command business sorting technique and device
CN107527619A (en) * 2017-08-29 2017-12-29 海信集团有限公司 The localization method and device of Voice command business
CN107527619B (en) * 2017-08-29 2021-01-05 海信集团有限公司 Method and device for positioning voice control service
CN116204568A (en) * 2023-05-04 2023-06-02 华能信息技术有限公司 Data mining analysis method
CN116204568B (en) * 2023-05-04 2023-10-03 华能信息技术有限公司 Data mining analysis method

Also Published As

Publication number Publication date
CN105893351B (en) 2019-08-20

Similar Documents

Publication Publication Date Title
CN105893351A (en) Speech recognition method and device
EP3648099B1 (en) Voice recognition method, device, apparatus, and storage medium
CN106503184B (en) Determine the method and device of the affiliated class of service of target text
CN103956169A (en) Speech input method, device and system
CN108920649B (en) Information recommendation method, device, equipment and medium
CN110321562B (en) Short text matching method and device based on BERT
CN103268313A (en) Method and device for semantic analysis of natural language
CN110415679A (en) Voice error correction method, device, equipment and storage medium
CN109710732B (en) Information query method, device, storage medium and electronic equipment
AU2017216520A1 (en) Common data repository for improving transactional efficiencies of user interactions with a computing device
CN105956053A (en) Network information-based search method and apparatus
JP2023076413A (en) Method, computer device, and computer program for providing dialogue dedicated to domain by using language model
CN108538294A (en) A kind of voice interactive method and device
CN107615270A (en) A kind of man-machine interaction method and its device
CN104199956A (en) Method for searching erp (enterprise resource planning) data voice
CN111198936B (en) Voice search method and device, electronic equipment and storage medium
CN110457454A (en) A kind of dialogue method, server, conversational system and storage medium
CN116882372A (en) Text generation method, device, electronic equipment and storage medium
CN114003682A (en) Text classification method, device, equipment and storage medium
CN112100339A (en) User intention recognition method and device for intelligent voice robot and electronic equipment
CN110580255A (en) method and system for storing and retrieving data
CN105808688B (en) Complementary retrieval method and device based on artificial intelligence
KR102053419B1 (en) Method, apparauts and system for named entity linking and computer program thereof
US9959307B2 (en) Automatic question sorting
CN109684357B (en) Information processing method and device, storage medium and terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant