CN110349568A - Speech retrieval method, apparatus, computer equipment and storage medium - Google Patents

Speech retrieval method, apparatus, computer equipment and storage medium Download PDF

Info

Publication number
CN110349568A
CN110349568A (application CN201910492599.8A)
Authority
CN
China
Prior art keywords
corpus
model
gram model
result
speech
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910492599.8A
Other languages
Chinese (zh)
Inventor
黄锦伦
陈磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201910492599.8A (patent CN110349568A)
Publication of CN110349568A
Priority to PCT/CN2019/117872 (WO2020244150A1)
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 - Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/63 - Querying
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/60 - Information retrieval; Database structures therefor; File system structures therefor of audio data
    • G06F 16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 - Handling natural language data
    • G06F 40/20 - Natural language analysis
    • G06F 40/253 - Grammatical analysis; Style critique
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/063 - Training
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/08 - Speech classification or search
    • G10L 15/18 - Speech classification or search using natural language modelling
    • G10L 15/183 - Speech classification or search using natural language modelling using context dependencies, e.g. language models
    • G10L 15/19 - Grammatical context, e.g. disambiguation of the recognition hypotheses based on word sequence rules
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00 - Speech recognition
    • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L 2015/225 - Feedback of the input speech

Abstract

The invention discloses a speech retrieval method and apparatus, a computer device, and a storage medium. The method includes: receiving a training-set corpus, and inputting the training-set corpus into an initial N-gram model for training to obtain an N-gram model; receiving speech to be recognized, and recognizing the speech to be recognized through the N-gram model to obtain a recognition result; segmenting the recognition result into words to obtain a sentence word-segmentation result corresponding to the recognition result; performing lexical analysis on the sentence word-segmentation result to obtain the noun part-of-speech keywords corresponding to the sentence word-segmentation result; and searching a pre-stored recommendation corpus for the entries whose similarity to the noun keywords exceeds a preset similarity threshold, so as to obtain a search result. By applying speech-recognition technology and extracting noun part-of-speech keywords through lexical analysis of the speech-recognition result, the method retrieves search results from the recommendation corpus more accurately according to the noun part-of-speech keywords.

Description

Speech retrieval method, apparatus, computer equipment and storage medium
Technical field
The present invention relates to the technical field of speech recognition, and in particular to a speech retrieval method and apparatus, a computer device, and a storage medium.
Background technique
At present, smart supermarkets retrieve goods through speech recognition, usually matching goods by fuzzy query. This requires analyzing the speech-recognition result to intelligently obtain the name of the product the user wants to buy. In practice, users tend to speak whole sentences, such as "I want to buy XXX" or "I want to eat XXX", and current speech-recognition systems cannot accurately judge the purchase intention from such utterances.
Summary of the invention
Embodiments of the present invention provide a speech retrieval method and apparatus, a computer device, and a storage medium, aiming to solve the problem in the prior art that the accuracy of speech recognition in supermarket scenarios is low, leading to inaccurate recognition results.
According to a first aspect, an embodiment of the present invention provides a speech retrieval method, comprising:
receiving a training-set corpus, and inputting the training-set corpus into an initial N-gram model for training to obtain an N-gram model, wherein the N-gram model is an N-element language model;
receiving speech to be recognized, and recognizing the speech to be recognized through the N-gram model to obtain a recognition result;
segmenting the recognition result into words to obtain a sentence word-segmentation result corresponding to the recognition result;
performing lexical analysis on the sentence word-segmentation result to obtain the noun part-of-speech keywords corresponding to the sentence word-segmentation result; and
searching a pre-stored recommendation corpus for the entries whose similarity to the noun keywords exceeds a preset similarity threshold, so as to obtain a search result.
According to a second aspect, an embodiment of the present invention provides a speech retrieval apparatus, comprising:
a model training unit, configured to receive a training-set corpus, input the training-set corpus into an initial N-gram model for training, and obtain an N-gram model, wherein the N-gram model is an N-element language model;
a speech recognition unit, configured to receive speech to be recognized, recognize the speech to be recognized through the N-gram model, and obtain a recognition result;
a word segmentation unit, configured to segment the recognition result and obtain a sentence word-segmentation result corresponding to the recognition result;
a part-of-speech analysis unit, configured to perform lexical analysis on the sentence word-segmentation result and obtain the noun part-of-speech keywords corresponding to the sentence word-segmentation result; and
a retrieval unit, configured to search a pre-stored recommendation corpus for the entries whose similarity to the noun keywords exceeds a preset similarity threshold, so as to obtain a search result.
According to a third aspect, an embodiment of the present invention further provides a computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the speech retrieval method described in the first aspect.
According to a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, causes the processor to perform the speech retrieval method described in the first aspect.
Embodiments of the present invention provide a speech retrieval method and apparatus, a computer device, and a storage medium. The method includes: receiving a training-set corpus, and inputting the training-set corpus into an initial N-gram model for training to obtain an N-gram model, wherein the N-gram model is an N-element language model; receiving speech to be recognized, and recognizing the speech to be recognized through the N-gram model to obtain a recognition result; segmenting the recognition result into words to obtain a sentence word-segmentation result corresponding to the recognition result; performing lexical analysis on the sentence word-segmentation result to obtain the noun part-of-speech keywords corresponding to the sentence word-segmentation result; and searching a pre-stored recommendation corpus for the entries whose similarity to the noun keywords exceeds a preset similarity threshold, so as to obtain a search result. By applying speech-recognition technology and performing lexical analysis on the speech-recognition result, the method accurately captures the user's demand.
Detailed description of the invention
To illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are merely some embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic diagram of an application scenario of the speech retrieval method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the speech retrieval method provided by an embodiment of the present invention;
Fig. 3 is a schematic sub-flowchart of the speech retrieval method provided by an embodiment of the present invention;
Fig. 4 is another schematic sub-flowchart of the speech retrieval method provided by an embodiment of the present invention;
Fig. 5 is a schematic block diagram of the speech retrieval apparatus provided by an embodiment of the present invention;
Fig. 6 is a schematic block diagram of subunits of the speech retrieval apparatus provided by an embodiment of the present invention;
Fig. 7 is another schematic block diagram of subunits of the speech retrieval apparatus provided by an embodiment of the present invention;
Fig. 8 is a schematic block diagram of the computer device provided by an embodiment of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are some rather than all of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be understood that the terms "comprising" and "including", when used in this specification and the appended claims, indicate the presence of the described features, integers, steps, operations, elements and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or sets thereof.
It should also be understood that the terminology used in this description of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the description of the invention and the appended claims, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the description of the invention and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Please refer to Fig. 1 and Fig. 2. Fig. 1 is a schematic diagram of an application scenario of the speech retrieval method provided by an embodiment of the present invention, and Fig. 2 is a schematic flowchart of the speech retrieval method. The speech retrieval method is applied in a server and is executed by application software installed in the server.
As shown in Fig. 2, the method includes steps S110 to S150.
S110: receive a training-set corpus, input the training-set corpus into an initial N-gram model for training, and obtain an N-gram model, wherein the N-gram model is an N-element language model.
In this embodiment, the technical solution is described from the perspective of the server. The server receives the training-set corpus and trains on it to obtain the N-gram model, which is then used to recognize the speech to be recognized uploaded to the server by the front-end voice-collection terminals deployed in the smart supermarket.
In this embodiment, the training-set corpus is a mixture of a general corpus and a consumer-goods corpus. The consumer-goods corpus is a corpus containing a large number of commodity names (such as brand names and product names); the general corpus differs from the consumer-goods corpus in that its vocabulary is not biased toward any specific field. By inputting the training-set corpus into the initial N-gram model for training, the N-gram model for speech recognition can be obtained.
In an embodiment, as shown in Fig. 3, step S110 includes:
S111: obtain a consumer-goods corpus, input the consumer-goods corpus into a first initial N-gram model for training, and obtain a first N-gram model;
S112: obtain a general corpus, input the general corpus into a second initial N-gram model for training, and obtain a second N-gram model;
S113: fuse the first N-gram model and the second N-gram model according to a set model-fusion ratio, and obtain the N-gram model.
In this embodiment, the consumer-goods corpus is a corpus containing a large number of commodity names. The general corpus differs from the consumer-goods corpus in that its vocabulary is not biased toward any specific field but covers all fields.
The N-gram model is a kind of language model (Language Model, LM). A language model is a probability-based discriminative model: its input is a sentence (an ordered sequence of words), and its output is the probability of that sentence, i.e. the joint probability of the words.
Suppose the sentence T is composed of a word sequence w1, w2, w3, ..., wn. The N-Gram language model can be formulated as follows:
P(T) = P(w1) * p(w2) * p(w3) * ... * p(wn)
     = p(w1) * p(w2|w1) * p(w3|w1w2) * ... * p(wn|w1w2...wn-1)
The commonly used N-Gram models are Bi-Gram and Tri-Gram, formulated respectively as follows:
Bi-Gram:
P(T) = p(w1|begin) * p(w2|w1) * p(w3|w2) * ... * p(wn|wn-1)
Tri-Gram:
P(T) = p(w1|begin1, begin2) * p(w2|w1, begin1) * p(w3|w2, w1) * ... * p(wn|wn-1, wn-2)
It can be seen that the conditional probability of each word in the sentence T can be obtained by statistical counting in the corpus. The n-gram model is then:
p(wi|wi-n+1, ..., wi-1) = C(wi-n+1, ..., wi) / C(wi-n+1, ..., wi-1)
where C(wi-n+1, ..., wi) denotes the number of occurrences of the word string wi-n+1, ..., wi in the corpus.
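As a concrete illustration (not part of the patent text), the counting formula above can be sketched in Python as a maximum-likelihood bigram model. The toy corpus and tokens here are invented for the example:

```python
from collections import Counter

def train_bigram(sentences):
    """Estimate p(w_i | w_{i-1}) = C(w_{i-1}, w_i) / C(w_{i-1}) by counting."""
    unigrams, bigrams = Counter(), Counter()
    for words in sentences:
        words = ["<begin>"] + words
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
    return {(a, b): c / unigrams[a] for (a, b), c in bigrams.items()}

def sentence_prob(model, words):
    """P(T) = p(w1|begin) * p(w2|w1) * ...; unseen bigrams get probability 0."""
    p = 1.0
    for a, b in zip(["<begin>"] + words, words):
        p *= model.get((a, b), 0.0)
    return p

corpus = [["i", "want", "noodles"], ["i", "want", "milk"]]
model = train_bigram(corpus)
print(sentence_prob(model, ["i", "want", "noodles"]))  # 1.0 * 1.0 * 0.5 = 0.5
```

A real system would add smoothing for unseen n-grams; the sketch keeps only the raw counting that the formula describes.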
According to the set model-fusion ratio, for example a consumer-goods to general-corpus ratio of 2:8, the fusion ratio of the first N-gram model to the second N-gram model is likewise 2:8; the first N-gram model and the second N-gram model are fused accordingly, and the N-gram model for speech recognition is finally obtained. Because the ratio of the consumer-goods corpus to the general corpus is set in advance, the fused N-gram model effectively improves the accuracy of speech recognition in the smart-supermarket scenario.
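The patent leaves the fusion operation abstract; one common reading of a fixed 2:8 model-fusion ratio is linear interpolation of the two models' probabilities. The sketch below assumes that interpretation, with hypothetical probability tables standing in for the two trained models:

```python
def fuse_models(model_a, model_b, ratio_a=0.2, ratio_b=0.8):
    """Linearly interpolate two n-gram probability tables at a fixed ratio (here 2:8)."""
    fused = {}
    for ngram in set(model_a) | set(model_b):
        fused[ngram] = ratio_a * model_a.get(ngram, 0.0) + ratio_b * model_b.get(ngram, 0.0)
    return fused

# invented entries: bigram -> conditional probability
consumer = {("buy", "noodles"): 0.5}
general = {("buy", "noodles"): 0.1, ("buy", "book"): 0.3}
fused = fuse_models(consumer, general)
print(fused[("buy", "noodles")])  # 0.2*0.5 + 0.8*0.1 = 0.18
```

Other fusion schemes (e.g. mixing the corpora before training rather than the trained models) would also match the patent's wording; this is only one plausible instance.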
In an embodiment, as shown in Fig. 4, step S111 includes:
S1111: segment the consumer-goods corpus with a word-segmentation model based on probability statistics, and obtain a first word-segmentation result corresponding to the consumer-goods corpus;
S1112: input the first word-segmentation result into the first initial N-gram model for training, and obtain the first N-gram model.
In this embodiment, each sentence in the consumer-goods corpus is segmented by the probability-statistics word-segmentation model as follows:
For example, let C = C1C2...Cm be the Chinese character string to be segmented, and let W = W1W2...Wn be a segmentation result, with Wa, Wb, ..., Wk being all possible segmentation schemes of C. The probability-statistics word-segmentation model then seeks the word string W that satisfies P(W|C) = MAX(P(Wa|C), P(Wb|C), ..., P(Wk|C)); in other words, the word string W obtained by the model is the one with the maximum estimated probability. Specifically:
For a substring S to be segmented, all candidate words w1, w2, ..., wi, ..., wn are taken out in order from left to right; the probability value P(wi) of each candidate word is looked up in the dictionary, and all left-adjacent words of each candidate word are recorded; the cumulative probability of each candidate word is computed, and at the same time the best left-adjacent word of each candidate word is obtained by comparison; if the current word wn is the tail word of the string S and its cumulative probability P(wn) is the maximum, then wn is the terminal word of S; starting from wn, the best left-adjacent word of each word is output in turn from right to left, which yields the word-segmentation result of S. The first word-segmentation result corresponding to the consumer-goods corpus is thus obtained; inputting the first word-segmentation result into the first initial N-gram model for training yields the first N-gram model, which has higher sentence-recognition accuracy in the smart-supermarket scenario.
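The left-to-right accumulation of probabilities and right-to-left backtrace described above amount to a standard maximum-probability dynamic program. A minimal sketch, with an invented toy dictionary in place of corpus-estimated word probabilities:

```python
def segment(s, word_prob):
    """Maximum-probability word segmentation: best[i] is the highest probability
    of any segmentation of s[:i]; back[i] records the best left boundary."""
    n = len(s)
    best = [0.0] * (n + 1)
    best[0] = 1.0
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(max(0, i - 8), i):  # candidate words up to 8 characters
            w = s[j:i]
            if w in word_prob and best[j] * word_prob[w] > best[i]:
                best[i] = best[j] * word_prob[w]
                back[i] = j
    # recover words right-to-left via the recorded best left-adjacent boundaries
    words, i = [], n
    while i > 0:
        words.append(s[back[i]:i])
        i = back[i]
    return words[::-1]

vocab = {"我": 0.1, "要": 0.1, "买": 0.1, "方便": 0.05, "面": 0.02, "方便面": 0.04, "便面": 0.001}
print(segment("我要买方便面", vocab))  # ['我', '要', '买', '方便面']
```

"方便面" (instant noodles, probability 0.04) beats the split "方便" + "面" (0.05 * 0.02 = 0.001), matching the described maximization of the cumulative probability.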
Likewise, the general corpus is segmented with the probability-statistics word-segmentation model to obtain a second word-segmentation result corresponding to the general corpus; the second word-segmentation result is input into the second initial N-gram model for training to obtain the second N-gram model, which has higher sentence-recognition accuracy in everyday common scenarios (i.e. a higher recognition rate on sentences not biased toward any particular living scenario).
S120: receive speech to be recognized, recognize the speech to be recognized through the N-gram model, and obtain a recognition result.
When the speech to be recognized is recognized through the N-gram model, what is recognized is a whole sentence, for example "I want to buy XX-brand instant noodles". The N-gram model can effectively recognize the speech to be recognized, and the sentence with the maximum recognition probability is obtained as the recognition result.
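The patent does not describe the decoder internals; as a simplified stand-in for "the sentence with the maximum recognition probability", the sketch below rescores hypothetical candidate transcriptions with a bigram table and keeps the best one. All names and probabilities are assumptions for illustration:

```python
def best_candidate(candidates, bigram_prob):
    """Pick the candidate word sequence with the highest bigram language-model
    score, standing in for the decoder's maximum-probability sentence."""
    def score(words):
        p = 1.0
        for a, b in zip(["<begin>"] + words, words):
            p *= bigram_prob.get((a, b), 1e-8)  # small floor for unseen bigrams
        return p
    return max(candidates, key=score)

lm = {("<begin>", "i"): 0.9, ("i", "want"): 0.8,
      ("want", "noodles"): 0.6, ("want", "noodle"): 0.01}
cands = [["i", "want", "noodles"], ["i", "want", "noodle"]]
print(best_candidate(cands, lm))  # ['i', 'want', 'noodles']
```

In a full recognizer the language-model score would be combined with an acoustic-model score; only the LM side is shown here.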
S130: segment the recognition result into words, and obtain a sentence word-segmentation result corresponding to the recognition result.
In an embodiment, step S130 includes:
segmenting the recognition result with the probability-statistics word-segmentation model to obtain the sentence word-segmentation result corresponding to the recognition result.
In this embodiment, segmenting the recognition result in step S130 also uses the probability-statistics word-segmentation model; the detailed segmentation procedure can refer to step S1111. After the recognition result is segmented, part-of-speech analysis can be further performed.
S140: perform lexical analysis on the sentence word-segmentation result, and obtain the noun part-of-speech keywords corresponding to the sentence word-segmentation result.
In an embodiment, step S140 includes:
taking the sentence word-segmentation result as the input of a pre-trained joint lexical analysis model, and obtaining the noun part-of-speech keywords in the sentence word-segmentation result.
In this embodiment, the process of performing lexical analysis through the joint lexical analysis model is as follows:
The input of the lexical analysis task is a character string (referred to as a "sentence" below), and the output is the word boundaries, parts of speech, and entity categories in the sentence. Sequence labeling is the classical modeling approach to lexical analysis. When building the joint lexical analysis model (i.e. the LAC model), a network structure based on GRUs (gated recurrent units) is used to learn features, and the learned features are fed into a CRF decoding layer (CRF, i.e. conditional random field) to complete the sequence labeling. The CRF decoding layer essentially replaces the linear model of a traditional CRF with a nonlinear neural network and works with sentence-level likelihood, so it can better solve the label-bias problem.
The input of the joint lexical analysis model is represented in one-hot form: each word is represented by an id. The one-hot sequence is converted via the vocabulary table into a sequence of real-valued word vectors; this word-vector sequence is the input of a bidirectional GRU, which learns a feature representation of the input sequence and produces a new feature sequence, with two bidirectional GRU layers stacked to increase learning capacity. The CRF takes the features learned by the GRU as input and the label sequence as the supervisory signal, realizing part-of-speech tagging of each word in the sentence word-segmentation result. Since, in the smart-supermarket scenario, a noun keyword is more likely to be a brand name or product name, the noun part-of-speech keywords corresponding to the sentence word-segmentation result are selected as the screening result for the further retrieval of goods.
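Downstream of the tagger, selecting the noun part-of-speech keywords reduces to filtering the tagger's (word, tag) pairs. A minimal sketch, assuming a common Chinese tag set in which noun tags begin with "n" (the tagger output shown is invented, not produced by the LAC model itself):

```python
def noun_keywords(tagged):
    """Keep only noun-tagged words (tags beginning with 'n', e.g. 'n' for common
    nouns and 'nz' for proper nouns such as brand names) from (word, tag) pairs."""
    return [w for w, t in tagged if t.startswith("n")]

# hypothetical tagger output for "我要买XX牌方便面" ("I want to buy XX-brand instant noodles")
tagged = [("我", "r"), ("要", "v"), ("买", "v"), ("XX牌", "nz"), ("方便面", "n")]
print(noun_keywords(tagged))  # ['XX牌', '方便面']
```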
S150: search the pre-stored recommendation corpus for the entries whose similarity to the noun keywords exceeds the preset similarity threshold, so as to obtain a search result.
In this embodiment, once the noun part-of-speech keywords are obtained, each noun part-of-speech keyword is searched for in the preset recommendation corpus to obtain the words with a high degree of similarity to that keyword, which serve as the search result. Specifically, the word vector corresponding to each noun keyword is obtained through a Word2Vec model (a Word2Vec model is an efficient tool that represents words as real-valued vectors), and similarity is then computed between this word vector and the word vector corresponding to each entry in the pre-stored recommendation corpus, where the similarity between two vectors is calculated through the Euclidean distance between them. If entries whose similarity to the noun keyword exceeds the preset similarity threshold exist in the pre-stored recommendation corpus, the corresponding entries are taken as part of the search result; that is, all the entries whose similarity to the noun keyword exceeds the preset similarity threshold collectively constitute the search result.
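Since similarity is measured via Euclidean distance between word vectors, "similarity exceeds the threshold" corresponds to the distance falling below a distance threshold. A minimal sketch with made-up 2-dimensional vectors standing in for Word2Vec embeddings:

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(keyword_vec, corpus_vecs, dist_threshold=1.0):
    """Return every corpus entry whose vector lies within dist_threshold of the
    keyword vector (smaller Euclidean distance = higher similarity)."""
    return [name for name, vec in corpus_vecs.items()
            if euclidean(keyword_vec, vec) <= dist_threshold]

# invented embeddings for recommendation-corpus entries
corpus = {"instant noodles": [1.0, 0.0], "cup noodles": [0.9, 0.3], "shampoo": [-2.0, 4.0]}
print(retrieve([1.0, 0.1], corpus))  # ['instant noodles', 'cup noodles']
```

Real Word2Vec vectors have hundreds of dimensions and the threshold would be tuned on the actual embedding space; the 2-D vectors here only illustrate the distance test.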
This method uses speech-recognition technology: by performing lexical analysis on the speech-recognition result to obtain noun part-of-speech keywords, it achieves more accurate retrieval of search results from the recommendation corpus according to the noun part-of-speech keywords.
An embodiment of the present invention also provides a speech retrieval apparatus for executing any embodiment of the foregoing speech retrieval method. Specifically, please refer to Fig. 5, which is a schematic block diagram of the speech retrieval apparatus provided by an embodiment of the present invention. The speech retrieval apparatus 100 can be configured in a server.
As shown in Fig. 5, the speech retrieval apparatus 100 includes a model training unit 110, a speech recognition unit 120, a word segmentation unit 130, a part-of-speech analysis unit 140, and a retrieval unit 150.
The model training unit 110 is configured to receive a training-set corpus, input the training-set corpus into an initial N-gram model for training, and obtain an N-gram model, wherein the N-gram model is an N-element language model.
In this embodiment, the technical solution is described from the perspective of the server. The server receives the training-set corpus and trains on it to obtain the N-gram model, which is then used to recognize the speech to be recognized uploaded to the server by the front-end voice-collection terminals deployed in the smart supermarket.
In this embodiment, the training-set corpus is a mixture of a general corpus and a consumer-goods corpus. The consumer-goods corpus is a corpus containing a large number of commodity names (such as brand names and product names); the general corpus differs from the consumer-goods corpus in that its vocabulary is not biased toward any specific field. By inputting the training-set corpus into the initial N-gram model for training, the N-gram model for speech recognition can be obtained.
In an embodiment, as shown in Fig. 6, the model training unit 110 includes:
a first training unit 111, configured to obtain a consumer-goods corpus, input the consumer-goods corpus into a first initial N-gram model for training, and obtain a first N-gram model;
a second training unit 112, configured to obtain a general corpus, input the general corpus into a second initial N-gram model for training, and obtain a second N-gram model;
a model fusion unit 113, configured to fuse the first N-gram model and the second N-gram model according to a set model-fusion ratio, and obtain the N-gram model.
In this embodiment, the consumer-goods corpus is a corpus containing a large number of commodity names. The general corpus differs from the consumer-goods corpus in that its vocabulary is not biased toward any specific field.
The N-gram model is a kind of language model (Language Model, LM). A language model is a probability-based discriminative model: its input is a sentence (an ordered sequence of words), and its output is the probability of that sentence, i.e. the joint probability of the words.
Suppose the sentence T is composed of a word sequence w1, w2, w3, ..., wn. The N-Gram language model can be formulated as follows:
P(T) = P(w1) * p(w2) * p(w3) * ... * p(wn)
     = p(w1) * p(w2|w1) * p(w3|w1w2) * ... * p(wn|w1w2...wn-1)
The commonly used N-Gram models are Bi-Gram and Tri-Gram, formulated respectively as follows:
Bi-Gram:
P(T) = p(w1|begin) * p(w2|w1) * p(w3|w2) * ... * p(wn|wn-1)
Tri-Gram:
P(T) = p(w1|begin1, begin2) * p(w2|w1, begin1) * p(w3|w2, w1) * ... * p(wn|wn-1, wn-2)
It can be seen that the conditional probability of each word in the sentence T can be obtained by statistical counting in the corpus. The n-gram model is then:
p(wi|wi-n+1, ..., wi-1) = C(wi-n+1, ..., wi) / C(wi-n+1, ..., wi-1)
where C(wi-n+1, ..., wi) denotes the number of occurrences of the word string wi-n+1, ..., wi in the corpus.
According to the set model-fusion ratio, for example a consumer-goods to general-corpus ratio of 2:8, the fusion ratio of the first N-gram model to the second N-gram model is likewise 2:8; the first N-gram model and the second N-gram model are fused accordingly, and the N-gram model for speech recognition is finally obtained. Because the ratio of the consumer-goods corpus to the general corpus is set in advance, the fused N-gram model effectively improves the accuracy of speech recognition in the smart-supermarket scenario.
In an embodiment, as shown in Fig. 7, the first training unit 111 includes:
a word segmentation unit 1111, configured to segment the consumer-goods corpus with a probability-statistics word-segmentation model, and obtain a first word-segmentation result corresponding to the consumer-goods corpus;
a segmentation training unit 1112, configured to input the first word-segmentation result into the first initial N-gram model for training, and obtain the first N-gram model.
In this embodiment, each sentence in the consumer-goods corpus is segmented by the probability-statistics word-segmentation model as follows:
For example, let C = C1C2...Cm be the Chinese character string to be segmented, and let W = W1W2...Wn be a segmentation result, with Wa, Wb, ..., Wk being all possible segmentation schemes of C. The probability-statistics word-segmentation model then seeks the word string W that satisfies P(W|C) = MAX(P(Wa|C), P(Wb|C), ..., P(Wk|C)); in other words, the word string W obtained by the model is the one with the maximum estimated probability. Specifically:
For a substring S to be segmented, all candidate words w1, w2, ..., wi, ..., wn are taken out in order from left to right; the probability value P(wi) of each candidate word is looked up in the dictionary, and all left-adjacent words of each candidate word are recorded; the cumulative probability of each candidate word is computed, and at the same time the best left-adjacent word of each candidate word is obtained by comparison; if the current word wn is the tail word of the string S and its cumulative probability P(wn) is the maximum, then wn is the terminal word of S; starting from wn, the best left-adjacent word of each word is output in turn from right to left, which yields the word-segmentation result of S. The first word-segmentation result corresponding to the consumer-goods corpus is thus obtained; inputting the first word-segmentation result into the first initial N-gram model for training yields the first N-gram model, which has higher sentence-recognition accuracy in the smart-supermarket scenario.
Likewise, the general corpus is segmented with the probability-statistics word-segmentation model to obtain a second word-segmentation result corresponding to the general corpus; the second word-segmentation result is input into the second initial N-gram model for training to obtain the second N-gram model, which has higher sentence-recognition accuracy in everyday common scenarios (i.e. a higher recognition rate on sentences not biased toward any particular living scenario).
The speech recognition unit 120 is configured to receive speech to be recognized, recognize the speech to be recognized through the N-gram model, and obtain a recognition result.
When the speech to be recognized is recognized through the N-gram model, what is recognized is a whole sentence, for example "I want to buy XX-brand instant noodles". The N-gram model can effectively recognize the speech to be recognized, and the sentence with the maximum recognition probability is obtained as the recognition result.
Recognition-result segmentation unit 130, configured to segment the recognition result to obtain a sentence segmentation result corresponding to the recognition result.
In one embodiment, the recognition-result segmentation unit 130 is further configured to:
segment the recognition result with the probability-statistics segmentation model to obtain the sentence segmentation result corresponding to the recognition result.
In this embodiment, the recognition-result segmentation unit 130 also segments the recognition result with the probability-statistics segmentation model; for the detailed segmentation process, refer to the segmentation unit 1111. After the recognition result is segmented, part-of-speech analysis can be further performed.
Part-of-speech analysis unit 140, configured to perform lexical analysis according to the sentence segmentation result to obtain the noun part-of-speech keywords corresponding to the sentence segmentation result.
In one embodiment, the part-of-speech analysis unit 140 is further configured to:
take the sentence segmentation result as the input of a pre-trained joint lexical-analysis model to obtain the noun part-of-speech keywords in the sentence segmentation result.
In this embodiment, the process of performing lexical analysis through the joint lexical-analysis model is as follows:
The input of the lexical-analysis task is a string (referred to as a "sentence" below), and the output is the word boundaries, parts of speech, and entity categories in the sentence. Sequence labeling is the classical modeling approach for lexical analysis. In building the joint lexical-analysis model (i.e., the LAC model), a network structure based on GRUs (gated recurrent units) learns the features, and the learned features are fed into a CRF (conditional random field) decoding layer to complete the sequence labeling. The CRF decoding layer essentially replaces the linear model of a traditional CRF with a nonlinear neural network and works with sentence-level likelihood, so it can better solve the label-bias problem.
The input of the joint lexical-analysis model is represented in one-hot form, each word being represented by an id; the one-hot sequence is converted through the vocabulary into a sequence of real-valued word vectors. This word-vector sequence serves as the input of a bidirectional GRU, which learns a feature representation of the input sequence and yields a new feature-representation sequence; two layers of bidirectional GRUs are stacked to increase learning capacity. The CRF takes the features learned by the GRU as input and the label sequence as the supervisory signal, and thereby labels the part of speech of each word in the sentence segmentation result. Since, in the smart-supermarket scenario, a noun part-of-speech keyword is more likely to be a brand name or a product name, the noun part-of-speech keywords corresponding to the sentence segmentation result are chosen as the selection result for the subsequent product retrieval.
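The tagging itself is produced by the GRU+CRF model described above; the final step, picking out the noun part-of-speech words, is a simple filter over the tagged sequence. The tag names below follow a common Chinese POS convention and are assumptions, not specified by the patent:

```python
def noun_keywords(tagged_words, noun_tags=("n", "nz", "nt")):
    """Select noun-POS words from a (word, tag) sequence such as the output
    of a lexical-analysis model. Assumed tag convention: n = common noun,
    nz = other proper noun (e.g. a brand), nt = organization name."""
    return [w for w, t in tagged_words if t in noun_tags]
```

For "I want to buy XX-brand instant noodles", the brand and product nouns survive the filter while pronouns and verbs are dropped.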
Retrieval unit 150, configured to search a pre-stored recommendation corpus for corpus entries whose similarity to the noun keywords exceeds a preset similarity threshold, so as to obtain a search result.
In this embodiment, once the noun part-of-speech keywords are obtained, each noun part-of-speech keyword is searched against the preset recommendation corpus to obtain the words closest to it, which serve as the search result. Specifically, the word vector corresponding to a noun keyword is obtained through a Word2Vec model (an efficient tool that represents words as real-valued vectors), and its similarity to the word vector corresponding to each corpus entry in the pre-stored recommendation corpus is calculated, where the similarity between two vectors is computed from the Euclidean distance between them. If the pre-stored recommendation corpus contains corpus entries whose similarity to the noun keyword exceeds the preset similarity threshold, each such corpus entry becomes part of the search result; that is, all corpus entries whose similarity to the noun keyword exceeds the preset similarity threshold jointly constitute the search result.
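A minimal sketch of this retrieval step, assuming word vectors are already available (e.g. from a trained Word2Vec model). The patent specifies only that similarity is computed from Euclidean distance; converting that distance to a 0–1 score as below is one conventional choice, not mandated by the text, and the corpus entries are made up for illustration:

```python
import math

def euclidean_similarity(v1, v2):
    """Similarity derived from Euclidean distance: identical vectors score
    1.0 and the score decays toward 0 as the distance grows."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))
    return 1.0 / (1.0 + d)

def search_corpus(keyword_vec, corpus_vecs, threshold=0.5):
    """Return every corpus entry whose vector similarity to the keyword
    exceeds the preset threshold; together they form the search result."""
    return [name for name, vec in corpus_vecs.items()
            if euclidean_similarity(keyword_vec, vec) > threshold]
```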
The device uses speech recognition technology: noun part-of-speech keywords are obtained by performing lexical analysis on the speech recognition result, so that search results are obtained more accurately from the recommendation corpus according to the noun part-of-speech keywords.
The above speech retrieval device may be implemented in the form of a computer program, which can run on a computer device as shown in Fig. 8.
Referring to Fig. 8, Fig. 8 is a schematic block diagram of a computer device provided by an embodiment of the present invention. The computer device 500 is a server, which may be an independent server or a server cluster composed of multiple servers.
Referring to Fig. 8, the computer device 500 includes a processor 502, a memory, and a network interface 505 connected through a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 can store an operating system 5031 and a computer program 5032. When the computer program 5032 is executed, it can cause the processor 502 to execute the speech retrieval method.
The processor 502 is configured to provide computing and control capabilities to support the operation of the entire computer device 500.
The internal memory 504 provides an environment for running the computer program 5032 stored in the non-volatile storage medium 503; when the computer program 5032 is executed by the processor 502, it can cause the processor 502 to execute the speech retrieval method.
The network interface 505 is used for network communication, such as providing transmission of data information. Those skilled in the art will understand that the structure shown in Fig. 8 is only a block diagram of the part of the structure related to the solution of the present invention and does not limit the computer device 500 to which the solution is applied; a specific computer device 500 may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
The processor 502 is configured to run the computer program 5032 stored in the memory to implement the following functions: receiving a training-set corpus, inputting the training-set corpus into an initial N-gram model for training, and obtaining an N-gram model, where the N-gram model is an N-element model; receiving speech to be recognized, recognizing the speech to be recognized through the N-gram model, and obtaining a recognition result; segmenting the recognition result to obtain a sentence segmentation result corresponding to the recognition result; performing lexical analysis according to the sentence segmentation result to obtain the noun part-of-speech keywords corresponding to the sentence segmentation result; and searching a pre-stored recommendation corpus for corpus entries whose similarity to the noun keywords exceeds a preset similarity threshold, so as to obtain a search result.
In one embodiment, when executing the step of receiving a training-set corpus, inputting the training-set corpus into an initial N-gram model for training, and obtaining an N-gram model, the processor 502 performs the following operations: obtaining a consumer-goods corpus, inputting the consumer-goods corpus into a first initial N-gram model for training, and obtaining a first N-gram model; obtaining a general corpus, inputting the general corpus into a second initial N-gram model for training, and obtaining a second N-gram model; and fusing the first N-gram model and the second N-gram model according to a set model-fusion ratio to obtain the N-gram model.
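Fusing two language models "according to a set model-fusion ratio" can be read as linear interpolation of their probabilities, a standard technique for combining a domain model with a general one. The ratio value and function name below are illustrative:

```python
def fuse_ngram_models(probs_a, probs_b, ratio=0.7):
    """Linear interpolation of two n-gram probability tables:
    P(w|h) = ratio * P_a(w|h) + (1 - ratio) * P_b(w|h).
    `ratio` plays the role of the set model-fusion ratio; 0.7 is a
    hypothetical choice, not a value given in the patent."""
    keys = set(probs_a) | set(probs_b)
    return {k: ratio * probs_a.get(k, 0.0) + (1 - ratio) * probs_b.get(k, 0.0)
            for k in keys}
```

A higher ratio biases the fused model toward the consumer-goods corpus (better in the smart-supermarket scenario); a lower ratio biases it toward the general corpus.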
In one embodiment, when executing the step of inputting the consumer-goods corpus into a first initial N-gram model for training and obtaining a first N-gram model, the processor 502 performs the following operations: segmenting the consumer-goods corpus with a probability-statistics segmentation model to obtain a first segmentation result corresponding to the consumer-goods corpus; and inputting the first segmentation result into the first initial N-gram model for training to obtain the first N-gram model.
In one embodiment, when executing the step of segmenting the recognition result to obtain a sentence segmentation result corresponding to the recognition result, the processor 502 performs the following operation: segmenting the recognition result with a probability-statistics segmentation model to obtain the sentence segmentation result corresponding to the recognition result.
In one embodiment, when executing the step of performing lexical analysis according to the sentence segmentation result to obtain the noun part-of-speech keywords corresponding to the sentence segmentation result, the processor 502 performs the following operation: taking the sentence segmentation result as the input of a pre-trained joint lexical-analysis model to obtain the noun part-of-speech keywords in the sentence segmentation result.
It will be understood by those skilled in the art that the embodiment of the computer device shown in Fig. 8 does not limit the specific composition of the computer device; in other embodiments, the computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components. For example, in some embodiments, the computer device may include only a memory and a processor; in such embodiments, the structures and functions of the memory and the processor are consistent with the embodiment shown in Fig. 8 and are not repeated here.
It should be appreciated that, in the embodiments of the present invention, the processor 502 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Another embodiment of the present invention provides a computer-readable storage medium, which may be a non-volatile computer-readable storage medium. The computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the following steps are performed: receiving a training-set corpus, inputting the training-set corpus into an initial N-gram model for training, and obtaining an N-gram model, where the N-gram model is an N-element model; receiving speech to be recognized, recognizing the speech to be recognized through the N-gram model, and obtaining a recognition result; segmenting the recognition result to obtain a sentence segmentation result corresponding to the recognition result; performing lexical analysis according to the sentence segmentation result to obtain the noun part-of-speech keywords corresponding to the sentence segmentation result; and searching a pre-stored recommendation corpus for corpus entries whose similarity to the noun keywords exceeds a preset similarity threshold, so as to obtain a search result.
In one embodiment, the receiving a training-set corpus, inputting the training-set corpus into an initial N-gram model for training, and obtaining an N-gram model comprises: obtaining a consumer-goods corpus, inputting the consumer-goods corpus into a first initial N-gram model for training, and obtaining a first N-gram model; obtaining a general corpus, inputting the general corpus into a second initial N-gram model for training, and obtaining a second N-gram model; and fusing the first N-gram model and the second N-gram model according to a set model-fusion ratio to obtain the N-gram model.
In one embodiment, the inputting the consumer-goods corpus into a first initial N-gram model for training and obtaining a first N-gram model comprises: segmenting the consumer-goods corpus with a probability-statistics segmentation model to obtain a first segmentation result corresponding to the consumer-goods corpus; and inputting the first segmentation result into the first initial N-gram model for training to obtain the first N-gram model.
In one embodiment, the segmenting the recognition result to obtain a sentence segmentation result corresponding to the recognition result comprises: segmenting the recognition result with a probability-statistics segmentation model to obtain the sentence segmentation result corresponding to the recognition result.
In one embodiment, the performing lexical analysis according to the sentence segmentation result to obtain the noun part-of-speech keywords corresponding to the sentence segmentation result comprises: taking the sentence segmentation result as the input of a pre-trained joint lexical-analysis model to obtain the noun part-of-speech keywords in the sentence segmentation result.
It is apparent to those skilled in the art that, for convenience and brevity of description, the specific working processes of the devices, apparatuses, and units described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here. Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two; to clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of the units is only a logical-function division, and there may be other division manners in actual implementation; units with the same function may be combined into one unit, for example multiple units or components may be combined or integrated into another system, and some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, and may also be electrical, mechanical, or other forms of connection.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc.
The above is only a specific embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any person familiar with the technical field can readily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present invention, and these modifications or replacements shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. A speech retrieval method, comprising:
receiving a training-set corpus, inputting the training-set corpus into an initial N-gram model for training, and obtaining an N-gram model, wherein the N-gram model is an N-element model;
receiving speech to be recognized, recognizing the speech to be recognized through the N-gram model, and obtaining a recognition result;
segmenting the recognition result to obtain a sentence segmentation result corresponding to the recognition result;
performing lexical analysis according to the sentence segmentation result to obtain noun part-of-speech keywords corresponding to the sentence segmentation result; and
searching a pre-stored recommendation corpus for corpus entries whose similarity to the noun keywords exceeds a preset similarity threshold, so as to obtain a search result, wherein the recommendation corpus includes multiple corpus entries, each of which includes one or more noun part-of-speech keywords.
2. The speech retrieval method according to claim 1, wherein the receiving a training-set corpus, inputting the training-set corpus into an initial N-gram model for training, and obtaining an N-gram model comprises:
obtaining a consumer-goods corpus, inputting the consumer-goods corpus into a first initial N-gram model for training, and obtaining a first N-gram model;
obtaining a general corpus, inputting the general corpus into a second initial N-gram model for training, and obtaining a second N-gram model; and
fusing the first N-gram model and the second N-gram model according to a set model-fusion ratio to obtain the N-gram model.
3. The speech retrieval method according to claim 2, wherein the inputting the consumer-goods corpus into a first initial N-gram model for training and obtaining a first N-gram model comprises:
segmenting the consumer-goods corpus with a probability-statistics segmentation model to obtain a first segmentation result corresponding to the consumer-goods corpus; and
inputting the first segmentation result into the first initial N-gram model for training to obtain the first N-gram model.
4. The speech retrieval method according to claim 1, wherein the segmenting the recognition result to obtain a sentence segmentation result corresponding to the recognition result comprises:
segmenting the recognition result with a probability-statistics segmentation model to obtain the sentence segmentation result corresponding to the recognition result.
5. The speech retrieval method according to claim 1, wherein the performing lexical analysis according to the sentence segmentation result to obtain noun part-of-speech keywords corresponding to the sentence segmentation result comprises:
taking the sentence segmentation result as the input of a pre-trained joint lexical-analysis model to obtain the noun part-of-speech keywords in the sentence segmentation result.
6. A speech retrieval device, comprising:
a model training unit, configured to receive a training-set corpus, input the training-set corpus into an initial N-gram model for training, and obtain an N-gram model, wherein the N-gram model is an N-element model;
a speech recognition unit, configured to receive speech to be recognized, recognize the speech to be recognized through the N-gram model, and obtain a recognition result;
a recognition-result segmentation unit, configured to segment the recognition result to obtain a sentence segmentation result corresponding to the recognition result;
a part-of-speech analysis unit, configured to perform lexical analysis according to the sentence segmentation result to obtain noun part-of-speech keywords corresponding to the sentence segmentation result; and
a retrieval unit, configured to search a pre-stored recommendation corpus for corpus entries whose similarity to the noun keywords exceeds a preset similarity threshold, so as to obtain a search result.
7. The speech retrieval device according to claim 6, wherein the model training unit comprises:
a first training unit, configured to obtain a consumer-goods corpus, input the consumer-goods corpus into a first initial N-gram model for training, and obtain a first N-gram model;
a second training unit, configured to obtain a general corpus, input the general corpus into a second initial N-gram model for training, and obtain a second N-gram model; and
a model fusion unit, configured to fuse the first N-gram model and the second N-gram model according to a set model-fusion ratio to obtain the N-gram model.
8. The speech retrieval device according to claim 7, wherein the first training unit comprises:
a segmentation unit, configured to segment the consumer-goods corpus with a probability-statistics segmentation model to obtain a first segmentation result corresponding to the consumer-goods corpus; and
a segmentation training unit, configured to input the first segmentation result into the first initial N-gram model for training to obtain the first N-gram model.
9. A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the speech retrieval method according to any one of claims 1 to 5.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and the computer program, when executed by a processor, causes the processor to execute the speech retrieval method according to any one of claims 1 to 5.
CN201910492599.8A 2019-06-06 2019-06-06 Speech retrieval method, apparatus, computer equipment and storage medium Pending CN110349568A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910492599.8A CN110349568A (en) 2019-06-06 2019-06-06 Speech retrieval method, apparatus, computer equipment and storage medium
PCT/CN2019/117872 WO2020244150A1 (en) 2019-06-06 2019-11-13 Speech retrieval method and apparatus, computer device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910492599.8A CN110349568A (en) 2019-06-06 2019-06-06 Speech retrieval method, apparatus, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110349568A true CN110349568A (en) 2019-10-18

Family

ID=68181598

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910492599.8A Pending CN110349568A (en) 2019-06-06 2019-06-06 Speech retrieval method, apparatus, computer equipment and storage medium

Country Status (2)

Country Link
CN (1) CN110349568A (en)
WO (1) WO2020244150A1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110825844A (en) * 2019-10-21 2020-02-21 拉扎斯网络科技(上海)有限公司 Voice retrieval method and device, readable storage medium and electronic equipment
CN111291195A (en) * 2020-01-21 2020-06-16 腾讯科技(深圳)有限公司 Data processing method, device, terminal and readable storage medium
CN111460257A (en) * 2020-03-27 2020-07-28 北京百度网讯科技有限公司 Thematic generation method and device, electronic equipment and storage medium
CN111783424A (en) * 2020-06-17 2020-10-16 泰康保险集团股份有限公司 Text clause dividing method and device
CN111862970A (en) * 2020-06-05 2020-10-30 珠海高凌信息科技股份有限公司 False propaganda treatment application method and device based on intelligent voice robot
WO2020244150A1 (en) * 2019-06-06 2020-12-10 平安科技(深圳)有限公司 Speech retrieval method and apparatus, computer device, and storage medium
CN112183114A (en) * 2020-08-10 2021-01-05 招联消费金融有限公司 Model training and semantic integrity recognition method and device
CN112381038A (en) * 2020-11-26 2021-02-19 中国船舶工业系统工程研究院 Image-based text recognition method, system and medium
CN112905869A (en) * 2021-03-26 2021-06-04 北京儒博科技有限公司 Adaptive training method and device for language model, storage medium and equipment
CN113256378A (en) * 2021-05-24 2021-08-13 北京小米移动软件有限公司 Method for determining shopping demand of user
CN113256379A (en) * 2021-05-24 2021-08-13 北京小米移动软件有限公司 Method for correlating shopping demands for commodities
CN114329225A (en) * 2022-01-24 2022-04-12 平安国际智慧城市科技股份有限公司 Search method, device, equipment and storage medium based on search statement

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115563394B (en) * 2022-11-24 2023-03-28 腾讯科技(深圳)有限公司 Search recall method, recall model training method, device and computer equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107154260A (en) * 2017-04-11 2017-09-12 北京智能管家科技有限公司 A kind of domain-adaptive audio recognition method and device
CN107204184A (en) * 2017-05-10 2017-09-26 平安科技(深圳)有限公司 Audio recognition method and system
CN108538286A (en) * 2017-03-02 2018-09-14 腾讯科技(深圳)有限公司 A kind of method and computer of speech recognition
CN108804414A (en) * 2018-05-04 2018-11-13 科沃斯商用机器人有限公司 Text modification method, device, smart machine and readable storage medium storing program for executing
CN109388743A (en) * 2017-08-11 2019-02-26 阿里巴巴集团控股有限公司 The determination method and apparatus of language model
CN109817217A (en) * 2019-01-17 2019-05-28 深圳壹账通智能科技有限公司 Self-service based on speech recognition peddles method, apparatus, equipment and medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105139239A (en) * 2014-05-27 2015-12-09 无锡韩光电器有限公司 Supermarket shopping system with voice query function
JP6353408B2 (en) * 2015-06-11 2018-07-04 日本電信電話株式会社 Language model adaptation device, language model adaptation method, and program
CN106875941B (en) * 2017-04-01 2020-02-18 彭楚奥 Voice semantic recognition method of service robot
CN107247759A (en) * 2017-05-31 2017-10-13 深圳正品创想科技有限公司 A kind of Method of Commodity Recommendation and device
CN109344830A (en) * 2018-08-17 2019-02-15 平安科技(深圳)有限公司 Sentence output, model training method, device, computer equipment and storage medium
CN109840323A (en) * 2018-12-14 2019-06-04 深圳壹账通智能科技有限公司 The voice recognition processing method and server of insurance products
CN110349568A (en) * 2019-06-06 2019-10-18 平安科技(深圳)有限公司 Speech retrieval method, apparatus, computer equipment and storage medium


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020244150A1 (en) * 2019-06-06 2020-12-10 平安科技(深圳)有限公司 Speech retrieval method and apparatus, computer device, and storage medium
CN110825844A (en) * 2019-10-21 2020-02-21 拉扎斯网络科技(上海)有限公司 Voice retrieval method and device, readable storage medium and electronic equipment
CN111291195A (en) * 2020-01-21 2020-06-16 腾讯科技(深圳)有限公司 Data processing method, device, terminal and readable storage medium
CN111460257A (en) * 2020-03-27 2020-07-28 北京百度网讯科技有限公司 Thematic generation method and device, electronic equipment and storage medium
CN111460257B (en) * 2020-03-27 2023-10-31 北京百度网讯科技有限公司 Thematic generation method, apparatus, electronic device and storage medium
CN111862970A (en) * 2020-06-05 2020-10-30 珠海高凌信息科技股份有限公司 False propaganda treatment application method and device based on intelligent voice robot
CN111783424A (en) * 2020-06-17 2020-10-16 泰康保险集团股份有限公司 Text clause dividing method and device
CN111783424B (en) * 2020-06-17 2024-02-13 泰康保险集团股份有限公司 Text sentence dividing method and device
CN112183114A (en) * 2020-08-10 2021-01-05 招联消费金融有限公司 Model training and semantic integrity recognition method and device
CN112381038A (en) * 2020-11-26 2021-02-19 中国船舶工业系统工程研究院 Image-based text recognition method, system and medium
CN112381038B (en) * 2020-11-26 2024-04-19 中国船舶工业系统工程研究院 Text recognition method, system and medium based on image
CN112905869A (en) * 2021-03-26 2021-06-04 北京儒博科技有限公司 Adaptive training method and device for language model, storage medium and equipment
CN113256378A (en) * 2021-05-24 2021-08-13 北京小米移动软件有限公司 Method for determining shopping demand of user
CN113256379A (en) * 2021-05-24 2021-08-13 北京小米移动软件有限公司 Method for correlating shopping demands for commodities
CN114329225A (en) * 2022-01-24 2022-04-12 平安国际智慧城市科技股份有限公司 Search method, device, equipment and storage medium based on search statement
CN114329225B (en) * 2022-01-24 2024-04-23 平安国际智慧城市科技股份有限公司 Search method, device, equipment and storage medium based on search statement

Also Published As

Publication number Publication date
WO2020244150A1 (en) 2020-12-10

Similar Documents

Publication Publication Date Title
CN110349568A (en) Speech retrieval method, apparatus, computer equipment and storage medium
CN110795543B (en) Unstructured data extraction method, device and storage medium based on deep learning
CN110516247B (en) Named entity recognition method based on neural network and computer storage medium
CN109918485B (en) Method and device for identifying dishes by voice, storage medium and electronic device
CN110347823A (en) Voice-based user classification method, device, computer equipment and storage medium
CN112084381A (en) Event extraction method, system, storage medium and equipment
CN110517693A (en) Audio recognition method, device, electronic equipment and computer readable storage medium
CN110096572B (en) Sample generation method, device and computer readable medium
CN111966810B (en) Question-answer pair ordering method for question-answer system
CN106649605B (en) Method and device for triggering promotion keywords
CN108108347B (en) Dialogue mode analysis system and method
CN108038099B (en) Low-frequency keyword identification method based on word clustering
CN112732870B (en) Word vector based search method, device, equipment and storage medium
CN109800427B (en) Word segmentation method, device, terminal and computer readable storage medium
CN113704507B (en) Data processing method, computer device and readable storage medium
CN110287307A (en) Search result ranking method, device and server
CN113821605A (en) Event extraction method
CN108491381A (en) Syntactic analysis method for Chinese bipartite structures
CN107122378B (en) Object processing method and device and mobile terminal
JP6555810B2 (en) Similarity calculation device, similarity search device, and similarity calculation program
CN110795562A (en) Map optimization method, device, terminal and storage medium
CN116090450A (en) Text processing method and computing device
CN116401344A (en) Method and device for searching table according to question
CN115879460A (en) Method, device, equipment and medium for identifying new label entity facing text content
CN113468311B (en) Knowledge graph-based complex question and answer method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination