CN108877773A - Speech recognition method and electronic device - Google Patents

Speech recognition method and electronic device

Info

Publication number
CN108877773A
CN108877773A (application CN201810602734.5A)
Authority
CN
China
Prior art keywords
sound
user
electronic equipment
target
factor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810602734.5A
Other languages
Chinese (zh)
Other versions
CN108877773B
Inventor
杨昊民
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd
Priority to CN201810602734.5A
Publication of CN108877773A
Application granted
Publication of CN108877773B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 15/00: Speech recognition
    • G10L 15/02: Feature extraction for speech recognition; Selection of recognition unit
    • G10L 15/06: Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
    • G10L 15/065: Adaptation
    • G10L 15/26: Speech to text systems

Abstract

The present invention relates to the technical field of electronic devices and discloses a speech recognition method and an electronic device. The method includes: the electronic device extracts a sound factor from a voice that the user has input in advance and, according to a preset voice development pattern model, generates a sound variation curve of the user based on that sound factor; the electronic device can then recognize the user's current speech according to the sound variation curve. By implementing the embodiments of the present invention, the sound variation curve of the user can be generated from the sound factor in a previously obtained voice of the device user in combination with the voice development pattern model, so that the electronic device accurately recognizes the user's current voice according to the sound variation curve, thereby improving the accuracy of speech recognition by the electronic device.

Description

Speech recognition method and electronic device
Technical field
The present invention relates to the technical field of electronic devices, and in particular to a speech recognition method and an electronic device.
Background technique
At present, more and more electronic devices on the market are equipped with a speech recognition function, and with the development of artificial intelligence, many electronic devices provide a voice-controlled wake-up function: a user can wake up the device by inputting preset acoustic information. However, electronic devices are usually oriented toward adults when the sound model library is configured. An adult's voice is relatively stable and highly distinguishable and recognizable, so the acoustic information of an adult can be matched against the sound model library quickly. For students who are still developing, however, the voice changes during puberty, which makes it relatively difficult to recognize their voice with an adult-oriented sound model library, and recognition efficiency for adolescent students is low.
Summary of the invention
The embodiments of the present invention disclose a speech recognition method and an electronic device, which can improve the accuracy of speech recognition according to a sound variation curve generated on the basis of a voice development pattern model.
A first aspect of the embodiments of the present invention discloses a speech recognition method, the method including:
extracting a target sound factor from a target voice input in advance by a user;
determining a sound variation curve of the user based on a preset voice development pattern model and taking the target sound factor as a basis; and
recognizing the current speech of the user according to the sound variation curve.
As an optional implementation, in the first aspect of the embodiments of the present invention, recognizing the current speech of the user according to the sound variation curve includes:
detecting the current speech input by the user;
obtaining, according to the sound variation curve, the sound variation stage in which the user's voice is on the current date and the current sound factor of the user's voice within that sound variation stage; and
recognizing the current speech by taking the current sound factor as a basis.
As an optional implementation, in the first aspect of the embodiments of the present invention, before determining the sound variation curve of the user based on the preset voice development pattern model and taking the target sound factor as a basis, the method further includes:
collecting voice information of a large number of users, the voice information including at least the sound factor corresponding to each age bracket of each user; and
computing, by a big data computing method, all voice information of users of the same gender, so as to generate the voice development pattern model corresponding to that gender.
Determining the sound variation curve of the user based on the preset voice development pattern model and taking the target sound factor as a basis then includes:
determining the sound variation curve of the user based on the preset voice development pattern model corresponding to the gender of the user, taking the target sound factor as a basis.
As an optional implementation, in the first aspect of the embodiments of the present invention, extracting the target sound factor from the target voice input in advance by the user includes:
identifying the voiceprint of the target voice input in advance by the user;
extracting several voiceprint nodes from the voiceprint; and
calculating and generating, on the basis of the several voiceprint nodes, the sound factor contained in the target voice.
As an optional implementation, in the first aspect of the embodiments of the present invention, after recognizing the current speech of the user according to the sound variation curve, the method further includes:
identifying, by semantic analysis, the target instruction contained in the current speech, and detecting whether the electronic device is in a screen-off state;
if the electronic device is in the screen-off state, controlling the electronic device to perform a wake-up operation and to perform the operation corresponding to the target instruction; and
if the electronic device is not in the screen-off state, controlling the electronic device to perform the operation corresponding to the target instruction.
A second aspect of the embodiments of the present invention discloses an electronic device, including:
an extraction unit, configured to extract a target sound factor from a target voice input in advance by a user;
a determination unit, configured to determine a sound variation curve of the user based on a preset voice development pattern model, taking the target sound factor as a basis; and
a recognition unit, configured to recognize the current speech of the user according to the sound variation curve.
As an optional implementation, in the second aspect of the embodiments of the present invention, the recognition unit includes:
a detection subunit, configured to detect the current speech input by the user;
an acquisition subunit, configured to obtain, according to the sound variation curve, the sound variation stage in which the user's voice is on the current date and the current sound factor of the user's voice within that sound variation stage; and
a first recognition subunit, configured to recognize the current speech by taking the current sound factor as a basis.
As an optional implementation, in the second aspect of the embodiments of the present invention, the electronic device further includes:
a collection unit, configured to collect voice information of a large number of users before the determination unit determines the sound variation curve of the user based on the preset voice development pattern model and taking the target sound factor as a basis, the voice information including at least the sound factor corresponding to each age bracket of each user; and
a generation unit, configured to compute, by a big data computing method, all voice information of users of the same gender, so as to generate the voice development pattern model corresponding to that gender;
the determination unit being specifically configured to determine the sound variation curve of the user based on the preset voice development pattern model corresponding to the gender of the user, taking the target sound factor as a basis.
As an optional implementation, in the second aspect of the embodiments of the present invention, the extraction unit includes:
a second recognition subunit, configured to identify the voiceprint of the target voice input in advance by the user;
an extraction subunit, configured to extract several voiceprint nodes from the voiceprint; and
a calculation subunit, configured to calculate and generate, on the basis of the several voiceprint nodes, the sound factor contained in the target voice.
As an optional implementation, in the second aspect of the embodiments of the present invention, the electronic device further includes:
a detection unit, configured to identify, by semantic analysis, the target instruction contained in the current speech after the recognition unit recognizes the current speech of the user according to the sound variation curve, and to detect whether the electronic device is in a screen-off state;
a first control unit, configured to control the electronic device to perform a wake-up operation and to perform the operation corresponding to the target instruction when the detection result of the detection unit is yes; and
a second control unit, configured to control the electronic device to perform the operation corresponding to the target instruction when the detection result of the detection unit is no.
A third aspect of the embodiments of the present invention discloses another electronic device, including:
a memory storing executable program code; and
a processor coupled to the memory;
the processor calling the executable program code stored in the memory to perform some or all of the steps of any method of the first aspect.
A fourth aspect of the embodiments of the present invention discloses a computer-readable storage medium storing program code, the program code including instructions for performing some or all of the steps of any method of the first aspect.
A fifth aspect of the embodiments of the present invention discloses a computer program product which, when run on a computer, causes the computer to perform some or all of the steps of any method of the first aspect.
A sixth aspect of the embodiments of the present invention discloses an application distribution platform for distributing a computer program product, wherein, when the computer program product is run on a computer, the computer is caused to perform some or all of the steps of any method of the first aspect.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects:
In the embodiments of the present invention, the electronic device extracts a sound factor from a voice input in advance by the user and, according to a preset voice development pattern model, generates a sound variation curve of the user based on that sound factor; the electronic device can then recognize the user's current speech according to the sound variation curve. It can be seen that, by implementing the embodiments of the present invention, the sound variation curve of the user can be generated from the sound factor in a previously obtained voice of the device user in combination with the voice development pattern model, so that the electronic device accurately recognizes the user's current voice according to the sound variation curve, thereby improving the accuracy of speech recognition by the electronic device.
Brief description of the drawings
In order to describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flowchart of a speech recognition method disclosed in an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another speech recognition method disclosed in an embodiment of the present invention;
Fig. 3 is a schematic flowchart of another speech recognition method disclosed in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present invention.
Specific embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
It should be noted that the terms "comprise" and "have" and any variations thereof in the embodiments of the present invention and the drawings are intended to cover a non-exclusive inclusion. For example, a process, method, system, product, or device that contains a series of steps or units is not limited to the listed steps or units, but optionally further includes steps or units that are not listed, or optionally further includes other steps or units inherent to the process, method, product, or device.
The embodiments of the present invention disclose a speech recognition method and an electronic device, which can improve the accuracy of speech recognition according to a sound variation curve generated on the basis of a voice development pattern model. Detailed descriptions are given below.
Embodiment one
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a speech recognition method disclosed in an embodiment of the present invention. As shown in Fig. 1, the speech recognition method may include the following steps:
101. The electronic device extracts a target sound factor from a target voice input in advance by the user.
In this embodiment of the present invention, the electronic device may be a home tutoring machine, a learning machine, a learning tablet, or the like, which is not limited in this embodiment. The user may be a user of the electronic device of any age bracket and any age. The target voice may be the voice that the user inputs when using the electronic device for the first time, so that the electronic device generates the sound variation curve from this target voice. The sound factor may be a voiceprint node with unique features in the user's voiceprint, and the number of sound factors matched with the user is not limited.
As an optional implementation, before the electronic device performs step 101, the following steps may also be performed (a sketch follows this passage):
the electronic device detects whether the current user has a registered account;
if not, the electronic device displays a user information collection page and outputs a voice collection prompt; and when the electronic device detects the target voice input by the user, the electronic device associates the target voice with the user information and saves it.
By implementing this implementation, the target voice of the user can be obtained when the user first uses the electronic device, so that the user can directly use the speech recognition function, improving the user's experience with the electronic device.
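A minimal sketch of this first-use flow. The `UserProfile`, `UserStore`, and callback names are assumptions for illustration only and do not come from the patent:

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    user_id: str
    gender: str
    birth_year: int
    target_voice: bytes | None = None  # raw audio captured at first use

class UserStore:
    """In-memory stand-in for the device's account storage (hypothetical)."""
    def __init__(self) -> None:
        self.profiles: dict[str, UserProfile] = {}

    def has_account(self, user_id: str) -> bool:
        return user_id in self.profiles

    def register(self, profile: UserProfile) -> None:
        self.profiles[profile.user_id] = profile

def first_use_setup(store: UserStore, user_id: str, collect_info, record_voice) -> UserProfile:
    """If the current user has no registered account, show the information
    collection page, prompt for a voice sample, then associate and save the
    target voice with the user information (the optional flow before step 101)."""
    if store.has_account(user_id):
        return store.profiles[user_id]
    info = collect_info()                                  # user information collection page
    audio = record_voice("Please read the prompt aloud")   # voice collection prompt
    profile = UserProfile(user_id, info["gender"], info["birth_year"], audio)
    store.register(profile)
    return profile
```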
102. The electronic device determines the sound variation curve of the user based on a preset voice development pattern model, taking the target sound factor as a basis.
In this embodiment of the present invention, the voice development pattern model may simulate the trend of a user's voice development obtained by analyzing the voice information of a large number of users, and the electronic device can calculate the user's sound variation curve from the target sound factor according to this model. Because the user's voice keeps changing during puberty but may no longer change after the development period, the sound variation curve may describe the trend of the voice during the user's development, or it may be the sound variation curve over the user's whole life, which is not limited in this embodiment of the present invention.
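As a rough illustration only (the patent does not specify the model's form), the sketch below treats the development pattern model as a per-gender table of age-indexed coefficients applied to the target sound factor; the names, coefficient values, and feature dimensions are all assumptions:

```python
import numpy as np

# Assumed form: for each gender, a coefficient per age describing how a baseline
# sound factor (e.g. a pitch-like feature vector) tends to evolve with age.
DEVELOPMENT_MODEL = {
    "male":   {age: 1.0 - 0.03 * max(0, age - 12) for age in range(6, 26)},
    "female": {age: 1.0 - 0.01 * max(0, age - 12) for age in range(6, 26)},
}

def sound_variation_curve(target_factor: np.ndarray, gender: str, current_age: int):
    """Predict the user's sound factor at each future age from the factor
    extracted from the pre-input target voice."""
    model = DEVELOPMENT_MODEL[gender]
    base = model[current_age]
    return {age: target_factor * (coeff / base)
            for age, coeff in model.items() if age >= current_age}

# Example: a 12-year-old male user with an assumed 2-dimensional sound factor.
curve = sound_variation_curve(np.array([210.0, 0.8]), gender="male", current_age=12)
```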
As an optional implementation, after the electronic device performs step 102, the following steps may also be performed (a sketch follows this passage):
the electronic device obtains the current age of the user at preset time intervals;
the electronic device updates the user's sound variation curve according to the current age and deletes information that is unrelated to the sound variation curve; and
the electronic device stores the updated sound variation curve of the user.
By implementing this implementation, the user's sound variation curve can be updated periodically, making the speech recognition that the electronic device performs for the user more accurate.
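A small sketch of the periodic refresh, reusing the hypothetical `sound_variation_curve` helper and `UserProfile` fields from the sketches above; the interval and storage callbacks are illustrative assumptions:

```python
import datetime

def refresh_curve(profile, store_curve, extract_factor, interval_days=180, last_update=None):
    """Every preset interval, recompute the curve from the user's current age,
    overwrite stale entries, and persist the updated curve."""
    today = datetime.date.today()
    if last_update and (today - last_update).days < interval_days:
        return None  # not due for an update yet
    age = today.year - profile.birth_year
    factor = extract_factor(profile.target_voice)
    curve = sound_variation_curve(factor, profile.gender, age)
    store_curve(profile.user_id, curve)  # older, unrelated entries are replaced
    return curve
```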
103. The electronic device recognizes the current speech of the user according to the sound variation curve.
In this embodiment of the present invention, the electronic device recognizes its user by identifying, in the environment around the electronic device, the voice that matches the sound variation curve. The sound variation curve may be calculated and generated from several sound factors produced by the voice development pattern model.
In the method depicted in Fig. 1, the sound variation curve of the user can be generated from the sound factor in the previously obtained voice of the device user in combination with the voice development pattern model, so that the electronic device accurately recognizes the user's current voice according to the sound variation curve, thereby improving the accuracy of speech recognition by the electronic device. The voice development pattern can also be pre-stored in the memory of the electronic device, which avoids the situation in which speech recognition cannot be performed because the sound variation curve is lost. In addition, a sound variation curve exclusive to the user of the electronic device can be generated, making the functions of the electronic device more user-friendly.
Embodiment two
Referring to Fig. 2, Fig. 2 is a schematic flowchart of another speech recognition method disclosed in an embodiment of the present invention. As shown in Fig. 2, the speech recognition method may include the following steps:
201. The electronic device collects voice information of a large number of users, the voice information including at least the sound factor corresponding to each age bracket of each user.
In this embodiment of the present invention, the voice information of a large number of users may be the voice information of the users of all electronic devices collected by the electronic device, or it may be obtained from third-party voice analysis software, which is not limited in this embodiment of the present invention.
In this embodiment of the present invention, a user's age brackets may be divided by a preset time interval, or they may be divided by the user; the preset time interval may be one month, one year, two years, five years, or the like, which is not limited in this embodiment of the present invention.
As an optional implementation, the way in which the electronic device collects the voice information of a large number of users may include the following steps (see the sketch after this passage):
the electronic device obtains the sound factor corresponding to each age bracket of the user of each electronic device, the user's age brackets being divided according to a preset rule;
the electronic device associates and matches all sound factors with the corresponding user and the user's age bracket, and integrates all the age brackets corresponding to each user and all the sound factors corresponding to each age bracket, so as to generate a voice information packet corresponding to the user; and
the electronic device sends the voice information packet to a server.
By implementing this implementation, detailed voice information of all electronic device users can be obtained, and the voice information of each user is integrated and stored by age bracket, so that the electronic device can call up the voice information of a large number of users at any time, and the accuracy of the generated voice development pattern model is improved.
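A minimal sketch of such a per-user voice information packet grouped by age bracket; the bracket width, field names, and the upload call are assumptions rather than the patent's actual format:

```python
import json
from collections import defaultdict

AGE_BRACKET_YEARS = 2  # assumed preset rule: two-year brackets

def age_bracket(age: int) -> str:
    lo = (age // AGE_BRACKET_YEARS) * AGE_BRACKET_YEARS
    return f"{lo}-{lo + AGE_BRACKET_YEARS - 1}"

def build_voice_info_packet(user_id: str, gender: str, samples):
    """samples: iterable of (age, sound_factor) pairs collected over time.
    Groups every sound factor under the user's corresponding age bracket."""
    packet = {"user_id": user_id, "gender": gender,
              "factors_by_bracket": defaultdict(list)}
    for age, factor in samples:
        packet["factors_by_bracket"][age_bracket(age)].append(list(factor))
    packet["factors_by_bracket"] = dict(packet["factors_by_bracket"])
    return packet

def send_to_server(packet, post=print):
    # stand-in for the actual upload; here the packet is only serialized
    post(json.dumps(packet))

send_to_server(build_voice_info_packet("u001", "male",
                                       [(12, (220.0, 0.8)), (13, (200.0, 0.7))]))
```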
202. The electronic device computes, by a big data computing method, all voice information of users of the same gender, so as to generate the voice development pattern model corresponding to that gender.
In this embodiment of the present invention, because the voices of males and females differ greatly in how they change during puberty, the acoustic information of males and the acoustic information of females need to be computed separately, so as to guarantee the accuracy of the generated male voice development pattern model and female voice development pattern model.
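An illustrative sketch of the gender-split aggregation, under the assumption that the development pattern model is simply the per-age-bracket average of the collected sound factors for each gender; the patent does not fix the actual computation:

```python
from collections import defaultdict
import numpy as np

def fit_development_models(packets):
    """packets: voice information packets as built in the previous sketch.
    Returns {gender: {age_bracket: mean sound factor}} computed separately
    for male and female users."""
    grouped = defaultdict(lambda: defaultdict(list))
    for p in packets:
        for bracket, factors in p["factors_by_bracket"].items():
            grouped[p["gender"]][bracket].extend(factors)
    return {gender: {bracket: np.mean(np.array(fs), axis=0)
                     for bracket, fs in brackets.items()}
            for gender, brackets in grouped.items()}
```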
As an optional implementation, after the electronic device performs step 202, the following steps may also be performed (a sketch follows this passage):
the electronic device analyzes the voice development pattern model and divides it into several sound variation stages according to how the model changes;
the electronic device obtains the average age range of the users corresponding to each sound variation stage; and
the electronic device associates the average age range of the users corresponding to each sound variation stage and saves it to a service device that has established a connection with the electronic device in advance.
By implementing this implementation, the sound variation patterns of a large number of users can be analyzed comprehensively to obtain sound variation stages divided according to those patterns, so that the division of the sound variation stages is more reasonable.
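A rough sketch of dividing the model into sound variation stages, assuming a stage boundary is placed wherever the mean factor (from the hypothetical `fit_development_models` above) changes by more than a chosen threshold between adjacent age brackets; the threshold is an assumption:

```python
def split_into_stages(model_for_gender, threshold=0.15):
    """model_for_gender: {age_bracket: mean_factor} with brackets like '12-13'.
    Returns a list of (start_bracket, end_bracket) sound variation stages."""
    brackets = sorted(model_for_gender, key=lambda b: int(b.split("-")[0]))
    stages, start = [], brackets[0]
    for prev, cur in zip(brackets, brackets[1:]):
        prev_f, cur_f = model_for_gender[prev], model_for_gender[cur]
        rel_change = abs(cur_f[0] - prev_f[0]) / max(abs(prev_f[0]), 1e-9)
        if rel_change > threshold:      # the voice changes sharply: start a new stage
            stages.append((start, prev))
            start = cur
    stages.append((start, brackets[-1]))
    return stages
```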
In this embodiment of the present invention, by implementing steps 201 to 202 above, the voice information of a massive number of users can be divided into two groups by gender, and two voice development pattern models, one per gender, are then computed by big data, so that a more accurate voice development pattern model is obtained for each gender.
Optionally, steps 201 to 202 may be performed before step 203, or they may be performed after step 203 and before step 204, which is not limited in this embodiment of the present invention.
203. The electronic device extracts a target sound factor from a target voice input in advance by the user.
204. The electronic device determines the sound variation curve of the user based on the preset voice development pattern model corresponding to the gender of the user, taking the target sound factor as a basis.
In this embodiment of the present invention, the user's sound variation curve can be determined according to the voice development pattern model corresponding to the user's gender; because the voice changes of males and females differ greatly, distinguishing the male and female voice development pattern models makes the determined sound variation curve of the user more accurate.
205. The electronic device detects the current speech input by the user.
In this embodiment of the present invention, the current speech input by the user may be a voice spoken casually by the user, or it may be a voice carrying an instruction used to trigger the electronic device to start and/or to open a target application on the electronic device.
206. According to the sound variation curve, the electronic device obtains the sound variation stage in which the user's voice is on the current date and the current sound factor of the user's voice within that sound variation stage.
207. The electronic device recognizes the current speech by taking the current sound factor as a basis.
In this embodiment of the present invention, by implementing steps 205 to 207, the current sound factor of the user's sound variation curve can be obtained, and by comparing the current sound factor with the sound factor of the current speech, the user matching the sound variation curve is identified, improving the accuracy of speech recognition, as sketched below.
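A minimal sketch of looking up the current sound factor from the curve by the current date and comparing it with the factor extracted from the incoming speech; the tolerance and helper names are assumptions, and the curve is the one produced by the earlier `sound_variation_curve` sketch:

```python
import datetime
import numpy as np

def current_sound_factor(curve, birth_year, today=None):
    """curve: {age: predicted sound factor}. Picks the entry for the user's
    age on the current date (nearest age if the exact one is missing)."""
    today = today or datetime.date.today()
    age = today.year - birth_year
    return curve[min(curve, key=lambda a: abs(a - age))]

def matches_user(curve, birth_year, speech_factor, tolerance=0.1):
    """Compare the sound factor of the current speech against the current
    sound factor read off the curve."""
    expected = np.asarray(current_sound_factor(curve, birth_year))
    observed = np.asarray(speech_factor)
    rel_error = np.linalg.norm(observed - expected) / max(np.linalg.norm(expected), 1e-9)
    return rel_error <= tolerance
```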
As an optional implementation, the way in which the electronic device recognizes the current speech by taking the current sound factor as a basis may include the following steps (see the sketch after this passage):
the electronic device obtains the current speech in the environment around the electronic device;
the electronic device identifies several target sound factors contained in the current speech;
the electronic device judges whether, among the several target sound factors, there is a target sound factor that matches any one of the current sound factors obtained in advance by the electronic device; and
if there is, the electronic device determines that the user corresponding to the current speech is the user pre-stored by the electronic device.
By implementing this implementation, several sound factors can be obtained; as soon as any one of them matches the current sound factor of the electronic device's user, the user of the electronic device can be identified, which reduces the error of the electronic device in recognizing the user by voice and guarantees the accuracy of the electronic device's speech recognition.
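A sketch of that any-match rule, reusing the hypothetical `matches_user` check above on every factor extracted from the ambient speech:

```python
def identify_speaker(curve, birth_year, speech_factors, tolerance=0.1):
    """speech_factors: several target sound factors extracted from the current
    speech in the device's environment. Returns True if any one of them matches
    the user's current sound factor on the curve."""
    return any(matches_user(curve, birth_year, f, tolerance) for f in speech_factors)
```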
In the method depicted in Fig. 2, the sound variation curve of the user can be generated from the sound factor in the previously obtained voice of the device user in combination with the voice development pattern model, so that the electronic device accurately recognizes the user's current voice according to the sound variation curve, thereby improving the accuracy of speech recognition by the electronic device. Massive amounts of user voice information can also be analyzed by big data technology, so that the voice development pattern model can be obtained quickly, which guarantees the operating efficiency of the electronic device. In addition, the electronic device can uniquely identify its user in a noisy environment according to the user's exclusive sound variation model, so that the user can successfully use the speech recognition function of the electronic device in various environments.
Embodiment three
Referring to Fig. 3, Fig. 3 is a schematic flowchart of another speech recognition method disclosed in an embodiment of the present invention. As shown in Fig. 3, the speech recognition method may include the following steps:
Steps 301 to 302 are the same as steps 201 to 202 and are not repeated here.
303. The electronic device identifies the voiceprint of the target voice input in advance by the user.
In this embodiment of the present invention, a voiceprint is a spectrum of sound waves that carries speech information; a voiceprint is not only specific to a person but also relatively stable. After adulthood, a person's voice can remain relatively stable and unchanged for a long time, but during puberty a person's voice is generally in a state of change.
304. The electronic device extracts several voiceprint nodes from the voiceprint.
In this embodiment of the present invention, a voiceprint node may be a node that clearly shows the features of the user's voiceprint, and the number of voiceprint nodes contained in the user's voiceprint is not limited.
305. The electronic device calculates and generates, on the basis of the several voiceprint nodes, the sound factor contained in the target voice.
In this embodiment of the present invention, the electronic device can analyze the several voiceprint nodes comprehensively, so that the electronic device obtains the user's specific sound factor from the analysis of the several voiceprint nodes.
In this embodiment of the present invention, by implementing steps 303 to 305, voiceprint nodes with the user's personal features can be obtained from the voiceprint of the user's voice, so as to generate a sound factor with the user's personal features, reducing the difficulty of speech recognition.
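A rough sketch of steps 303 to 305, where the voiceprint is stood in for by a magnitude spectrogram, the voiceprint nodes by per-frame spectral peaks, and the sound factor by simple statistics over those peaks; none of these concrete choices are specified by the patent:

```python
import numpy as np

def voiceprint(signal: np.ndarray, frame: int = 512, hop: int = 256) -> np.ndarray:
    """Magnitude spectrogram used here as a stand-in for the voiceprint."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def voiceprint_nodes(vp: np.ndarray, n_nodes: int = 8) -> np.ndarray:
    """Pick the n strongest frequency bins of each frame as the 'nodes'."""
    return np.argsort(vp, axis=1)[:, -n_nodes:]

def sound_factor(nodes: np.ndarray, sample_rate: int = 16000, frame: int = 512):
    """Collapse the nodes into a small feature vector (mean and spread of peak
    frequencies), playing the role of the user-specific sound factor."""
    freqs = nodes * sample_rate / frame
    return np.array([freqs.mean(), freqs.std()])

audio = np.random.default_rng(0).standard_normal(16000)  # 1 s of placeholder audio
factor = sound_factor(voiceprint_nodes(voiceprint(audio)))
```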
306. The electronic device determines the sound variation curve of the user based on the preset voice development pattern model corresponding to the gender of the user, taking the target sound factor as a basis.
307. The electronic device recognizes the current speech of the user according to the sound variation curve.
308. The electronic device identifies, by semantic analysis, the target instruction contained in the current speech and detects whether the electronic device is in a screen-off state; if so, step 309 is performed; if not, step 310 is performed.
In this embodiment of the present invention, the target instruction may be speech containing specific words, and different instructions may correspond to different words (for example, the word corresponding to the wake-up instruction may be "small day", the word corresponding to the voice question-search instruction may be "small step", the word corresponding to the review instruction may be "small", and so on). The screen-off state of the electronic device may mean that the electronic device is currently powered off, or that it is currently in standby mode. When the electronic device is currently powered off and is to perform a wake-up operation, it needs to power on automatically; when the electronic device is currently in standby mode and is to perform a wake-up operation, it needs to switch to working mode and turn on its display screen.
309. The electronic device performs a wake-up operation and performs the operation corresponding to the target instruction.
310. The electronic device performs the operation corresponding to the target instruction.
In this embodiment of the present invention, by implementing steps 308 to 310, the electronic device can be turned on directly from the user's current speech and the application the user wants to open can be opened, which simplifies the steps the user takes to use the electronic device and also improves the efficiency with which the user uses it.
For example, the electronic device may be a home tutoring machine. When the home tutoring machine recognizes that the current speech is the voice of its user, it can identify the content of the current speech. When the home tutoring machine recognizes that the content of the current speech is "small step, small step", it can determine the target instruction contained in that content; the target instruction contained in "small step, small step" may be to open the voice question-search function, so the home tutoring machine needs to open that function. The home tutoring machine can determine whether it is currently in a screen-off state; if it is, it needs to light up its display screen, and while lighting up the display screen it can directly open the page or application containing the voice question-search function; if it is not in a screen-off state, it can immediately open the page or application containing the voice question-search function. In addition, the home tutoring machine can detect instructions for more functions, including but not limited to a voice question-search function, a test function, an audio and/or video learning function, a notes function, a photo question-search function, a review function, and so on. By identifying the target instruction in the current speech, the function corresponding to the target instruction can be triggered quickly, which improves the efficiency with which the user uses the home tutoring machine. An illustrative dispatch sketch is given below.
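An illustrative sketch of this dispatch, assuming a simple keyword-to-function table and a boolean screen-off flag; the phrase-to-function mapping only mirrors the example above and is not the device's real configuration:

```python
# Hypothetical mapping from trigger phrases (after semantic analysis) to functions.
INSTRUCTION_TABLE = {
    "small day": "wake_only",
    "small step small step": "voice_question_search",
    "review": "review",
}

def handle_current_speech(text: str, screen_off: bool, launch, wake_up):
    """Steps 308 to 310: find the target instruction, wake the device if the
    screen is off, then run the corresponding operation."""
    target = next((fn for phrase, fn in INSTRUCTION_TABLE.items() if phrase in text), None)
    if target is None:
        return
    if screen_off:
        wake_up()        # power on or light up the display (step 309)
    launch(target)       # operation corresponding to the target instruction

handle_current_speech("small step small step", screen_off=True,
                      launch=lambda fn: print("launch", fn),
                      wake_up=lambda: print("wake up"))
```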
In the method depicted in Fig. 3, the sound variation curve of the user can be generated from the sound factor in the previously obtained voice of the device user in combination with the voice development pattern model, so that the electronic device accurately recognizes the user's current voice according to the sound variation curve, thereby improving the accuracy of speech recognition by the electronic device. The sound factor can also be calculated and generated by extracting the voiceprint nodes of the user's voice, so that the sound factor calculated by the electronic device is more accurate.
Embodiment four
Referring to Fig. 4, Fig. 4 is a schematic structural diagram of an electronic device disclosed in an embodiment of the present invention. As shown in Fig. 4, the electronic device may include:
an extraction unit 401, configured to extract a target sound factor from a target voice input in advance by a user.
As an optional implementation, the extraction unit 401 may also be configured to:
detect whether the current user has a registered account;
if not, display a user information collection page and output a voice collection prompt; and, when the target voice input by the user is detected, associate the target voice with the user information and save it.
By implementing this implementation, the target voice of the user can be obtained when the user first uses the electronic device, so that the user can directly use the speech recognition function, improving the user's experience with the electronic device.
a determination unit 402, configured to determine the sound variation curve of the user based on a preset voice development pattern model, taking the target sound factor extracted by the extraction unit 401 as a basis.
As an optional implementation, the determination unit 402 may also be configured to:
obtain the current age of the user at preset time intervals;
update the user's sound variation curve according to the current age and delete information unrelated to the sound variation curve; and
store the updated sound variation curve of the user.
By implementing this implementation, the user's sound variation curve can be updated periodically, making the speech recognition that the electronic device performs for the user more accurate.
a recognition unit 403, configured to recognize the current speech of the user according to the sound variation curve determined by the determination unit 402.
It can be seen that, by implementing the electronic device described in Fig. 4, the sound variation curve of the user can be generated from the sound factor in the previously obtained voice of the device user in combination with the voice development pattern model, so that the electronic device accurately recognizes the user's current voice according to the sound variation curve, thereby improving the accuracy of speech recognition by the electronic device. The voice development pattern can also be pre-stored in the memory of the electronic device, which avoids the situation in which speech recognition cannot be performed because the sound variation curve is lost. In addition, a sound variation curve exclusive to the user of the electronic device can be generated, making the functions of the electronic device more user-friendly.
Embodiment five
Referring to Fig. 5, Fig. 5 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present invention. The electronic device shown in Fig. 5 is obtained by optimizing the electronic device shown in Fig. 4. Compared with the electronic device shown in Fig. 4, in the electronic device shown in Fig. 5 the recognition unit 403 may include:
a detection subunit 4031, configured to detect the current speech input by the user;
an acquisition subunit 4032, configured to obtain, according to the sound variation curve determined by the determination unit 402, the sound variation stage in which the user's voice is on the current date and the current sound factor of the user's voice within that sound variation stage; and
a first recognition subunit 4033, configured to recognize the current speech detected by the detection subunit 4031, taking the current sound factor obtained by the acquisition subunit 4032 as a basis.
As an optional implementation, the way in which the first recognition subunit 4033 recognizes the current speech by taking the current sound factor as a basis may be:
obtaining the current speech in the environment around the electronic device;
identifying several target sound factors contained in the current speech;
judging whether, among the several target sound factors, there is a target sound factor that matches any one of the current sound factors obtained in advance by the electronic device; and
if there is, determining that the user corresponding to the current speech is the user pre-stored by the electronic device.
By implementing this implementation, several sound factors can be obtained; as soon as any one of them matches the current sound factor of the electronic device's user, the user of the electronic device can be identified, which reduces the error of the electronic device in recognizing the user by voice and guarantees the accuracy of the electronic device's speech recognition.
In this embodiment of the present invention, the current sound factor of the user's sound variation curve can be obtained, and by comparing the current sound factor with the sound factor of the current speech, the user matching the sound variation curve is identified, improving the accuracy of speech recognition.
As an optional implementation, the electronic device shown in Fig. 5 may further include:
a collection unit 404, configured to collect voice information of a large number of users before the determination unit 402 determines the sound variation curve of the user based on the preset voice development pattern model and taking the target sound factor as a basis, the voice information including at least the sound factor corresponding to each age bracket of each user; and
a generation unit 405, configured to compute, by a big data computing method, all voice information of the same gender collected by the collection unit 404, so as to generate the voice development pattern model corresponding to that gender.
By implementing this implementation, the voice information of a massive number of users can be divided into two groups by gender, and two voice development pattern models, one per gender, are then computed by big data, so that a more accurate voice development pattern model is obtained for each gender.
As an optional implementation, the way in which the collection unit 404 collects the voice information of a large number of users may be:
obtaining the sound factor corresponding to each age bracket of the user of each electronic device, the user's age brackets being divided according to a preset rule;
associating and matching all sound factors with the corresponding user and the user's age bracket, and integrating all the age brackets corresponding to each user and all the sound factors corresponding to each age bracket, so as to generate a voice information packet corresponding to the user; and
sending the voice information packet to a server.
By implementing this implementation, detailed voice information of all electronic device users can be obtained, and the voice information of each user is integrated and stored by age bracket, so that the electronic device can call up the voice information of a large number of users at any time, and the accuracy of the generated voice development pattern model is improved.
As an optional implementation, the generation unit 405 may also be configured to:
analyze the voice development pattern model and divide it into several sound variation stages according to how the model changes;
obtain the average age range of the users corresponding to each sound variation stage; and
associate the average age range of the users corresponding to each sound variation stage and save it to a service device that has established a connection with the electronic device in advance.
By implementing this implementation, the sound variation patterns of a large number of users can be analyzed comprehensively to obtain sound variation stages divided according to those patterns, so that the division of the sound variation stages is more reasonable.
As an optional implementation, the way in which the determination unit 402 determines the sound variation curve of the user based on the preset voice development pattern model and taking the target sound factor extracted by the extraction unit 401 as a basis may specifically be:
determining the sound variation curve of the user based on the preset voice development pattern model corresponding to the gender of the user, taking the target sound factor as a basis.
By implementing this implementation, the user's sound variation curve can be determined according to the voice development pattern model corresponding to the user's gender; because the voice changes of males and females differ greatly, distinguishing the male and female voice development pattern models makes the determined sound variation curve of the user more accurate.
It can be seen that, by implementing the electronic device described in Fig. 5, the sound variation curve of the user can be generated from the sound factor in the previously obtained voice of the device user in combination with the voice development pattern model, so that the electronic device accurately recognizes the user's current voice according to the sound variation curve, thereby improving the accuracy of speech recognition by the electronic device. Massive amounts of user voice information can also be analyzed by big data technology, so that the voice development pattern model can be obtained quickly, which guarantees the operating efficiency of the electronic device. In addition, the electronic device can uniquely identify its user in a noisy environment according to the user's exclusive sound variation model, so that the user can successfully use the speech recognition function of the electronic device in various environments.
Embodiment six
Referring to Fig. 6, Fig. 6 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present invention. The electronic device shown in Fig. 6 is obtained by optimizing the electronic device shown in Fig. 5. Compared with the electronic device shown in Fig. 5, in the electronic device shown in Fig. 6 the extraction unit 401 may include:
a second recognition subunit 4011, configured to identify the voiceprint of the target voice input in advance by the user;
an extraction subunit 4012, configured to extract several voiceprint nodes from the voiceprint identified by the second recognition subunit 4011; and
a calculation subunit 4013, configured to calculate and generate, on the basis of the several voiceprint nodes extracted by the extraction subunit 4012, the sound factor contained in the target voice.
In this embodiment of the present invention, voiceprint nodes with the user's personal features can be obtained from the voiceprint of the user's voice, so as to generate a sound factor with the user's personal features, reducing the difficulty of speech recognition.
As an optional implementation, the electronic device shown in Fig. 6 may further include:
a detection unit 406, configured to identify, by semantic analysis, the target instruction contained in the current speech after the recognition unit 403 recognizes the current speech of the user according to the sound variation curve, and to detect whether the electronic device is in a screen-off state;
a first control unit 407, configured to control the electronic device to perform a wake-up operation and to perform the operation corresponding to the target instruction when the detection result of the detection unit 406 is yes; and
a second control unit 408, configured to control the electronic device to perform the operation corresponding to the target instruction when the detection result of the detection unit 406 is no.
By implementing this implementation, the electronic device can be turned on directly from the user's current speech and the application the user wants to open can be opened, which simplifies the steps the user takes to use the electronic device and also improves the efficiency with which the user uses it.
It can be seen that, by implementing the electronic device described in Fig. 6, the sound variation curve of the user can be generated from the sound factor in the previously obtained voice of the device user in combination with the voice development pattern model, so that the electronic device accurately recognizes the user's current voice according to the sound variation curve, thereby improving the accuracy of speech recognition by the electronic device. The sound factor can also be calculated and generated by extracting the voiceprint nodes of the user's voice, so that the sound factor calculated by the electronic device is more accurate.
Embodiment seven
Referring to Fig. 7, Fig. 7 is a schematic structural diagram of another electronic device disclosed in an embodiment of the present invention. As shown in Fig. 7, the electronic device may include:
a memory 701 storing executable program code; and
a processor 702 coupled to the memory 701;
the processor 702 calling the executable program code stored in the memory 701 to perform some or all of the steps of the methods in the method embodiments above.
An embodiment of the present invention also discloses a computer-readable storage medium storing program code, the program code including instructions for performing some or all of the steps of the methods in the method embodiments above.
An embodiment of the present invention also discloses a computer program product which, when run on a computer, causes the computer to perform some or all of the steps of the methods in the method embodiments above.
An embodiment of the present invention also discloses an application distribution platform for distributing a computer program product, wherein, when the computer program product is run on a computer, the computer is caused to perform some or all of the steps of the methods in the method embodiments above.
It should be understood that the embodiments described in this specification are optional embodiments, and the actions and modules involved are not necessarily required by the present invention. A person skilled in the art should also know that the sequence numbers of the above processes do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
In the embodiments provided by the present invention, it should be understood that "B corresponding to A" means that B is associated with A and that B can be determined according to A. It should also be understood that determining B according to A does not mean determining B only according to A; B may also be determined according to A and/or other information.
A person of ordinary skill in the art can understand that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium. The storage medium includes read-only memory (ROM), random access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), one-time programmable read-only memory (OTPROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disc storage, magnetic disk storage, magnetic tape storage, or any other computer-readable medium that can be used to carry or store data.
The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-accessible memory. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a memory and includes several requests for causing a computer device (which may be a personal computer, a server, a network device, or the like, and specifically may be a processor in a computer device) to perform some or all of the steps of the methods of the embodiments of the present invention.
A speech recognition method and an electronic device disclosed in the embodiments of the present invention have been described in detail above. Specific examples have been used herein to explain the principles and implementation of the present invention, and the descriptions of the above embodiments are only intended to help understand the method of the present invention and its core idea. At the same time, for a person skilled in the art, there will be changes in the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

1. a kind of audio recognition method, which is characterized in that the method includes:
The target sound factor is extracted from the target voice that user pre-enters;
Based on preset Phonetic Speech Development rule model using the target sound factor as foundation, determine that the sound of the user becomes Change curve;
According to the current speech of user described in the sound variation Curves Recognition.
2. the method according to claim 1, wherein the user according to the sound variation Curves Recognition Current speech, including:
Detect the current speech of user's input;
According to the sound variation curve, sound variation stage and institute of the sound of the user locating for current date are obtained State the current sound factor of the sound of user within the sound variation stage;
Using the current sound factor as foundation, the current speech is identified.
3. method according to claim 1 or 2, which is characterized in that it is described based on preset Phonetic Speech Development rule model with The target sound factor is foundation, before the sound variation curve for determining the user, the method also includes:
Collect mass users voice messaging, the voice messaging include at least the corresponding sound of each user's all age group because Son;
It is calculated according to big data calculation method all voice messagings identical to gender, it is corresponding with gender to generate The Phonetic Speech Development rule model;
It is described to be based on preset Phonetic Speech Development rule model using the target sound factor as foundation, determine the sound of the user Sound change curve, including:
Based on preset Phonetic Speech Development rule model corresponding with the gender of the user, with the target sound factor be according to According to determining the sound variation curve with user.
4. described in any item methods according to claim 1~3, which is characterized in that the target language pre-entered from user The target sound factor is extracted in sound, including:
The vocal print for the target voice that identification user pre-enters;
Extract several vocal print nodes in the vocal print;
Based on several described vocal print nodes, calculates and generate the sound factor that the target voice includes.
5. The method according to any one of claims 1 to 4, characterized in that after recognizing the current speech of the user according to the sound variation curve, the method further comprises:
identifying, through semantic analysis, a target instruction contained in the current speech, and detecting whether an electronic device is in a blank-screen state;
if the electronic device is in the blank-screen state, controlling the electronic device to perform a wake-up operation and to perform an operation corresponding to the target instruction;
if the electronic device is not in the blank-screen state, controlling the electronic device to perform the operation corresponding to the target instruction.
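The post-recognition control flow of claim 5 amounts to a small decision tree, sketched below. The keyword table and the action names are placeholders, since the claim leaves the semantic analysis and the wake-up mechanism unspecified.

from typing import Optional

# Hypothetical mapping from recognized phrases to target instructions.
COMMANDS = {"open camera": "launch_camera", "play music": "launch_player"}

def parse_target_instruction(transcript: str) -> Optional[str]:
    """Minimal 'semantic analysis': map the recognized text to a target instruction."""
    return next((cmd for phrase, cmd in COMMANDS.items() if phrase in transcript.lower()), None)

def handle_recognized_speech(transcript: str, screen_is_blank: bool) -> list[str]:
    """Wake the device first if it is in the blank-screen state, then run the instruction."""
    actions: list[str] = []
    instruction = parse_target_instruction(transcript)
    if instruction is None:
        return actions
    if screen_is_blank:
        actions.append("wake_screen")
    actions.append(instruction)
    return actions

print(handle_recognized_speech("Please open camera", screen_is_blank=True))
# ['wake_screen', 'launch_camera']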
6. An electronic device, characterized by comprising:
an extraction unit, configured to extract a target sound factor from a target speech pre-entered by a user;
a determination unit, configured to determine a sound variation curve of the user based on a preset phonetic development rule model, using the target sound factor as a basis;
a recognition unit, configured to recognize a current speech of the user according to the sound variation curve.
7. The electronic device according to claim 6, characterized in that the recognition unit comprises:
a detection subunit, configured to detect the current speech input by the user;
an obtaining subunit, configured to obtain, according to the sound variation curve, the sound variation stage in which the user's voice lies at the current date and the current sound factor of the user's voice within that sound variation stage;
a first recognition subunit, configured to recognize the current speech using the current sound factor as a basis.
8. The electronic device according to claim 6 or 7, characterized in that the electronic device further comprises:
a collection unit, configured to collect voice information of a large number of users before the determination unit determines the sound variation curve of the user based on the preset phonetic development rule model using the target sound factor as a basis, the voice information at least including the sound factor corresponding to each user at each age stage;
a generation unit, configured to perform a big-data calculation on all voice information of users of the same gender to generate the phonetic development rule model corresponding to that gender;
wherein the determination unit is specifically configured to determine the sound variation curve of the user based on the preset phonetic development rule model corresponding to the gender of the user, using the target sound factor as a basis.
9. The electronic device according to any one of claims 6 to 8, characterized in that the extraction unit comprises:
a second recognition subunit, configured to identify a voiceprint of the target speech pre-entered by the user;
an extraction subunit, configured to extract several voiceprint nodes from the voiceprint;
a calculation subunit, configured to calculate and generate, based on the several voiceprint nodes, the sound factor contained in the target speech.
10. The electronic device according to any one of claims 6 to 9, characterized in that the electronic device further comprises:
a detection unit, configured to, after the recognition unit recognizes the current speech of the user according to the sound variation curve, identify through semantic analysis a target instruction contained in the current speech, and detect whether the electronic device is in a blank-screen state;
a first control unit, configured to control, when the detection result of the detection unit is yes, the electronic device to perform a wake-up operation and an operation corresponding to the target instruction;
a second control unit, configured to control, when the detection result of the detection unit is no, the electronic device to perform the operation corresponding to the target instruction.
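For orientation, the apparatus of claims 6 to 10 maps naturally onto a class whose methods play the roles of the extraction, determination, and recognition units. The sketch below reuses the simplified pitch-based sound factor from the earlier examples; the wiring shown is an assumption about structure, not the patented design.

from typing import Callable, Optional

class SpeechRecognitionDevice:
    def __init__(self, development_model: Callable[[float, float], float]):
        # development_model: (base sound factor in Hz, years elapsed) -> predicted factor
        self.development_model = development_model
        self.curve: Optional[Callable[[float], float]] = None

    def extraction_unit(self, pitch_track_hz: list[float]) -> float:
        """Extract the target sound factor from the pre-entered target speech."""
        return sum(pitch_track_hz) / len(pitch_track_hz)

    def determination_unit(self, target_factor_hz: float) -> None:
        """Determine the user's sound variation curve from the target sound factor."""
        self.curve = lambda years: self.development_model(target_factor_hz, years)

    def recognition_unit(self, current_factor_hz: float, years: float,
                         tolerance_hz: float = 30.0) -> bool:
        """Recognize the current speech against the curve's prediction."""
        assert self.curve is not None, "determination_unit must run first"
        return abs(current_factor_hz - self.curve(years)) <= tolerance_hz

device = SpeechRecognitionDevice(lambda base, years: base - 8.0 * years)
device.determination_unit(device.extraction_unit([258.0, 262.0, 261.0]))
print(device.recognition_unit(238.0, years=3.0))  # True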
CN201810602734.5A 2018-06-12 2018-06-12 Voice recognition method and electronic equipment Active CN108877773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810602734.5A CN108877773B (en) 2018-06-12 2018-06-12 Voice recognition method and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810602734.5A CN108877773B (en) 2018-06-12 2018-06-12 Voice recognition method and electronic equipment

Publications (2)

Publication Number Publication Date
CN108877773A (en) 2018-11-23
CN108877773B (en) 2020-07-24

Family

ID=64338194

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810602734.5A Active CN108877773B (en) 2018-06-12 2018-06-12 Voice recognition method and electronic equipment

Country Status (1)

Country Link
CN (1) CN108877773B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103151039A (en) * 2013-02-07 2013-06-12 中国科学院自动化研究所 Speaker age identification method based on SVM (Support Vector Machine)
CN103544393A (en) * 2013-10-23 2014-01-29 北京师范大学 Method for tracking development of language abilities of children
CN104700843A (en) * 2015-02-05 2015-06-10 海信集团有限公司 Method and device for identifying ages
CN106200886A (en) * 2015-04-30 2016-12-07 包伯瑜 Intelligent mobile toy operated through language interaction, and method of using the toy
US20170213555A1 (en) * 2015-05-22 2017-07-27 Kabushiki Kaisha Toshiba Minutes taking system, minutes taking method, and image forming apparatus
CN105575384A (en) * 2016-01-13 2016-05-11 广东小天才科技有限公司 Method, apparatus and equipment for automatically adjusting play resource according to the level of user

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHENG Fang et al.: "Voiceprint Recognition Technology and Its Application Status", Journal of Information Security Research *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109545196A (en) * 2018-12-29 2019-03-29 深圳市科迈爱康科技有限公司 Audio recognition method, device and computer readable storage medium
CN109545196B (en) * 2018-12-29 2022-11-29 深圳市科迈爱康科技有限公司 Speech recognition method, device and computer readable storage medium
CN110336723A (en) * 2019-07-23 2019-10-15 珠海格力电器股份有限公司 Control method and device for an intelligent household appliance, and intelligent household appliance

Also Published As

Publication number Publication date
CN108877773B (en) 2020-07-24

Similar Documents

Publication Publication Date Title
CN106601237B (en) Interactive voice response system and voice recognition method thereof
US6314411B1 (en) Artificially intelligent natural language computational interface system for interfacing a human to a data processor having human-like responses
CN109635096A Dictation reminding method and electronic device
US20150058014A1 (en) System and method for managing conversation
CN106796787A Context interpretation in natural language processing using previous dialog acts
CN108538293B (en) Voice awakening method and device and intelligent device
CN109166564A Method, apparatus and computer-readable storage medium for generating a melody from lyrics text
CN108920568A Search method based on an electronic device, and electronic device
CN108735210A Voice control method and terminal
JP2004355003A (en) System and method for user modelling to enhance named entity recognition
TW201113870A (en) Method for analyzing sentence emotion, sentence emotion analyzing system, computer readable and writable recording medium and multimedia device
CN109410664A Pronunciation correction method and electronic device
CN108959483A Search-based learning assistance method and electronic device
CN108920450A Knowledge point review method based on an electronic device, and electronic device
CN109086455A Construction method of a speech recognition library, and learning device
CN109583401A Question search method for automatically generating answers, and user equipment
CN109902187A Construction method and apparatus for a feature knowledge graph, and terminal device
CN108877773A (en) A kind of audio recognition method and electronic equipment
CN108766431A Automatic wake-up method based on speech recognition, and electronic device
KR20060070605A (en) Using domain dialogue model and language model in intelligent robot speech recognition service device and method
CN109410935A Destination search method and apparatus based on speech recognition
CN109671309A Mispronunciation recognition method and electronic device
CN109658776A Recitation fluency detection method and electronic device
CN106710591A (en) Voice customer service system for power terminal
CN109582780A Intelligent question answering method and apparatus based on user emotion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant