CN108766441B - Voice control method and device based on offline voiceprint recognition and voice recognition - Google Patents
- Publication number
- CN108766441B (granted publication of application CN201810533494.8A)
- Authority
- CN
- China
- Prior art keywords
- voice
- voiceprint
- template
- feature
- command word
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
- G10L17/02—Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L17/00—Speaker identification or verification
- G10L17/04—Training, enrolment or model building
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Abstract
A voice control method based on offline voiceprint recognition and voice recognition comprises the following steps: receiving wake word voice, and extracting a first voice feature and a first voiceprint feature of the wake word voice; checking whether the extracted first voice feature and first voiceprint feature respectively match a wake word voice template and a voiceprint template, and obtaining a first voiceprint code corresponding to the first voiceprint feature; receiving command word voice, and extracting a second voiceprint feature of the command word voice; checking whether the second voiceprint feature matches the voiceprint template, and obtaining a second voiceprint code corresponding to the second voiceprint feature; checking whether the first voiceprint code is identical to the second voiceprint code, and extracting a second voice feature of the command word voice; and checking whether the extracted second voice feature matches a command word voice template, obtaining the voice code of the second voice feature, and generating a corresponding control instruction based on that voice code.
Description
Technical Field
The present application relates to the field of speaker verification technologies, and in particular to a voice control method based on offline voiceprint recognition and voice recognition and to a device implementing the method.
Background
With the recent maturation and popularization of voice recognition technology, issuing control instructions to electronic devices by voice has been successfully applied in many consumer electronic products (for example, the Siri function of Apple's iPhone). This voice-based device control technology relies on Speaker Verification, a branch of voice recognition that confirms both whether a given utterance was produced by a designated user (for example, the owner of a mobile phone or a person authorized to use the device) and which control instruction the content of the utterance corresponds to.
Compared with traditional control technologies, voice-based control offers users a friendlier and more convenient way to interact with electronic devices (for example, users need not type a password to verify their authorization). However, existing solutions suffer from two problems: recognition is unstable because speech is easily affected by external conditions (such as background noise and the speaker's own state), and determining the speech content and converting natural language into a form the device can accept usually requires the device to connect online to an external database for semantic conversion. Both problems raise the cost of using voice-based device control.
Disclosure of Invention
To remedy these defects of the prior art, the present application provides a voice control method and device based on offline voiceprint recognition and voice recognition, which control an electronic device by voice entirely offline while minimizing the influence of external conditions on recognition.
To achieve the above object, the present application first proposes a voice control method based on offline voiceprint recognition and voice recognition, comprising the following steps: receiving wake word voice, and extracting a first voice feature and a first voiceprint feature of the wake word voice; checking whether the extracted first voice feature and first voiceprint feature respectively match the wake word voice template and the voiceprint template, ending the process if not, and otherwise obtaining a first voiceprint code corresponding to the first voiceprint feature; receiving command word voice, and extracting a second voiceprint feature of the command word voice; checking whether the second voiceprint feature matches the voiceprint template, ending if not, and otherwise obtaining a second voiceprint code corresponding to the second voiceprint feature; checking whether the first voiceprint code is identical to the second voiceprint code, ending if not, and otherwise extracting a second voice feature of the command word voice; and checking whether the extracted second voice feature matches the command word voice template, ending if not, and otherwise obtaining the voice code of the second voice feature and generating a corresponding control instruction based on that voice code. The wake word voice template, the command word voice template, and the voiceprint template are all stored locally.
In a preferred embodiment of the above method, the wake word voice template and the command word voice template are generated by training on pre-collected speech.
In a preferred embodiment of the above method, the voiceprint template is generated by training on speech pre-collected from at least one user.
In a preferred embodiment of the above method, the correspondence between voices and voice codes is user-definable.
Further, in the above preferred embodiment, the correspondence between voices and voice codes is stored locally.
In a preferred embodiment of the above method, the wake word voice template and the command word voice template are trained on dynamically updated collected speech.
In a preferred embodiment of the above method, the voiceprint template is trained on dynamically updated speech of at least one designated person.
Secondly, the present application also provides a voice control device based on offline voiceprint recognition and voice recognition, comprising the following modules: a first receiving module for receiving wake word voice and extracting a first voice feature and a first voiceprint feature of the wake word voice; a first checking module for checking whether the extracted first voice feature and first voiceprint feature respectively match the wake word voice template and the voiceprint template, ending the process if not, and otherwise obtaining a first voiceprint code corresponding to the first voiceprint feature; a second receiving module for receiving command word voice and extracting a second voiceprint feature of the command word voice; a second checking module for checking whether the second voiceprint feature matches the voiceprint template, ending if not, and otherwise obtaining a second voiceprint code corresponding to the second voiceprint feature; a third checking module for checking whether the first voiceprint code is identical to the second voiceprint code, ending if not, and otherwise extracting a second voice feature of the command word voice; and an instruction generating module for checking whether the extracted second voice feature matches the command word voice template, ending if not, and otherwise obtaining the voice code of the second voice feature and generating a corresponding control instruction based on that voice code. The wake word voice template, the command word voice template, and the voiceprint template are all stored locally.
In a preferred embodiment of the above device, the wake word voice template and the command word voice template are generated by training on pre-collected speech.
In a preferred embodiment of the above device, the voiceprint template is generated by training on speech pre-collected from at least one user.
In a preferred embodiment of the above device, the correspondence between voices and voice codes is user-definable.
Further, in the above preferred embodiment, the correspondence between voices and voice codes is stored locally.
In a preferred embodiment of the above device, the wake word voice template and the command word voice template are trained on dynamically updated collected speech.
In a preferred embodiment of the above device, the voiceprint template is trained on dynamically updated speech of at least one designated person.
Finally, the present application also discloses a computer-readable storage medium having computer instructions stored thereon which, when executed by a processor, carry out the steps of any of the methods described above.
The beneficial effect of the present application is that the speaker's identity and the content of the speech can be conveniently confirmed against locally stored voice and voiceprint templates, improving the usability of voice-controlled electronic devices.
Drawings
FIG. 1 is a flow diagram illustrating one embodiment of a method for speech control based on offline voiceprint recognition and speech recognition;
FIG. 2 is a schematic configuration diagram of the device according to the embodiment of FIG. 1;
FIG. 3 is a diagram illustrating a user-defined correspondence between speech and speech codes;
FIG. 4 is a block diagram of an embodiment of a voice control apparatus based on offline voiceprint recognition and voice recognition.
Detailed Description
The conception, specific structure, and technical effects of the present application are described clearly and completely below with reference to the embodiments and the accompanying drawings, so that its purpose, scheme, and effects can be fully understood. The embodiments of the present application, and the features within them, may be combined with one another provided they do not conflict. The same reference numbers are used throughout the drawings to refer to the same or similar parts.
In this context, unless explicitly stated otherwise, wake word voice refers to speech uttered by a user authorized to use the electronic device in order to verify the user's identity and initiate the device control flow. Only when the wake word voice satisfies certain conditions will the device accept further voice input. Correspondingly, command word voice refers to the voice instruction with actual, specific meaning that the user issues to the device after the wake word voice has been confirmed.
FIG. 1 is a flow diagram illustrating one embodiment of the voice control method based on offline voiceprint recognition and voice recognition. The method comprises the following steps: receiving wake word voice, and extracting a first voice feature and a first voiceprint feature of the wake word voice; checking whether the extracted first voice feature and first voiceprint feature respectively match the wake word voice template and the voiceprint template, ending the process if not, and otherwise obtaining a first voiceprint code corresponding to the first voiceprint feature; receiving command word voice, and extracting a second voiceprint feature of the command word voice; checking whether the second voiceprint feature matches the voiceprint template, ending if not, and otherwise obtaining a second voiceprint code corresponding to the second voiceprint feature; checking whether the first voiceprint code is identical to the second voiceprint code, ending if not, and otherwise extracting a second voice feature of the command word voice; and checking whether the extracted second voice feature matches the command word voice template, ending if not, and otherwise obtaining the voice code of the second voice feature and generating a corresponding control instruction based on that voice code. As shown in the schematic diagram of FIG. 2, the wake word voice template, the command word voice template, and the voiceprint template are all stored locally. Whenever a voice feature or voiceprint feature fails to match the corresponding locally stored template, the method forcibly ends the process and returns to waiting for the user to input wake word voice again.
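The sequence of checks above can be sketched as follows; features are reduced to plain strings and the templates to dictionaries, and all names and values are invented stand-ins rather than a real recognizer:

```python
# Toy, self-contained sketch of the control flow in FIG. 1. Features are
# modeled as plain strings and templates as dictionaries mapping a feature
# to its code; a real system would use acoustic feature vectors and
# statistical matchers. All template contents are illustrative.

WAKE_TEMPLATE = {"ni-hao-xiao-ming": "WAKE"}              # voice feature -> meaning
COMMAND_TEMPLATE = {"kai-deng": "CMD_LIGHT_ON"}           # voice feature -> voice code
VOICEPRINT_TEMPLATE = {"alice-print": 1, "bob-print": 2}  # voiceprint -> voiceprint code

def process(wake_voice, wake_print, cmd_voice, cmd_print):
    """Return a control instruction, or None as soon as any check fails."""
    # Wake stage: voice feature and voiceprint must both match local templates.
    if wake_voice not in WAKE_TEMPLATE or wake_print not in VOICEPRINT_TEMPLATE:
        return None
    first_code = VOICEPRINT_TEMPLATE[wake_print]

    # Command stage: the voiceprint check comes first ...
    if cmd_print not in VOICEPRINT_TEMPLATE:
        return None
    second_code = VOICEPRINT_TEMPLATE[cmd_print]

    # ... and both utterances must come from the same speaker.
    if first_code != second_code:
        return None

    # Finally, recognize the command content and map it to an instruction.
    if cmd_voice not in COMMAND_TEMPLATE:
        return None
    return COMMAND_TEMPLATE[cmd_voice]

print(process("ni-hao-xiao-ming", "alice-print", "kai-deng", "alice-print"))  # CMD_LIGHT_ON
print(process("ni-hao-xiao-ming", "alice-print", "kai-deng", "bob-print"))    # None (speaker changed)
```

Note that every lookup happens against in-memory (locally stored) tables, mirroring the offline requirement of the method.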
The first and second voiceprint features are spectrogram-based feature parameters formed from the stable characteristics of a person's speech and from physical quantities of the collected speech (such as timbre, duration, intensity, and pitch). Further, in one embodiment of the present application, the voiceprint template is generated by extracting the voiceprint features of a plurality of users who are authorized to use the electronic device, and grouping and sorting those features according to each user's usage rights. The voiceprint features can be derived from the user's speech by algorithms conventional in the art, which this application does not limit.
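As a minimal illustration of the physical quantities mentioned above, the following sketch measures duration, RMS intensity, and a crude zero-crossing pitch estimate from a raw waveform; a practical voiceprint system would use far richer parameters (e.g. cepstral features), so this is only a toy demonstration that such quantities are measurable offline:

```python
# Compute simple physical quantities (duration, intensity, pitch) from a
# list of samples. This is an illustrative toy, not a real voiceprint.
import math

def simple_voiceprint(samples, sample_rate):
    duration = len(samples) / sample_rate                              # seconds
    intensity = math.sqrt(sum(s * s for s in samples) / len(samples))  # RMS level
    # Crude pitch estimate from zero crossings (two crossings per period).
    crossings = sum(1 for a, b in zip(samples, samples[1:])
                    if (a < 0 <= b) or (b < 0 <= a))
    pitch = crossings / (2 * duration)                                 # Hz, rough
    return {"duration": duration, "intensity": intensity, "pitch": pitch}

# 0.5 s of a 220 Hz sine tone sampled at 8 kHz.
rate = 8000
tone = [math.sin(2 * math.pi * 220 * n / rate) for n in range(rate // 2)]
print(simple_voiceprint(tone, rate))  # pitch comes out close to 220 Hz
```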
Similarly, the first and second voice features are feature parameters formed from the collected speech according to the words, phonemes, tones, and the like of a specific language. These feature parameters are matched against the feature parameters of the groups of voices already labeled with specific meanings in the voice templates (the wake word voice template and the command word voice template) to determine the specific meaning of the user's utterance. The voice features may likewise be derived by algorithms conventional in the art, which this application does not limit.
To reduce the computational load of the system, in one embodiment of the present application only the first voice feature is extracted upon receiving the wake word voice. Only when the first voice feature matches one of the groups of feature parameters recorded in the wake word voice template is the first voiceprint feature of the wake word voice extracted; if the first voice feature matches none of the feature parameters in the wake word voice template, the user is prompted to utter the wake word again. The related match determinations (the matching of the first voice feature to the wake word voice template, of the first voiceprint feature to the voiceprint template, and of the second voice feature to the command word voice template) can be implemented with matching algorithms conventional in the art, which this application does not limit.
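The computation-saving order described above (check the cheap voice feature first, and extract the voiceprint feature only on success) can be sketched with stub extractors; all names are hypothetical, and the counter merely demonstrates that the expensive voiceprint step is skipped when the wake word content fails to match:

```python
# Two-stage wake check: content match gates the costly voiceprint step.
# voice_feature / voiceprint_feature are illustrative stubs.
calls = {"voiceprint": 0}

def voice_feature(audio):
    return audio["text"]            # stand-in for cheap feature extraction

def voiceprint_feature(audio):
    calls["voiceprint"] += 1        # the expensive step we want to avoid
    return audio["speaker"]

def wake_check(audio, wake_words=frozenset({"hello-device"}),
               known=frozenset({"alice"})):
    if voice_feature(audio) not in wake_words:
        return False                # re-prompt the user; no voiceprint work done
    return voiceprint_feature(audio) in known

wake_check({"text": "wrong-word", "speaker": "alice"})    # fails fast
assert calls["voiceprint"] == 0                           # voiceprint step skipped
wake_check({"text": "hello-device", "speaker": "alice"})  # full two-stage check
assert calls["voiceprint"] == 1
```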
In one embodiment of the present application, the wake word voice template and the command word voice template are generated by training on pre-collected speech. Specifically, the user can input the wake word voice and the command word voice multiple times in advance, and the two templates are refined through supervised training, improving the accuracy of voice recognition.
Similarly, in one embodiment of the present application, the voiceprint template is generated by training on speech pre-collected from at least one user. Correspondingly, one or more users authorized to use the electronic device input the wake word voice and the command word voice multiple times in advance, and the voiceprint template is refined through supervised training, improving the accuracy of voiceprint recognition.
Referring to the schematic diagram of the user-defined correspondence between voices and voice codes shown in FIG. 3, in one embodiment of the present application the correspondence between voices and voice codes can be configured by the user according to the actual electronic device and the language the user speaks. Because the user can customize this correspondence, the specific instruction issued to the device is independent of the particular language in which the command word voice is uttered. For example, command word voice uttered in English or in Chinese can be received and converted into the same control instruction by modifying the feature parameters of voices already labeled with specific meanings in the command word voice template so that the English or Chinese utterance is associated with the designated voice code.
Further, in the above embodiment of the present application, the correspondence between voices and voice codes is also stored locally, so that voice-based control of the electronic device can be achieved without a network connection.
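A locally stored, user-defined voice-to-voice-code table of the kind shown in FIG. 3 might look like the following sketch; the command phrases and code names are invented examples, and persistence to a local JSON file merely stands in for whatever local storage the device actually uses:

```python
# User-defined mapping from recognized command words (in any language)
# to fixed voice codes, persisted locally so no network is needed.
import json
import os
import tempfile

mapping = {
    "turn on the light": "CODE_01",  # English command word
    "kai deng": "CODE_01",           # the same instruction uttered in Chinese
    "turn off the light": "CODE_02",
}

# Persist the table locally (a temp file here, for illustration).
path = os.path.join(tempfile.gettempdir(), "voice_codes.json")
with open(path, "w") as f:
    json.dump(mapping, f)

# At control time the device only needs the local file.
with open(path) as f:
    local = json.load(f)
print(local["kai deng"])  # CODE_01 - instruction is language-independent
```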
In one embodiment of the present application, the wake word voice template and the command word voice template are trained on dynamically updated collected speech. By regularly updating the wake word voice and the command word voice, the user raises the security factor of the electronic device and prevents its misuse by unauthorized persons.
Similarly, in one embodiment of the present application, the voiceprint template is trained on dynamically updated speech of at least one designated person, so that the user's voiceprint characteristics are kept up to date (particularly for users whose voices are changing, such as adolescents or users who have recently undergone laryngeal surgery).
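The dynamic updating described above can be illustrated, under the simplifying assumption that a template is a single running-average feature value, by an exponential moving average in which recent enrolment samples weigh more; real systems would retrain a full model instead:

```python
# Toy dynamic template update: an exponential moving average over newly
# collected one-number features, e.g. tracking a slowly changing voice.
def update_template(template, new_feature, weight=0.2):
    """Blend the old template with a fresh enrolment sample."""
    return (1 - weight) * template + weight * new_feature

template = 100.0                      # originally enrolled feature value
for sample in [110.0, 112.0, 115.0]:  # periodic re-enrolment samples
    template = update_template(template, sample)
print(round(template, 2))  # 106.2 - the template has drifted toward recent speech
```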
FIG. 4 is a block diagram of an embodiment of the voice control device based on offline voiceprint recognition and voice recognition. The illustrated device comprises the following modules: a first receiving module for receiving wake word voice and extracting a first voice feature and a first voiceprint feature of the wake word voice; a first checking module for checking whether the extracted first voice feature and first voiceprint feature respectively match the wake word voice template and the voiceprint template, ending the process if not, and otherwise obtaining a first voiceprint code corresponding to the first voiceprint feature; a second receiving module for receiving command word voice and extracting a second voiceprint feature of the command word voice; a second checking module for checking whether the second voiceprint feature matches the voiceprint template, ending if not, and otherwise obtaining a second voiceprint code corresponding to the second voiceprint feature; a third checking module for checking whether the first voiceprint code is identical to the second voiceprint code, ending if not, and otherwise extracting a second voice feature of the command word voice; and an instruction generating module for checking whether the extracted second voice feature matches the command word voice template, ending if not, and otherwise obtaining the voice code of the second voice feature and generating a corresponding control instruction based on that voice code. As shown in the schematic diagram of FIG. 2, the wake word voice template, the command word voice template, and the voiceprint template are all stored locally.
Whenever a voice feature or voiceprint feature fails to match the corresponding locally stored template, the device returns to waiting for the user to input wake word voice again.
To reduce the computational load of the system, in one embodiment of the present application the first receiving module extracts only the first voice feature upon receiving the wake word voice. Only when the first checking module determines that the first voice feature matches one of the groups of feature parameters recorded in the wake word voice template does the first receiving module extract the first voiceprint feature of the wake word voice; if the first checking module determines that the first voice feature matches none of the feature parameters in the wake word voice template, the first receiving module prompts the user to utter the wake word again. The related match determinations (the matching of the first voice feature to the wake word voice template, of the first voiceprint feature to the voiceprint template, and of the second voice feature to the command word voice template) can be implemented with matching algorithms conventional in the art, which this application does not limit.
In one embodiment of the present application, the wake word voice template and the command word voice template are generated by training on pre-collected speech. Specifically, the user can input the wake word voice and the command word voice multiple times in advance, and the two templates are refined through supervised training, improving the accuracy of voice recognition.
Similarly, in one embodiment of the present application, the voiceprint template is generated by training on speech pre-collected from at least one user. Correspondingly, one or more users authorized to use the electronic device input the wake word voice and the command word voice multiple times in advance, and the voiceprint template is refined through supervised training, improving the accuracy of voiceprint recognition.
Referring to the schematic diagram of the user-defined correspondence between voices and voice codes shown in FIG. 3, in one embodiment of the present application the instruction generating module can configure the correspondence between voices and voice codes according to the actual electronic device and the language the user speaks. Because the user can customize this correspondence, the specific instruction issued to the device is independent of the particular language in which the command word voice is uttered. For example, command word voice uttered in English or in Chinese can be received and converted into the same control instruction by modifying the feature parameters of voices already labeled with specific meanings in the command word voice template so that the English or Chinese utterance is associated with the designated voice code.
Further, in the above embodiment of the present application, the correspondence between voices and voice codes is also stored locally, so that voice-based control of the electronic device can be achieved without a network connection.
In one embodiment of the present application, the wake word voice template and the command word voice template are trained on dynamically updated collected speech. By regularly updating the wake word voice and the command word voice, the user raises the security factor of the electronic device and prevents its misuse by unauthorized persons.
Similarly, in one embodiment of the present application, the voiceprint template is trained on dynamically updated speech of at least one designated person, so that the user's voiceprint characteristics are kept up to date (particularly for users whose voices are changing, such as adolescents or users who have recently undergone laryngeal surgery).
While the present application has been described in considerable detail and with reference to a few illustrated embodiments, it is not intended to be limited to those details or embodiments, but is to be construed as covering the full intended scope of the application, with the appended claims given a broad interpretation in view of the prior art. Further, the foregoing describes the application in terms of embodiments foreseen by the inventor for which an enabling description was available; insubstantial changes not presently foreseen may nonetheless represent equivalents thereto.
Claims (5)
1. A voice control method based on offline voiceprint recognition and voice recognition, characterized by comprising the following steps:
receiving wake word voice, and extracting a first voice feature and a first voiceprint feature of the wake word voice;
checking whether the extracted first voice feature and first voiceprint feature respectively match the wake word voice template and the voiceprint template, ending if not, and otherwise obtaining a first voiceprint code corresponding to the first voiceprint feature;
receiving command word voice, and extracting a second voiceprint feature of the command word voice;
checking whether the second voiceprint feature matches the voiceprint template, ending if not, and otherwise obtaining a second voiceprint code corresponding to the second voiceprint feature;
checking whether the first voiceprint code is identical to the second voiceprint code, ending if not, and otherwise extracting a second voice feature of the command word voice;
checking whether the extracted second voice feature matches the command word voice template, ending if not, and otherwise obtaining the voice code of the second voice feature and generating a corresponding control instruction based on that voice code;
wherein the wake word voice template, the command word voice template, and the voiceprint template are all stored locally; the correspondence between voices and voice codes is user-defined and stored locally; and the wake word voice template and the command word voice template are generated by training on pre-collected speech and on dynamically updated collected speech.
2. The method of claim 1, wherein the voiceprint template is generated by training on speech pre-collected from at least one user.
3. The method of claim 1, wherein the voiceprint template is trained on dynamically updated speech of at least one designated person.
4. A voice control device based on offline voiceprint recognition and voice recognition, using the method of any one of claims 1 to 3, characterized by comprising the following modules:
a first receiving module for receiving wake word voice and extracting a first voice feature and a first voiceprint feature of the wake word voice;
a first checking module for checking whether the extracted first voice feature and first voiceprint feature respectively match the wake word voice template and the voiceprint template, ending if not, and otherwise obtaining a first voiceprint code corresponding to the first voiceprint feature;
a second receiving module for receiving command word voice and extracting a second voiceprint feature of the command word voice;
a second checking module for checking whether the second voiceprint feature matches the voiceprint template, ending if not, and otherwise obtaining a second voiceprint code corresponding to the second voiceprint feature;
a third checking module for checking whether the first voiceprint code is identical to the second voiceprint code, ending if not, and otherwise extracting a second voice feature of the command word voice;
an instruction generating module for checking whether the extracted second voice feature matches the command word voice template, ending if not, and otherwise obtaining the voice code of the second voice feature and generating a corresponding control instruction based on that voice code;
wherein the wake word voice template, the command word voice template, and the voiceprint template are all stored locally.
5. A computer-readable storage medium having computer instructions stored thereon, characterized in that the instructions, when executed by a processor, carry out the steps of the method of any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810533494.8A CN108766441B (en) | 2018-05-29 | 2018-05-29 | Voice control method and device based on offline voiceprint recognition and voice recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810533494.8A CN108766441B (en) | 2018-05-29 | 2018-05-29 | Voice control method and device based on offline voiceprint recognition and voice recognition |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108766441A CN108766441A (en) | 2018-11-06 |
CN108766441B true CN108766441B (en) | 2020-11-10 |
Family
ID=64003870
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810533494.8A Active CN108766441B (en) | 2018-05-29 | 2018-05-29 | Voice control method and device based on offline voiceprint recognition and voice recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108766441B (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109495360A (en) * | 2018-12-18 | 2019-03-19 | 深圳国美云智科技有限公司 | A kind of smart home Internet of Things platform, offline sound control method and system |
CN111768769A (en) * | 2019-03-15 | 2020-10-13 | 阿里巴巴集团控股有限公司 | Voice interaction method, device, equipment and storage medium |
CN110217194B (en) * | 2019-04-28 | 2021-09-07 | 大众问问(北京)信息科技有限公司 | Shared automobile control method and device and electronic equipment |
CN110843725B (en) * | 2019-07-30 | 2021-03-30 | 中国第一汽车股份有限公司 | Vehicle action control method and automobile |
CN110602624B (en) * | 2019-08-30 | 2021-05-25 | Oppo广东移动通信有限公司 | Audio testing method and device, storage medium and electronic equipment |
CN112992133A (en) * | 2019-12-02 | 2021-06-18 | 杭州智芯科微电子科技有限公司 | Sound signal control method, system, readable storage medium and device |
CN111147484B (en) * | 2019-12-25 | 2022-06-14 | 秒针信息技术有限公司 | Account login method and device |
CN111161731A (en) * | 2019-12-30 | 2020-05-15 | 四川虹美智能科技有限公司 | Intelligent off-line voice control device for household electrical appliances |
CN111276141A (en) * | 2020-01-19 | 2020-06-12 | 珠海格力电器股份有限公司 | Voice interaction method and device, storage medium, processor and electronic equipment |
CN111724768A (en) * | 2020-04-22 | 2020-09-29 | 深圳市伟文无线通讯技术有限公司 | System and method for real-time generation of decoded files for offline speech recognition |
CN114444042A (en) * | 2020-10-30 | 2022-05-06 | 华为终端有限公司 | Electronic equipment unlocking method and device |
CN113421567A (en) * | 2021-08-25 | 2021-09-21 | 江西影创信息产业有限公司 | Terminal equipment control method and system based on intelligent glasses and intelligent glasses |
CN113593584B (en) * | 2021-09-27 | 2022-01-18 | 深圳市羽翼数码科技有限公司 | Electronic product voice control system for restraining response time delay |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104658533A (en) * | 2013-11-20 | 2015-05-27 | 中兴通讯股份有限公司 | Terminal unlocking method and device as well as terminal |
CN105845139A (en) * | 2016-05-20 | 2016-08-10 | 北方民族大学 | Off-line speech control method and device |
CN106453859A (en) * | 2016-09-23 | 2017-02-22 | 维沃移动通信有限公司 | Voice control method and mobile terminal |
CN106502649A (en) * | 2016-09-27 | 2017-03-15 | 北京光年无限科技有限公司 | A kind of robot service awakening method and device |
CN106537493A (en) * | 2015-09-29 | 2017-03-22 | 深圳市全圣时代科技有限公司 | Speech recognition system and method, client device and cloud server |
CN106653021A (en) * | 2016-12-27 | 2017-05-10 | 上海智臻智能网络科技股份有限公司 | Voice wake-up control method and device and terminal |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3702867B2 (en) * | 2002-06-25 | 2005-10-05 | 株式会社デンソー | Voice control device |
Also Published As
Publication number | Publication date |
---|---|
CN108766441A (en) | 2018-11-06 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108766441B (en) | Voice control method and device based on offline voiceprint recognition and voice recognition | |
WO2021159688A1 (en) | Voiceprint recognition method and apparatus, and storage medium and electronic apparatus | |
CN106373575B (en) | User voiceprint model construction method, device and system | |
AU2016216737B2 (en) | Voice Authentication and Speech Recognition System | |
US6766295B1 (en) | Adaptation of a speech recognition system across multiple remote sessions with a speaker | |
US5303299A (en) | Method for continuous recognition of alphanumeric strings spoken over a telephone network | |
CN109584860B (en) | Voice wake-up word definition method and system | |
WO2019227580A1 (en) | Voice recognition method, apparatus, computer device, and storage medium | |
US20060217978A1 (en) | System and method for handling information in a voice recognition automated conversation | |
AU2013203139A1 (en) | Voice authentication and speech recognition system and method | |
JPH0354600A (en) | Method of verifying identity of unknown person | |
CN100524459C (en) | Method and system for speech recognition | |
CN109272991B (en) | Voice interaction method, device, equipment and computer-readable storage medium | |
EP3989217A1 (en) | Method for detecting an audio adversarial attack with respect to a voice input processed by an automatic speech recognition system, corresponding device, computer program product and computer-readable carrier medium | |
CN110175016A (en) | Start the method for voice assistant and the electronic device with voice assistant | |
US7844459B2 (en) | Method for creating a speech database for a target vocabulary in order to train a speech recognition system | |
JPS62502571A (en) | Personal identification through voice analysis | |
JP2018021953A (en) | Voice interactive device and voice interactive method | |
CN111128127A (en) | Voice recognition processing method and device | |
US20080243498A1 (en) | Method and system for providing interactive speech recognition using speaker data | |
JP2009086207A (en) | Minute information generation system, minute information generation method, and minute information generation program | |
CN112233679A (en) | Artificial intelligence speech recognition system | |
JPS63500126A (en) | speaker verification device | |
US20230153408A1 (en) | Methods and systems for training a machine learning model and authenticating a user with the model | |
Ali et al. | Voice Reminder Assistant based on Speech Recognition and Speaker Identification using Kaldi |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right |
Effective date of registration: 20230811
Patentee after: ZHUHAI RONGTAI ELECTRONICS Co.,Ltd. — 3rd, 4th, and 5th floors of Industrial Building 1, Ecaoshan Gas Depot, Qianshan, Zhuhai, Guangdong Province, 519000
Patentee before: GUANGDONG SHENGJIANGJUN TECHNOLOGY Co.,Ltd. — Courtyard 802, No. 66 Hongjixuan, Jingshan Road, Jida, Zhuhai City, Guangdong Province, 519015