CN110600058A - Method and device for awakening voice assistant based on ultrasonic waves, computer equipment and storage medium - Google Patents
- Publication number
- CN110600058A (application CN201910858342.XA)
- Authority
- CN
- China
- Prior art keywords
- ultrasonic
- instruction
- voice assistant
- frequency spectrum
- wake
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S3/00—Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
- G01S3/80—Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using ultrasonic, sonic or infrasonic waves
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10K—SOUND-PRODUCING DEVICES; METHODS OR DEVICES FOR PROTECTING AGAINST, OR FOR DAMPING, NOISE OR OTHER ACOUSTIC WAVES IN GENERAL; ACOUSTICS NOT OTHERWISE PROVIDED FOR
- G10K15/00—Acoustics not otherwise provided for
- G10K15/04—Sound-producing devices
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/18—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/48—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
- G10L25/51—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Abstract
The invention relates to a method and device for waking up a voice assistant based on ultrasonic waves, and to related computer equipment and a storage medium. The method comprises the following steps: receiving an ultrasonic wake-up instruction from a wake-up device; analyzing the received ultrasonic wake-up instruction to obtain a spectrum analysis result; comparing the spectrum analysis result with the spectrum of a preset ultrasonic wake-up instruction and judging whether the two match; and, if they match, waking up and activating the voice assistant and returning activation confirmation information. Under this scheme the intelligent voice device captures only ultrasonic audio while in standby, so it does not monitor user conversation in real time in the standby state, and user privacy is protected. In addition, the difficulty of ultrasonic audio recognition is controllable: compared with the conventional approach of recognizing a wake-up keyword with a statistical model or a deep neural network model, the demands on CPU performance and memory capacity are greatly reduced, lowering the cost of the recognition device while preserving recognition accuracy and efficiency.
Description
Technical Field
The invention relates to the field of intelligent terminals, and in particular to a method and device for waking up a voice assistant based on ultrasonic waves, and to related computer equipment and a storage medium.
Background
As smart voice-related technologies mature, more and more applications move away from traditional UI interfaces and instead interact with and control devices through voice. Typically, such "smart" devices accept user voice instructions through a "voice assistant"; the user must first "wake up" the voice assistant from its standby state before it accepts voice control.
The commonly adopted technical solution is to wake the voice assistant by voice: a short, easily distinguishable phrase is designated as the wake-up instruction (for example, "hello cat"). While a device with intelligent voice capability is running, its voice assistant monitors conversation in the environment in real time to determine whether the user has spoken the wake-up phrase.
However, in practical applications, environmental noise, several people speaking at once, homophone interference, and similar conditions make erroneous wake-up behavior of the "voice assistant" prominent (including both false wake-ups and missed wake-ups). Moreover, because the intelligent voice device must support offline wake-up, the terminal itself must judge wake-up words in real time; this consumes substantial computing resources, places high demands on CPU and memory capacity, and correspondingly raises the cost of the intelligent terminal device.
In addition, as attention to personal privacy grows and the risk of privacy disclosure increases, users increasingly resist the current mode in which the voice assistant monitors their conversation in real time while waiting to be woken.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a method and device for waking up a voice assistant based on ultrasonic waves, together with computer equipment and a storage medium.
In order to achieve this purpose, the invention adopts the following technical scheme. A method for waking up a voice assistant based on ultrasonic waves comprises the following steps:
receiving an ultrasonic wake-up instruction from a wake-up device;
analyzing the received ultrasonic wake-up instruction to obtain a spectrum analysis result;
comparing the spectrum analysis result with the spectrum of a preset ultrasonic wake-up instruction, and judging whether the spectrum analysis result matches the spectrum of the preset ultrasonic wake-up instruction;
and, if they match, waking up and activating the voice assistant and returning activation confirmation information.
In a further technical scheme, before the step of receiving the ultrasonic wake-up instruction from the wake-up device, the method comprises:
randomly generating an ultrasonic audio signal;
and sending the ultrasonic audio signal to the wake-up device as the preset ultrasonic wake-up instruction.
In a further technical scheme, the step of randomly generating an ultrasonic audio signal comprises:
randomly selecting several prime-number frequencies within a specific ultrasonic frequency band, and agreeing on the intensity and phase of each frequency, to obtain the ultrasonic audio signal.
In a further technical scheme, the step of analyzing the received ultrasonic wake-up instruction to obtain a spectrum analysis result comprises:
converting the received ultrasonic wake-up instruction from the time domain to the frequency domain, and extracting the spectrum and phase information of the ultrasonic wake-up instruction.
In a further technical scheme, after the step of waking up the voice assistant and returning activation confirmation information, the method comprises:
determining the direction of the sound source according to the spatio-temporal information of the received ultrasonic wake-up instruction;
and forming a pickup beam in the sound-source direction to monitor and receive subsequent voice control instructions.
The invention also adopts the following technical scheme. An apparatus for waking up a voice assistant based on ultrasonic waves comprises:
an instruction receiving unit, configured to receive an ultrasonic wake-up instruction from a wake-up device;
an instruction analysis unit, configured to analyze the received ultrasonic wake-up instruction to obtain a spectrum analysis result;
an instruction comparison unit, configured to compare the spectrum analysis result with the spectrum of a preset ultrasonic wake-up instruction and judge whether the two match;
and an activation unit, configured to wake up and activate the voice assistant and return activation confirmation information when the spectrum analysis result matches the spectrum of the preset ultrasonic wake-up instruction.
In a further technical scheme, the apparatus further comprises:
a signal generating unit, configured to randomly generate an ultrasonic audio signal;
and a signal presetting unit, configured to send the ultrasonic audio signal to the wake-up device as the preset ultrasonic wake-up instruction.
In a further technical scheme, the apparatus further comprises:
a sound-source positioning unit, configured to determine the direction of the sound source according to the spatio-temporal information of the received ultrasonic wake-up instruction;
and a sound-source monitoring unit, configured to form a pickup beam in the sound-source direction and to monitor and receive subsequent voice control instructions.
The invention also adopts the following technical scheme: a computer device comprising a memory storing a computer program and a processor which, when executing the computer program, implements the method for waking up a voice assistant based on ultrasonic waves according to any one of the above.
The invention also adopts the following technical scheme: a storage medium storing a computer program which, when executed by a processor, implements the method for waking up a voice assistant based on ultrasonic waves according to any one of the above.
Compared with the prior art, the invention has the following beneficial effects. The scheme wakes the voice assistant accurately while preserving sound-source direction information, which can be used after wake-up to locate the user's direction precisely and receive instructions. In addition, because the intelligent voice device collects only ultrasonic audio in the standby state, real-time monitoring of user conversation during standby is avoided, and user privacy is protected. Finally, the difficulty of ultrasonic audio recognition is controllable: compared with the conventional approach of recognizing a wake-up keyword with a statistical model or a deep neural network model, the demands on CPU performance and memory capacity are greatly reduced, lowering the cost of the recognition device while preserving recognition accuracy and efficiency.
The invention is further described below with reference to the accompanying drawings and specific embodiments.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic view of an application scenario of a method for waking up a voice assistant based on ultrasonic waves according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of a method for waking up a voice assistant based on ultrasonic waves according to an embodiment of the present invention;
FIG. 3 is a sub-flowchart of a method for waking up a voice assistant based on ultrasonic waves according to another embodiment of the present invention;
FIG. 4 is a schematic block diagram of an apparatus for waking up a voice assistant based on ultrasonic waves according to an embodiment of the present invention;
FIG. 5 is a schematic block diagram of an instruction analysis unit of an apparatus for waking up a voice assistant based on ultrasonic waves according to an embodiment of the present invention;
FIG. 6 is a schematic block diagram of an apparatus for waking up a voice assistant based on ultrasonic waves according to another embodiment of the present invention;
FIG. 7 is a schematic block diagram of a signal generating unit of an apparatus for waking up a voice assistant based on ultrasonic waves according to another embodiment of the present invention;
FIG. 8 is a schematic block diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are some, not all, of the embodiments of the present invention; all other embodiments obtained by a person skilled in the art without creative effort shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the specification of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1 and fig. 2: fig. 1 is a schematic view of an application scenario of a method for waking up a voice assistant based on ultrasonic waves according to an embodiment of the present invention, and fig. 2 is a schematic flowchart of that method. The method runs on the intelligent voice device 100, which exchanges data with the wake-up device 200. The wake-up device 200 generates an ultrasonic wake-up instruction and sends it to the intelligent voice device 100; the intelligent voice device 100 analyzes and compares the received instruction and, according to the comparison result, decides whether it should be activated. Only after activation does the intelligent voice device 100 monitor the user's voice control instructions in real time, so the device does not intercept user conversation while on standby, and user privacy is protected.
Fig. 2 is a flowchart illustrating a method for waking up a voice assistant based on ultrasonic waves according to an embodiment of the present invention. As shown in fig. 2, the method includes the following steps S110 to S140.
S110, receiving an ultrasonic wake-up instruction from the wake-up device.
In this embodiment, a functional module for generating the ultrasonic wake-up instruction is provided in the wake-up device; when the user needs to activate a given intelligent voice device 100 (voice assistant), the wake-up device is triggered to generate and send the corresponding ultrasonic wake-up instruction. The scheme exploits three characteristics: the ultrasonic band does not overlap the band of human speech; the sound-pickup microphone array of a current intelligent voice device 100 can capture ultrasonic audio signals; and the microphone array can judge the transmitting direction of a received ultrasonic audio signal. The preset ultrasonic wake-up instruction is therefore sent by the wake-up device 200 to the intelligent voice device 100 to wake and activate the target voice device.
After startup, the voice assistant carried on the intelligent voice device 100 is initially in a waiting-to-be-activated state, in which it monitors ultrasonic signals in the agreed frequency band in real time and filters out the human voice band to avoid disclosure of personal privacy.
S120, analyzing the received ultrasonic wake-up instruction to obtain a spectrum analysis result.
In this embodiment, after the ultrasonic wake-up instruction is received it is analyzed: the signal is converted from the time domain to the frequency domain, and the spectrum and phase information of the instruction are extracted. The extracted spectrum and phase information are then compared with the spectrum of the preset ultrasonic wake-up instruction to determine whether the received instruction is the preset one.
In one embodiment, step S120 comprises step S121: converting the received ultrasonic wake-up instruction from the time domain to the frequency domain, and extracting its spectrum and phase information.
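Step S121 can be sketched as a short FFT-based routine. This is an illustrative sketch only: the function name, the Hanning window, and the 48 kHz sample rate are assumptions, not details stated in the patent.

```python
import numpy as np

def analyze_wake_signal(samples, sample_rate=48_000):
    """Convert a received audio frame from the time domain to the
    frequency domain; return per-bin frequency, magnitude and phase."""
    windowed = samples * np.hanning(len(samples))  # reduce spectral leakage
    spectrum = np.fft.rfft(windowed)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs, np.abs(spectrum), np.angle(spectrum)
```

At 48 kHz the Nyquist frequency is 24 kHz, so a 20,000-21,000 Hz wake-up band is representable; a 0.1 s frame gives 10 Hz bin resolution.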
S130, comparing the spectrum analysis result with the spectrum of the preset ultrasonic wake-up instruction, and judging whether they match.
In this embodiment, the preset ultrasonic wake-up instruction itself is composed of several frequencies in a specific ultrasonic band, with the intensity and phase of each frequency agreed in advance. Therefore, once the ultrasonic wake-up instruction is received, its spectrum data must be obtained through analysis and conversion before it can be compared with the spectrum of the preset instruction.
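One way to realize the comparison of step S130 is to check that each agreed frequency stands out against the noise floor of the measured spectrum. The sketch below is illustrative only; the tolerance and threshold values are assumptions, and the patent does not prescribe a particular matching rule.

```python
import numpy as np

def matches_preset(freqs, magnitude, preset_freqs_hz,
                   tol_hz=5.0, min_ratio=10.0):
    """Return True if every agreed frequency is present: its peak
    magnitude must exceed the median noise floor by `min_ratio`."""
    noise_floor = np.median(magnitude) + 1e-12  # guard against zero
    for f in preset_freqs_hz:
        band = np.abs(freqs - f) <= tol_hz      # bins near the agreed tone
        if not band.any() or magnitude[band].max() / noise_floor < min_ratio:
            return False
    return True
```

A phase check against the agreed 120-degree offsets could be layered on top of this magnitude test in the same way.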
S140, if they match, waking up the voice assistant and returning activation confirmation information.
In this embodiment, if the comparison determines that the spectrum analysis result matches the spectrum of the preset ultrasonic wake-up instruction, the received instruction is the same as the preset one, and the corresponding intelligent voice device 100 is woken and activated. The intelligent voice device 100 then enters the active state, monitors subsequent voice control instructions from the sound-source direction, and receives and recognizes them in order to execute further operations.
This scheme wakes the voice assistant accurately while preserving sound-source direction information, which can be used after wake-up to locate the user's direction precisely and receive instructions. In addition, because the intelligent voice device 100 collects only ultrasonic audio in the standby state, real-time monitoring of user conversation while the device is on standby is avoided, and user privacy is protected. Finally, the difficulty of ultrasonic audio recognition is controllable: compared with the conventional approach of recognizing a wake-up keyword with a statistical model or a deep neural network model, the demands on CPU performance and memory capacity are greatly reduced, lowering the cost of the recognition device while preserving recognition accuracy and efficiency.
Referring to fig. 3, a flowchart of a method for waking up a voice assistant based on ultrasonic waves according to another embodiment is provided. As shown in fig. 3, the method comprises the following steps S210 to S280.
S210, randomly generating an ultrasonic audio signal.
S220, sending the ultrasonic audio signal to the wake-up device as the preset ultrasonic wake-up instruction.
S230, receiving an ultrasonic wake-up instruction from the wake-up device.
S240, analyzing the received ultrasonic wake-up instruction to obtain a spectrum analysis result.
S250, comparing the spectrum analysis result with the spectrum of the preset ultrasonic wake-up instruction, and judging whether they match.
S260, if they match, waking up the voice assistant and returning activation confirmation information.
S270, determining the direction of the sound source according to the spatio-temporal information of the received ultrasonic wake-up instruction.
S280, forming a pickup beam in the sound-source direction, and monitoring and receiving subsequent voice control instructions.
Of the steps S210 to S280 above, steps S230 to S260 are similar to steps S110 to S140 of the previous embodiment and are not described again here. The steps added in this embodiment, S210, S220, S270 and S280, are explained in detail below.
In this embodiment, for steps S210 and S220: an ultrasonic audio signal is first generated at random. When the wake-up device 200 is powered up for the first time, it is paired master-slave with the intelligent voice device 100 over a wireless link; the intelligent voice device 100 then sends the ultrasonic audio signal, as the frequency composition of the preset ultrasonic wake-up instruction, to the wake-up device 200 for storage. The wake-up device 200 can subsequently generate the corresponding ultrasonic wake-up instruction from this frequency composition in order to wake/activate the intelligent voice device 100.
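As an illustration of what the "frequency composition" exchanged during pairing might look like, the sketch below models it as a small serializable record. All names, field choices and the JSON encoding are assumptions for illustration; the patent does not specify a wire format.

```python
import json
import math
import random
from dataclasses import dataclass, asdict

@dataclass
class WakePreset:
    """Hypothetical record of the agreed frequency composition."""
    frequencies_hz: tuple   # three prime frequencies in the agreed band
    intensity_db: float     # agreed emission intensity
    phase_step_rad: float   # agreed phase offset between successive tones

def primes_in(lo, hi):
    """Primes in [lo, hi) by trial division; the band is small enough."""
    return [n for n in range(max(lo, 2), hi)
            if all(n % d for d in range(2, int(n ** 0.5) + 1))]

def make_preset(band=(20_000, 21_000), count=3, seed=None):
    rng = random.Random(seed)
    freqs = tuple(sorted(rng.sample(primes_in(*band), count)))
    return WakePreset(freqs, 60.0, 2 * math.pi / 3)

def serialize_preset(preset):
    """What the intelligent voice device might transmit when pairing."""
    return json.dumps(asdict(preset))
```

Restricting the choice to primes within the band matches the uniqueness goal described below: distinct random prime triples make collisions between devices' presets unlikely.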
Specifically, the ultrasonic audio signal may be composed of 3 prime-number frequencies chosen at random within a specific ultrasonic band (e.g. 20,000 Hz-21,000 Hz), with the intensity and phase of each of the 3 frequencies agreed in advance, forming spectrum data with uniquely identifiable characteristics.
For example, the ultrasonic wake-up instruction consists of 3 random prime-number frequencies in a specific ultrasonic band (e.g. 20,000 Hz-21,000 Hz), and it is agreed that signals at the three frequencies are transmitted outwards simultaneously at an intensity of 60 dB, with a phase difference of 120 degrees (i.e. 2π/3) between the first, second and third frequencies (ordered from lowest to highest) and an initial phase of 0 degrees for the first frequency.
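The agreed composition in this example can be synthesized directly: the sketch below sums three tones whose initial phases are 0, 2π/3 and 4π/3 (successive 120-degree offsets, as agreed above). The specific prime frequencies and the 48 kHz sample rate are illustrative assumptions.

```python
import numpy as np

# Hypothetical agreed frequencies: three primes in 20,000-21,000 Hz.
PRIME_TONES_HZ = (20_011, 20_101, 20_201)

def generate_wake_signal(duration_s=0.5, sample_rate=48_000):
    """Sum the three agreed tones with initial phases 0, 2*pi/3 and
    4*pi/3, then normalize to full scale for playback."""
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    signal = sum(np.sin(2 * np.pi * f * t + k * 2 * np.pi / 3)
                 for k, f in enumerate(PRIME_TONES_HZ))
    return signal / np.max(np.abs(signal))
```

In a real wake-up device this buffer would be written to the speaker DAC; the 60 dB emission intensity would be set by the output gain, not in the digital waveform.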
In this embodiment, step S210 comprises step S211: randomly obtaining several prime-number frequencies within a specific ultrasonic band, and agreeing on the intensity and phase of each frequency, to obtain the ultrasonic audio signal.
Randomly selecting prime-number frequencies within the specific ultrasonic band guarantees that different preset ultrasonic wake-up instructions differ from one another, avoiding misidentification.
In this embodiment, for steps S270 and S280: after the match succeeds, the intelligent voice device 100 enters the "active" state and can receive and recognize the user's voice instructions. Meanwhile, the microphone array of the intelligent voice device 100 determines the direction of the sound source from information such as the time delay and sound intensity of the ultrasonic signal collected at each microphone, detects the speech signal (human voice) from that direction in real time, forms a pickup beam in the sound-source direction, and waits for the user's subsequent instructions.
In use, the voice assistant is initially in the waiting-to-be-activated state after startup; it then monitors ultrasonic signals in the agreed frequency band (e.g. 20,000 Hz-21,000 Hz) in real time and filters out the human voice band to avoid disclosure of personal privacy.
When the user triggers the wake-up device 200, directly or indirectly, by hand, the wake-up device 200 emits the ultrasonic wake-up instruction at the agreed frequencies, amplitude and phases. After detecting an audio signal in the agreed band, the intelligent voice device 100 converts the received ultrasonic wake-up instruction signal from the time domain to the frequency domain, extracts its spectrum and phase information, and compares these with the preset ultrasonic wake-up instruction; if they match, the intelligent voice device 100 enters the activated state. Meanwhile, the microphone array of the intelligent voice device 100 judges the direction of the sound source from the time delay, sound intensity and other information of the ultrasonic signal collected at each microphone, and forms a pickup beam in that direction to wait for the user's subsequent instructions.
This scheme wakes the voice assistant accurately while preserving sound-source direction information, which can be used after wake-up to locate the user's direction precisely and receive instructions. In addition, because the intelligent voice device 100 collects only ultrasonic audio in the standby state, real-time monitoring of user conversation while the device is on standby is avoided, and user privacy is protected. Finally, the difficulty of ultrasonic audio recognition is controllable: compared with the conventional approach of recognizing a wake-up keyword with a statistical model or a deep neural network model, the demands on CPU performance and memory capacity are greatly reduced, lowering the cost of the recognition device while preserving recognition accuracy and efficiency.
FIG. 4 is a schematic block diagram of an apparatus for waking up a voice assistant based on ultrasonic waves according to an embodiment of the present invention. As shown in fig. 4, the present invention also provides a device for waking up a voice assistant based on ultrasonic waves, corresponding to the above method for waking up a voice assistant based on ultrasonic waves. The device comprises units for executing the above method and can be configured in a desktop computer, a tablet computer, a portable computer, or another terminal. Specifically, referring to fig. 4, the apparatus includes an instruction receiving unit 10, an instruction analysis unit 20, an instruction comparison unit 30 and an activation unit 40.
The instruction receiving unit 10 is configured to receive an ultrasonic wake-up instruction from the wake-up device.
In this embodiment, a functional module for generating the ultrasonic wake-up instruction is disposed in the wake-up device. When the user needs to activate the corresponding intelligent voice device 100 (voice assistant), the wake-up device is controlled to generate and send the corresponding ultrasonic wake-up instruction. The scheme exploits three characteristics: the ultrasonic frequency band does not overlap the human voice band; the microphone array used for sound pickup on current intelligent voice devices 100 can capture ultrasonic audio signals; and the array can determine the transmitting direction of a received ultrasonic signal. The preset ultrasonic wake-up instruction is sent to the intelligent voice device 100 through the wake-up device 200 to wake up and activate the target voice device.
The voice assistant carried on the intelligent voice device 100 first enters a state of waiting to be activated after startup; in this state it monitors ultrasonic signals in the agreed frequency band in real time and filters out the human voice band to avoid disclosure of personal privacy.
The instruction analysis unit 20 is configured to analyze the received ultrasonic wake-up instruction to obtain a frequency spectrum analysis result.
In this embodiment, after the ultrasonic wake-up instruction is received, it is analyzed: the received instruction is converted from the time domain to the frequency domain, and its frequency spectrum and phase information are extracted, so that the extracted frequency spectrum and phase information can be compared with the frequency spectrum of the preset ultrasonic wake-up instruction to determine whether the received instruction is the preset one.
Specifically, referring to fig. 5, the instruction analyzing unit 20 includes an analyzing and converting module 21, where the analyzing and converting module 21 is configured to perform time-domain to frequency-domain conversion on the received ultrasonic wake-up instruction, and extract frequency spectrum and phase information of the ultrasonic wake-up instruction.
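As a non-limiting sketch of the time-domain to frequency-domain conversion performed by the analyzing and converting module 21, the following fragment windows a captured frame, applies an FFT, and extracts the frequency spectrum and phase. The 48 kHz sample rate, 4096-sample frame and function name are illustrative assumptions, not part of the disclosed embodiment.

```python
import numpy as np

def extract_spectrum(frame, sample_rate):
    """Convert a time-domain frame to the frequency domain and return
    (frequencies, magnitudes, phases). Illustrative sketch only."""
    windowed = frame * np.hanning(len(frame))      # reduce spectral leakage
    spectrum = np.fft.rfft(windowed)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return freqs, np.abs(spectrum), np.angle(spectrum)

# Demo: a pure 20,011 Hz tone sampled at 48 kHz should peak near 20,011 Hz.
sr = 48_000
t = np.arange(4096) / sr
freqs, mags, phases = extract_spectrum(np.sin(2 * np.pi * 20_011 * t), sr)
peak_hz = freqs[np.argmax(mags)]
```

With a 4096-sample frame the bin spacing is roughly 11.7 Hz, fine enough to separate wake-up frequencies spaced tens to hundreds of hertz apart within a 20,000 Hz-21,000 Hz band.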
The instruction comparison unit 30 is configured to compare the frequency spectrum analysis result with the frequency spectrum of the preset ultrasonic wake-up instruction and determine whether they match.
In this embodiment, the preset ultrasonic wake-up command itself is also composed of a plurality of frequencies in a specific ultrasonic frequency band, and the intensity and phase of different frequencies are predetermined. Therefore, after the ultrasonic wake-up command is obtained, the frequency spectrum data of the ultrasonic wake-up command needs to be obtained through analysis and conversion for comparing with the frequency spectrum of the preset ultrasonic wake-up command.
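The comparison step can be pictured as follows: the received frame matches the preset instruction only if every agreed frequency carries a strong spectral peak. This is a simplified, hypothetical stand-in for the instruction comparison unit 30; the tolerance `tol_hz` and relative threshold `rel` are assumed values.

```python
import numpy as np

def matches_preset(frame, sr, preset_hz, tol_hz=30.0, rel=0.1):
    """Return True if every preset frequency has a peak within tol_hz
    whose magnitude is at least `rel` of the strongest bin. Sketch only."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    peak = spec.max()
    for f0 in preset_hz:
        near = spec[np.abs(freqs - f0) <= tol_hz]
        if near.size == 0 or near.max() < rel * peak:
            return False
    return True

sr = 48_000
t = np.arange(8192) / sr
preset = [20_011, 20_201, 20_411]                      # three agreed frequencies
good = sum(np.sin(2 * np.pi * f * t) for f in preset)  # correct instruction
bad = np.sin(2 * np.pi * 20_707 * t)                   # wrong frequency
```

A fuller implementation would also verify the agreed intensities and phases as described above; magnitude-only matching is shown for brevity.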
The activation unit 40 is used for waking up and activating the voice assistant and returning activation confirmation information when the frequency spectrum analysis result matches the frequency spectrum of the preset ultrasonic wake-up instruction.
In this embodiment, if the comparison determines that the frequency spectrum analysis result matches the frequency spectrum of the preset ultrasonic wake-up instruction, the received instruction is the preset instruction, and the corresponding intelligent voice device 100 can be woken up and activated. The intelligent voice device 100 then enters the active state, monitors the sound-source direction for a subsequent voice control instruction, and receives and recognizes that instruction to execute further operations.
In this embodiment, the instruction receiving unit 10, the instruction analysis unit 20, the instruction comparison unit 30 and the activation unit 40 are integrally disposed in the intelligent voice device 100 and are configured to receive the ultrasonic wake-up instruction from the wake-up device 200, analyze and compare it, and determine whether the intelligent voice device 100 can be activated.
This scheme can accurately wake up the voice assistant while preserving the sound-source direction information, which can be used after wake-up to locate the user precisely and receive instructions. In addition, because the intelligent voice device 100 collects only ultrasonic audio in the standby state, real-time monitoring of user conversation during standby is avoided and user privacy is protected. Furthermore, the difficulty of ultrasonic audio recognition is controllable: compared with the conventional method of waking up through keyword recognition with a statistical model or deep neural network model, the requirements on CPU (Central Processing Unit) performance and memory capacity are greatly reduced, lowering the cost of the recognition device while maintaining recognition accuracy and efficiency.
FIG. 6 is a schematic block diagram of an apparatus for waking up a voice assistant based on ultrasonic waves according to another embodiment of the present invention. As shown in fig. 6, the device of this embodiment adds a signal generating unit 50, a signal presetting unit 60, a sound source positioning unit 70 and a sound source monitoring unit 80 to the above embodiment.
The signal generating unit 50 is used for randomly generating an ultrasonic audio signal.
The signal presetting unit 60 is configured to issue the ultrasonic audio signal to the wake-up device as the preset ultrasonic wake-up instruction.
In this embodiment, an ultrasonic audio signal is first generated randomly. When the wake-up device 200 is powered on for the first time, it performs master-slave pairing with the intelligent voice device 100 over a wireless connection; the intelligent voice device 100 then sends the ultrasonic audio signal to the wake-up device 200 for storage as the frequency composition of the preset ultrasonic wake-up instruction. The wake-up device 200 can subsequently generate the corresponding ultrasonic wake-up instruction from this frequency composition to wake up/activate the intelligent voice device 100.
Specifically, the ultrasonic audio signal may be composed of 3 random prime-number frequencies in a specific ultrasonic frequency band (e.g., 20,000 Hz-21,000 Hz), with the intensity and phase of each of the 3 frequencies agreed in advance, forming spectrum data with uniquely identifiable characteristics.
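A minimal sketch of such a generation scheme, assuming a 48 kHz output rate and a 0.2 s burst (both illustrative): pick 3 random prime frequencies in the agreed band and assign each a random intensity and phase.

```python
import random
import numpy as np

def make_wake_signal(band=(20_000, 21_000), n_tones=3,
                     sr=48_000, dur=0.2, seed=None):
    """Randomly choose prime frequencies in `band` and sum sinusoids
    with random intensities and phases. Illustrative sketch only."""
    rng = random.Random(seed)
    primes = [n for n in range(band[0], band[1])
              if all(n % d for d in range(2, int(n ** 0.5) + 1))]
    tones = sorted(rng.sample(primes, n_tones))
    t = np.arange(int(sr * dur)) / sr
    sig = sum(rng.uniform(0.5, 1.0) *
              np.sin(2 * np.pi * f * t + rng.uniform(0.0, 2 * np.pi))
              for f in tones)
    return tones, sig

tones, sig = make_wake_signal(seed=42)
```

Seeding the generator as shown is only for reproducibility of the demo; in the described scheme the randomly generated composition would be stored on both devices during pairing.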
Referring to fig. 7, the signal generating unit 50 includes a signal generating module 51, and the signal generating module 51 is configured to randomly acquire prime-number frequencies in a plurality of specific ultrasonic frequency bands and assign frequency intensities and phases to obtain the ultrasonic audio signal.
By randomly acquiring prime-number frequencies in a plurality of specific ultrasonic frequency bands, the difference between preset ultrasonic wake-up instructions is ensured and false identification is avoided.

The sound source positioning unit 70 is configured to determine the sound-source direction according to the space-time information of the received ultrasonic wake-up instruction.
The sound source monitoring unit 80 is configured to form a pickup beam in the sound-source direction and to monitor and receive a subsequent voice control instruction.
In this embodiment, after the match succeeds, the intelligent voice device 100 enters the "active" state and can receive and recognize the user's voice instructions. Meanwhile, its microphone array determines the sound-source direction from the time delay, sound intensity and other information of the ultrasonic signal collected by each microphone, detects the speech signal (human voice) in that direction in real time, forms a pickup beam toward it, and waits for the user's subsequent instruction.
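The direction estimate described above can be sketched for a two-microphone array: cross-correlate the two channels, read off the inter-microphone time delay, and convert it to an arrival angle. The 0.10 m spacing, 48 kHz sample rate and sign convention (positive angle toward the microphone that hears the signal first, here the right one) are illustrative assumptions.

```python
import numpy as np

def direction_from_delay(x_left, x_right, sr, spacing_m, c=343.0):
    """Estimate the arrival angle (degrees from broadside) of a signal
    hitting two microphones, via cross-correlation TDOA. Sketch only."""
    corr = np.correlate(x_left, x_right, mode="full")
    lag = np.argmax(corr) - (len(x_right) - 1)   # >0: right channel leads
    sin_theta = np.clip(lag / sr * c / spacing_m, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

# Demo: delay one channel by 7 samples (~30 degrees at 0.10 m spacing, 48 kHz).
rng = np.random.default_rng(0)
sr, spacing = 48_000, 0.10
src = rng.standard_normal(4096)
left, right = src[:-7], src[7:]                  # right mic hears it first
angle = direction_from_delay(left, right, sr, spacing)
```

A real array would combine delays across more than two microphones and weight them with sound-intensity cues, as the embodiment describes; the two-channel case shows the core geometry.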
In an embodiment, the signal generating unit 50, the signal presetting unit 60, the sound source positioning unit 70 and the sound source monitoring unit 80 may be integrated in the smart wake-up device, and perform corresponding functions in the smart wake-up device.
When the intelligent device is used, the voice assistant first enters a state of waiting to be activated after startup, then monitors ultrasonic signals in the agreed frequency band (e.g., 20,000 Hz-21,000 Hz) in real time and filters out the human voice band to avoid disclosure of personal privacy.
When the user directly or indirectly triggers the wake-up device 200 manually, the wake-up device 200 emits an ultrasonic wake-up instruction at the agreed frequency, amplitude and phase. After detecting an audio signal in the agreed frequency band, the intelligent voice device 100 converts the received ultrasonic wake-up instruction signal from the time domain to the frequency domain, extracts the frequency spectrum and phase information of the instruction, and compares them with the preset ultrasonic wake-up instruction; if they match, the intelligent voice device 100 enters the activated state. Meanwhile, the microphone array of the intelligent voice device 100 determines the direction of the sound source from the time delay, sound intensity and other information of the ultrasonic signal collected by each microphone, and forms a pickup beam in that direction to await the user's subsequent instruction.
This scheme can accurately wake up the voice assistant while preserving the sound-source direction information, which can be used after wake-up to locate the user precisely and receive instructions. In addition, because the intelligent voice device 100 collects only ultrasonic audio in the standby state, real-time monitoring of user conversation during standby is avoided and user privacy is protected. Furthermore, the difficulty of ultrasonic audio recognition is controllable: compared with the conventional method of waking up through keyword recognition with a statistical model or deep neural network model, the requirements on CPU (Central Processing Unit) performance and memory capacity are greatly reduced, lowering the cost of the recognition device while maintaining recognition accuracy and efficiency.
It should be noted that, as can be clearly understood by those skilled in the art, the above-mentioned device for waking up a voice assistant based on ultrasonic waves and the specific implementation process of each unit may refer to the corresponding description in the foregoing method embodiment, and for convenience and brevity of description, no further description is provided here.
Referring to fig. 8, FIG. 8 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a terminal or a server, where the terminal may be an electronic device with a communication function, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant, or a wearable device. The server may be an independent server or a server cluster composed of a plurality of servers.
Referring to fig. 8, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032 includes program instructions that, when executed, cause the processor 502 to perform the method for waking up a voice assistant based on ultrasonic waves.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for the operation of the computer program 5032 in the non-volatile storage medium 503, and when the computer program 5032 is executed by the processor 502, the processor 502 may be enabled to perform a method for waking up a voice assistant based on ultrasonic waves.
The network interface 505 is used for network communication with other devices. Those skilled in the art will appreciate that the configuration shown in fig. 8 is a block diagram of only the portion of the configuration relevant to the present solution and does not constitute a limitation on the computer device 500 to which the present solution is applied; a particular computer device 500 may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
Wherein the processor 502 is adapted to run a computer program 5032 stored in the memory.
It should be understood that in the embodiment of the present application, the processor 502 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It will be understood by those skilled in the art that all or part of the flow of the method implementing the above embodiments may be implemented by a computer program instructing associated hardware. The computer program includes program instructions, and the computer program may be stored in a storage medium, which is a computer-readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer-readable storage medium. The storage medium stores a computer program.
The storage medium may be a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a magnetic disk, an optical disk, or any other computer-readable medium that can store program code.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented in electronic hardware, computer software, or a combination of the two; the components and steps of the examples have been described above in functional terms to illustrate the interchangeability of hardware and software clearly. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the implementation. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, various elements or components may be combined or may be integrated into another system, or some features may be omitted, or not implemented.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the invention can be merged, divided and deleted according to actual needs. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on such understanding, the technical solution of the present invention essentially or partially contributes to the prior art, or all or part of the technical solution can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a terminal, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention.
While the invention has been described with reference to specific embodiments, the invention is not limited thereto, and various equivalent modifications and substitutions can be easily made by those skilled in the art within the technical scope of the invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A method for waking up a voice assistant based on ultrasonic waves is characterized by comprising the following steps:
receiving an ultrasonic wave awakening instruction from awakening equipment;
analyzing the received ultrasonic wave awakening instruction to obtain a frequency spectrum analysis result;
comparing the frequency spectrum analysis result with the frequency spectrum of a preset ultrasonic wave awakening instruction, and judging whether the frequency spectrum analysis result is matched with the frequency spectrum of the preset ultrasonic wave awakening instruction;
and if they match, waking up and activating the voice assistant and returning activation confirmation information.
2. The method for waking up a voice assistant based on ultrasonic waves according to claim 1, wherein the step of receiving the ultrasonic wake-up command from the wake-up device is preceded by the steps of:
randomly generating an ultrasonic audio signal;
and sending the ultrasonic audio signal to a wake-up device as a preset ultrasonic wake-up instruction.
3. The method of claim 2, wherein the step of randomly generating an ultrasonic audio signal comprises:
and randomly acquiring prime number frequencies in a plurality of specific ultrasonic frequency bands, and appointing frequency intensity and phase to obtain ultrasonic audio signals.
4. The method for waking up a voice assistant based on ultrasonic waves as claimed in claim 1, wherein the step of analyzing the received ultrasonic wake-up command to obtain a spectrum analysis result comprises:
and converting the received ultrasonic awakening instruction from a time domain to a frequency domain, and extracting frequency spectrum and phase information of the ultrasonic awakening instruction.
5. The method for waking up a voice assistant based on ultrasonic waves as claimed in claim 1, wherein the step of waking up the voice assistant and returning the confirmation activation information is followed by:
determining the direction of a sound source according to the received space-time information of the ultrasonic wake-up instruction;
and forming a pickup beam in the sound source direction, and monitoring and receiving a subsequent voice control instruction.
6. An apparatus for waking up a voice assistant based on ultrasonic waves is characterized by comprising,
the instruction receiving unit is used for receiving an ultrasonic wake-up instruction from wake-up equipment;
the instruction analysis unit is used for analyzing the received ultrasonic wake-up instruction to obtain a frequency spectrum analysis result;
the instruction comparison unit is used for comparing the frequency spectrum analysis result with the frequency spectrum of a preset ultrasonic wave awakening instruction and judging whether the frequency spectrum analysis result is matched with the frequency spectrum of the preset ultrasonic wave awakening instruction;
and the activation unit is used for awakening and activating the voice assistant and returning the activation confirmation information when the frequency spectrum analysis result is matched with the frequency spectrum of the preset ultrasonic awakening instruction.
7. The ultrasonic wake-up voice assistant-based apparatus of claim 6 further comprising:
the signal generating unit is used for randomly generating an ultrasonic audio signal;
and the signal presetting unit is used for sending the ultrasonic audio signal to the awakening equipment as a preset ultrasonic awakening instruction.
8. The ultrasonic wake-up voice assistant-based apparatus of claim 6 further comprising:
the sound source positioning unit is used for determining the direction of a sound source according to the received space-time information of the ultrasonic wave awakening instruction;
and the sound source monitoring unit is used for forming a pickup beam in the sound source direction, monitoring and receiving a subsequent voice control instruction.
9. A computer device, characterized in that the computer device comprises a memory and a processor, the memory stores a computer program, the processor when executing the computer program realizes the method for waking up a voice assistant based on ultrasonic waves according to any one of claims 1 to 5.
10. A storage medium storing a computer program which, when executed by a processor, implements the method for waking up a voice assistant based on ultrasound according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910858342.XA CN110600058A (en) | 2019-09-11 | 2019-09-11 | Method and device for awakening voice assistant based on ultrasonic waves, computer equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110600058A true CN110600058A (en) | 2019-12-20 |
Family
ID=68858760
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910858342.XA Pending CN110600058A (en) | 2019-09-11 | 2019-09-11 | Method and device for awakening voice assistant based on ultrasonic waves, computer equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110600058A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111179933A (en) * | 2020-01-23 | 2020-05-19 | 珠海荣邦电子科技有限公司 | Voice control method and device and intelligent terminal |
CN111724783A (en) * | 2020-06-24 | 2020-09-29 | 北京小米移动软件有限公司 | Awakening method and device of intelligent equipment, intelligent equipment and medium |
CN113552568A (en) * | 2020-04-24 | 2021-10-26 | 深圳市万普拉斯科技有限公司 | Ultrasonic proximity sensing method, device, computer equipment and storage medium |
CN114745812A (en) * | 2022-04-07 | 2022-07-12 | 北京紫光展锐通信技术有限公司 | Wake-up method and related device |
WO2022156438A1 (en) * | 2021-01-20 | 2022-07-28 | 华为技术有限公司 | Wakeup method and electronic device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101799545A (en) * | 2010-03-26 | 2010-08-11 | 北京物资学院 | Ultrasonic based dynamic distance measurement method and system |
CN104898939A (en) * | 2015-04-07 | 2015-09-09 | 联想(北京)有限公司 | Signal processing method and electronic device |
CN204965633U (en) * | 2015-05-08 | 2016-01-13 | 合肥君正科技有限公司 | Intelligent doorbell |
CN105916021A (en) * | 2015-12-15 | 2016-08-31 | 乐视致新电子科技(天津)有限公司 | Audio and video identification method based on ultrasonic waves and audio and video identification system thereof |
CN106341728A (en) * | 2016-10-21 | 2017-01-18 | 北京巡声巡影科技服务有限公司 | Product information displaying method, apparatus and system in video |
CN106778179A (en) * | 2017-01-05 | 2017-05-31 | 南京大学 | A kind of identity identifying method based on the identification of ultrasonic wave lip reading |
CN106797507A (en) * | 2014-10-02 | 2017-05-31 | 美商楼氏电子有限公司 | Low-power acoustic apparatus and operating method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110600058A (en) | Method and device for awakening voice assistant based on ultrasonic waves, computer equipment and storage medium | |
CN111566730B (en) | Voice command processing in low power devices | |
CN107403621B (en) | Voice wake-up device and method | |
CN108551686B (en) | Extraction and analysis of audio feature data | |
US11605372B2 (en) | Time-based frequency tuning of analog-to-information feature extraction | |
CN108665895B (en) | Method, device and system for processing information | |
WO2020038010A1 (en) | Intelligent device, voice wake-up method, voice wake-up apparatus, and storage medium | |
EP3047481A1 (en) | Local and remote speech processing | |
CN109844857B (en) | Portable audio device with voice capability | |
US20190147890A1 (en) | Audio peripheral device | |
CN109272991B (en) | Voice interaction method, device, equipment and computer-readable storage medium | |
CN104282307A (en) | Method, device and terminal for awakening voice control system | |
US20180174574A1 (en) | Methods and systems for reducing false alarms in keyword detection | |
US20170178627A1 (en) | Environmental noise detection for dialog systems | |
CN110175016A (en) | Start the method for voice assistant and the electronic device with voice assistant | |
CN110968353A (en) | Central processing unit awakening method and device, voice processor and user equipment | |
CN113963695A (en) | Awakening method, awakening device, equipment and storage medium of intelligent equipment | |
CN112951243A (en) | Voice awakening method, device, chip, electronic equipment and storage medium | |
CN110956968A (en) | Voice wake-up and voice wake-up function triggering method and device, and terminal equipment | |
CN110933345A (en) | Method for reducing television standby power consumption, television and storage medium | |
CN113077798A (en) | Old man calls for help equipment at home | |
CN108962259B (en) | Processing method and first electronic device | |
CN116705033A (en) | System on chip for wireless intelligent audio equipment and wireless processing method | |
CN104049707B (en) | Always-on low-power keyword detection | |
US20160163313A1 (en) | Information processing method and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20191220 |