WO2022233308A9 - Wearing detection method, wearable device, and storage medium - Google Patents
Wearing detection method, wearable device, and storage medium
- Publication number
- WO2022233308A9 (application PCT/CN2022/091059)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- audio signal
- signal
- wearable device
- microphone
- frequency
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/16—Sound input; Sound output
- G06F3/162—Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/10—Earpieces; Attachments therefor; Earphones; Monophonic headphones
- H04R1/1016—Earpieces of the intra-aural type
- H04R1/1041—Mechanical or electronic switches, or control elements
- H04R1/1091—Details not provided for in groups H04R1/1008 - H04R1/1083
- H04R3/00—Circuits for transducers, loudspeakers or microphones
Definitions
- the present application relates to the field of terminal control, and in particular to a wearing detection method and related devices.
- terminal devices such as mobile phones and computers
- peripheral wearable devices connected to terminal devices.
- taking earphones as an example: after wearing them, users often forget to turn off the earphones or to stop audio playback, leaving the earphones and the terminal device in the audio playback state, which drains power quickly, requires frequent charging, and degrades the user experience.
- in addition, the user has to manually control the playback state of the audio, which is inconvenient.
- the wearing detection function can automatically identify the wearing state of the wearable device, so that the device can perform appropriate operations according to that state. Taking the earphone as an example again: after determining that the earphone is in the ear, the earphone turns on audio output and audio capture, and after determining that it is out of the ear, it turns them off. This reduces the power consumption of the earphone and the terminal device, saves power, and improves the intelligence of the interaction between the user and the earphone. Wearing detection has therefore gradually become one of the essential features of earphone products.
- the embodiment of the present application provides a wearing detection method applied to a wearable device.
- wearing detection is performed by using the audio signal output by the speaker in the wearable device together with the audio signal received by the microphone to determine the wearing state of the wearable device, achieving more accurate detection and effectively reducing the false-recognition rate, so that further actions (such as playing audio) can be decided according to the wearing state, which improves the user experience.
- the second audio signal may be an infrasound signal or an audible-band signal. Because the speaker produces sound, the microphone of the wearable device correspondingly collects the third audio signal. Since the signal characteristics of the audio signal collected by the microphone differ between the worn and unworn states, the wearing state of the wearable device can be determined from the signal characteristics of the third audio signal collected each time.
- the existing microphone 130 and speaker 140 in the wearable device can be used to judge the wearing state by outputting the second audio signal and analyzing the characteristics of the corresponding input third audio signal, without relying on additional sensors. In other words, the wearable device does not need dedicated wearing-detection sensors, which reduces sensor stacking, lowers product cost, and allows smaller, lighter, and more flexible product designs.
- wearing detection is triggered only when the first audio signal is detected, which avoids continuous detection and thus reduces the power wasted by the speaker continuously outputting the second audio signal and by the processor continuously performing signal processing and analysis.
- when the wearable device is not playing audio and acquires the first audio signal collected by the microphone, it may first determine that the signal features of the first audio signal satisfy the wearing-detection entry condition.
- the wearing-detection entry condition is set based on the regularity of the audio signal generated when the user touches the wearable device while putting it on or taking it off.
- by matching the first audio signal against the wearing-detection entry condition, the device can judge whether the first audio signal matches the characteristics produced when the user puts on or takes off the wearable device. Only when the entry condition is met does it indicate that a putting-on or taking-off action may be occurring, i.e., that the wearing state may be changing; the wearing detection described in the implementations above is then performed as a second check, which makes the detection result more accurate.
- otherwise, the wearable device may merely have collected noise from situations such as an accidental touch rather than a change in wearing state, so there is no need to mobilize resources to execute steps such as outputting the second audio signal, thereby saving power.
- determining whether the signal features of the first audio signal satisfy the wearing-detection entry condition may be done by first obtaining the spectral features of the first audio signal in the first frequency interval, then determining the first cross-correlation coefficient between those spectral features and the preset first spectral feature; when the first cross-correlation coefficient reaches the first correlation threshold, the signal features of the first audio signal are determined to satisfy the entry condition.
- the first frequency interval can be preset according to the frequency band with obvious, well-discriminated characteristics in the audio signals generated by putting-on and taking-off actions, and the first spectral feature is preset based on the consistent pattern exhibited by a large number of audio signals generated by such actions.
- by matching the spectral features of the first audio signal in the first frequency interval against the preset first spectral feature, it can be determined whether the signal features of the first audio signal satisfy the wearing-detection entry condition.
- the first frequency interval includes 20–300 Hz or any sub-range of 20–300 Hz; it may also be a range wider than 20–300 Hz, or a sub-range thereof.
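As an illustration of the spectral-matching entry check described above, the following sketch computes the spectrum of a captured frame in the first frequency interval and correlates it with a preset reference spectrum. This is a minimal sketch, assuming a numpy environment; the 20–300 Hz band, the 0.8 threshold, and the reference-resampling step are hypothetical placeholders, not values fixed by the patent.

```python
import numpy as np

def entry_condition_spectral(signal, fs, ref_spectrum, band=(20, 300), threshold=0.8):
    """Check whether the signal's magnitude spectrum inside `band` matches a
    preset reference spectrum, using a normalized (Pearson) correlation."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    feat = spectrum[mask]
    # Length-align the reference by linear interpolation (illustrative choice)
    ref = np.interp(np.linspace(0, 1, feat.size),
                    np.linspace(0, 1, ref_spectrum.size), ref_spectrum)
    coef = np.corrcoef(feat, ref)[0, 1]  # first cross-correlation coefficient
    return coef >= threshold, coef
```

In practice the reference spectrum would be learned offline from many recorded putting-on/taking-off events; here any vector of in-band magnitudes can serve as a stand-in.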
- determining whether the signal features of the first audio signal satisfy the wearing-detection entry condition can also be done by first extracting the time-domain envelope of the first audio signal, then determining the second cross-correlation coefficient between that envelope and the preset first time-domain envelope; when the second cross-correlation coefficient reaches the second correlation threshold, the signal features of the first audio signal are determined to satisfy the entry condition.
- by matching the time-domain envelope of the first audio signal against the preset first time-domain envelope, it can be determined whether the signal features of the first audio signal satisfy the wearing-detection entry condition.
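The time-domain variant can be sketched in the same way. The envelope extractor below (rectify-and-smooth) and the 0.8 threshold are illustrative assumptions; the patent does not prescribe a particular envelope method.

```python
import numpy as np

def smooth_envelope(signal, win=64):
    """Simple time-domain envelope: rectify, then smooth with a moving average."""
    kernel = np.ones(win) / win
    return np.convolve(np.abs(signal), kernel, mode="same")

def entry_condition_envelope(signal, ref_envelope, threshold=0.8):
    """Compare the signal's time-domain envelope against a preset reference
    envelope via a normalized correlation (second cross-correlation coefficient)."""
    env = smooth_envelope(signal)
    # Length-align the reference by linear interpolation (illustrative choice)
    ref = np.interp(np.linspace(0, 1, env.size),
                    np.linspace(0, 1, ref_envelope.size), ref_envelope)
    coef = np.corrcoef(env, ref)[0, 1]
    return coef >= threshold, coef
```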
- alternatively, the analysis can be carried out in the time-domain and frequency-domain dimensions above simultaneously; if the result in either dimension meets its condition, the signal features of the first audio signal are determined to satisfy the wearing-detection entry condition.
- the signal amplitude of the first audio signal reaches the first signal amplitude threshold.
- it can be determined that the signal amplitude of the first audio signal reaches the first signal amplitude threshold in any of the following ways: determining that the effective (RMS) value of the first audio signal reaches the first amplitude threshold; or determining that the average amplitude of the first audio signal reaches the second amplitude threshold; or determining that the maximum amplitude of the first audio signal reaches the third amplitude threshold.
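The three amplitude measures just listed can be sketched as follows; the function name and the or-combination of configured checks are illustrative assumptions, not the patent's literal logic.

```python
import numpy as np

def amplitude_reached(signal, rms_thr=None, mean_thr=None, peak_thr=None):
    """Return True if any configured amplitude measure reaches its threshold:
    effective (RMS) value, mean absolute amplitude, or peak amplitude."""
    checks = []
    if rms_thr is not None:
        checks.append(np.sqrt(np.mean(signal ** 2)) >= rms_thr)   # effective value
    if mean_thr is not None:
        checks.append(np.mean(np.abs(signal)) >= mean_thr)        # average amplitude
    if peak_thr is not None:
        checks.append(np.max(np.abs(signal)) >= peak_thr)         # maximum amplitude
    return any(checks)
```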
- frequency-domain transformation may be performed on the third audio signal to obtain its frequency-domain characteristics in the second frequency interval, and the wearing state of the wearable device is then determined from those characteristics and the first frequency-domain feature.
- the second frequency interval is preset according to whether the second audio signal is an infrasound signal or an audible-band signal, and is a frequency band that captures the distinguishing characteristics of the audio signals collected in the worn and unworn states.
- the first frequency-domain feature is preset according to the pattern exhibited by a large number of audio signals collected in the worn and unworn states.
- the first frequency-domain feature includes a third signal amplitude threshold, and the wearing state of the wearable device can be determined by comparing the maximum amplitude of the frequency response of the third audio signal in the second frequency interval against the third signal amplitude threshold.
- the first frequency-domain feature includes a first frequency response; a third cross-correlation coefficient between the frequency response of the third audio signal in the second frequency interval and the first frequency response can be determined, and the wearing state of the wearable device is determined by comparing the third cross-correlation coefficient against the third correlation threshold.
- the second audio signal is an infrasound signal with a frequency range below 20 Hz, and the second frequency interval includes 0–20 Hz or any sub-range of 0–20 Hz; or, the second audio signal is an audible-band signal with a frequency range of 20 Hz–20 kHz, and the second frequency interval includes 20–300 Hz or any sub-range of 20–300 Hz.
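The amplitude-threshold classification above can be sketched as a single function: transform the microphone signal to the frequency domain, look at the second frequency interval, and compare the in-band peak against a threshold. The physical premise (the occluded ear canal boosting in-band energy when worn), the band, and the threshold value here are assumptions for illustration.

```python
import numpy as np

def wearing_state_from_spectrum(mic_signal, fs, band, amp_thr):
    """Classify the wearing state from the frequency response of the collected
    signal inside the detection band: if the in-band peak amplitude reaches
    `amp_thr`, the device is taken to be worn (illustrative decision rule)."""
    spec = np.abs(np.fft.rfft(mic_signal)) / len(mic_signal)  # normalized magnitude
    freqs = np.fft.rfftfreq(len(mic_signal), 1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak = spec[mask].max()
    return ("worn" if peak >= amp_thr else "not_worn"), peak
```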
- the second audio signal may be an infrasound signal or an audible domain signal.
- the infrasound signal allows wearing detection to be performed without the user perceiving it, while an audible-band signal can enhance the user's sense of intelligent interaction.
- when it is determined that the wearable device is playing audio, the fifth audio signal collected by the microphone is obtained, and the transfer function between the microphone and the speaker is determined from the fourth audio signal being played and the fifth audio signal; the signal characteristics of the transfer function are obtained, and the wearing state of the wearable device is determined from them. Because the fourth audio signal is a random, unknown signal, the transfer function must be determined from the fourth and fifth audio signals. Since the signal characteristics of the transfer function differ between the worn and unworn states, the wearing state of the wearable device can be determined from the signal characteristics of the transfer function obtained each time.
- the transfer function is calculated directly from the audio being played, without relying on sensors, making the design more flexible; there is also no need to output the second audio signal, which reduces unnecessary power consumption.
- the frequency-domain characteristics of the transfer function in the third frequency interval are obtained by performing frequency-domain transformation on the transfer function, and the wearing state of the wearable device is determined from those characteristics and the second frequency-domain feature.
- the third frequency interval is preset according to whether the second audio signal is an infrasound signal or an audible-band signal, and is a frequency band that captures the distinguishing characteristics of the transfer functions obtained in the worn and unworn states.
- the second frequency-domain feature is preset according to the pattern exhibited by a large number of transfer functions acquired in the worn and unworn states.
- the second frequency-domain feature includes a transfer-function amplitude threshold; the wearing state of the wearable device can be determined by comparing the maximum amplitude of the frequency response of the transfer function in the third frequency interval against that threshold.
- the second frequency-domain feature includes a second frequency response; a fourth cross-correlation coefficient between the frequency response of the transfer function in the third frequency interval and the second frequency response can be determined, and the wearing state of the wearable device is determined by comparing the fourth cross-correlation coefficient against the fourth correlation threshold.
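One standard way to estimate a speaker-to-microphone transfer function from an arbitrary playback signal, as in the description above, is the classic H1 estimator, H(f) = Sxy(f)/Sxx(f), averaged over windowed frames. This is a sketch of that general technique, not the patent's specific algorithm; the frame size and windowing are illustrative choices.

```python
import numpy as np

def estimate_transfer_function(x, y, fs, nfft=1024):
    """H1 estimate of the transfer function from speaker signal x (played audio)
    to microphone signal y, averaged over non-overlapping Hann-windowed frames."""
    nframes = min(len(x), len(y)) // nfft
    win = np.hanning(nfft)
    Sxx = np.zeros(nfft // 2 + 1)                 # input auto-spectrum accumulator
    Sxy = np.zeros(nfft // 2 + 1, dtype=complex)  # cross-spectrum accumulator
    for i in range(nframes):
        X = np.fft.rfft(win * x[i * nfft:(i + 1) * nfft])
        Y = np.fft.rfft(win * y[i * nfft:(i + 1) * nfft])
        Sxx += (X * np.conj(X)).real
        Sxy += np.conj(X) * Y
    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    H = Sxy / np.maximum(Sxx, 1e-12)  # guard against empty bins
    return freqs, H
```

The magnitude or frequency response of `H` in the third frequency interval could then be compared against a preset threshold or reference response, as the bullets above describe.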
- the present application provides a wearing detection method, which is applied to a wearable device, and the wearable device includes a microphone and a speaker.
- when the wearable device is not playing audio and the first audio signal collected by the microphone is obtained, the second audio signal is output through the speaker and the third audio signal collected by the microphone is obtained; the transfer function between the microphone and the speaker is determined from the second and third audio signals, its signal characteristics are obtained, and the wearing state of the wearable device is determined from those characteristics.
- the existing microphone 130 and speaker 140 in the wearable device can be used to calculate the transfer function between the output second audio signal and the input third audio signal and to analyze its features in order to determine the wearing state, without relying on additional sensors. In other words, no dedicated wearing-detection sensors need to be added to the wearable device, which reduces sensor stacking, lowers product cost, and allows smaller, lighter, and more flexible product designs. At the same time, wearing detection is triggered only when the first audio signal is detected, which avoids the power waste of continuous detection.
- the present application provides a wearing detection method, which is applied to a wearable device, and the wearable device includes a microphone, a speaker and a sensor.
- when the wearable device is not playing audio and it is determined that the sensing data collected by the sensor meets the wearing-detection entry condition, the first audio signal is output through the speaker and the second audio signal collected by the microphone is obtained; the signal features of the second audio signal are then obtained, and the wearing state of the wearable device is determined from them.
- the wearing-detection entry condition is set based on the regularity of the sensing data collected by the sensor when the user puts on or takes off the wearable device.
- when the user puts on or takes off the wearable device, the sensor collects sensing data; by matching that data against the wearing-detection entry condition, it can be judged whether the data matches the characteristics produced when the user puts on or takes off the device. Only when the entry condition is met does it indicate that a putting-on or taking-off action may be occurring and the wearing state may be changing; the device then performs a second check by outputting the first audio signal and analyzing the features of the corresponding input second audio signal to judge the wearing state, which makes the detection result more accurate.
- otherwise, the wearable device may merely have collected sensing data caused by other actions rather than a change in wearing state, so there is no need to mobilize resources to perform steps such as outputting the first audio signal, thereby saving power.
- the sensor includes a proximity sensor; when, according to the sensing data collected by the proximity sensor, it is determined that an object approaches or moves away from the wearable device, it is determined that the sensing data satisfies the wearing-detection entry condition.
- the present application provides a wearing detection method, which is applied to a wearable device, and the wearable device includes a microphone, a speaker, and a sensor.
- when the wearable device is not playing audio and it is determined that the sensing data collected by the sensor meets the wearing-detection entry condition, the first audio signal is output through the speaker and the second audio signal collected by the microphone is acquired; the transfer function between the microphone and the speaker is determined from the first and second audio signals, its signal characteristics are obtained, and the wearing state of the wearable device is determined from them.
- the present application provides a wearable device including a microphone, a speaker, a memory, and a processor, wherein the microphone is used to receive a sound signal and convert it into an audio signal, the speaker is used to convert an audio signal into a sound signal for output, and the memory is used to store computer-readable instructions (also called a computer program); when the computer-readable instructions are executed by the processor, the method provided by any implementation of the first or second aspect above is implemented.
- the microphone includes a feedback microphone.
- the present application provides a wearable device including a microphone, a speaker, a sensor, a memory, and a processor, wherein the microphone is used to receive a sound signal and convert it into an audio signal, the speaker is used to convert an audio signal into a sound signal for output, the sensor is used to collect sensing data, and the memory is used to store computer-readable instructions (also called a computer program); when the computer-readable instructions are executed by the processor, the method provided by any implementation of the third or fourth aspect above is implemented.
- the microphone includes a feedback microphone.
- the present application provides a computer storage medium, and the computer storage medium may be non-volatile.
- computer-readable instructions are stored in the computer storage medium, and when they are executed by a processor, the method provided by any implementation of the first to fourth aspects above is implemented.
- the present application provides a computer program product, which includes computer-readable instructions; when the computer-readable instructions are executed by a processor, the method provided by any implementation of the first to fourth aspects above is implemented.
- FIG. 1 is a schematic diagram of an application scenario of a wearing detection method provided by an embodiment of the present application
- FIG. 2A is a schematic structural diagram of a wireless earphone provided by an embodiment of the present application.
- FIG. 2B is a schematic structural diagram of another wireless earphone provided by the embodiment of the present application.
- FIG. 2C is a schematic diagram of a paired wireless earphone provided by the embodiment of the present application.
- FIG. 3 is a schematic flowchart of a wearing detection method provided by an embodiment of the present application.
- FIG. 4 is a schematic flowchart of another wearing detection method provided by an embodiment of the present application.
- FIG. 5 is a schematic diagram of time-window interception of an audio signal provided by an embodiment of the present application.
- FIG. 6 is an example diagram of the frequency response of a transfer function in the in-ear and off-ear states provided by an embodiment of the present application.
- FIG. 7 is an example diagram of the frequency response of a third audio signal in the in-ear and off-ear states provided by an embodiment of the present application.
- FIG. 8 is a schematic flowchart of another wearing detection method provided by an embodiment of the present application.
- FIG. 9 is an example spectrum diagram of audio signals collected by a microphone when the earphone is in the ear in various usage environments, provided by an embodiment of the present application.
- FIG. 10 is an example spectrum diagram of various types of audio signals received by a microphone, provided by an embodiment of the present application.
- FIG. 11 is an example diagram of a time-domain envelope provided by an embodiment of the present application.
- FIG. 12 is an example diagram of audio signals collected by a microphone in different scenarios, provided by an embodiment of the present application.
- FIG. 13 is a schematic flowchart of another wearing detection method provided by an embodiment of the present application.
- a connection can be established between a wearable device 10 (a wireless earphone is shown in the figure) and a terminal device 20 (a smart phone 201 , a notebook computer 202 , and a tablet computer 203 are shown in FIG. 1 ) for communication.
- the connection may be wired or wireless, among other connection methods.
- the pairing between the wearable device and the terminal device may be realized through a Bluetooth connection, so as to realize communication between the two.
- the terminal device can control the wearable device, and the wearing state of the wearable device can also affect some operation behaviors of the terminal device. For example, if the wearable device is a wireless headset, the terminal device can control whether the wireless headset plays audio and which audio it plays; at the same time, the wearing state of the wireless headset can affect when the terminal device triggers audio playback. Specifically, audio playback can be triggered when it is determined that the wireless headset is worn in the ear.
- the wearing state in the embodiments of the present application may include two states: worn and not worn.
- the worn state may indicate that the wearable device is currently worn by the user, and the unworn state may indicate that the wearable device is currently detached from the user.
- the worn state can also indicate that the wearable device is currently worn on a specific part of the user's body, and the unworn state can indicate that the wearable device is currently detached from that part of the body.
- the wearing state indicates that the earphone is in an ear-in state (also called on-ear), and the unworn state indicates that the earphone is in an ear-off state (also called off-ear).
- the in-ear state can specifically mean that the earphone is close to the human ear or eardrum in its worn position; the off-ear state can specifically mean that the earphone is not close to the human ear or eardrum, or is away from the position it needs to occupy when worn.
- if the wearable device is a watch, the wearing state indicates that the watch is near the user's wrist or arm, and the unworn state indicates that the watch is away from the wrist or arm.
- an embodiment of the present application provides a wearing detection method applied to a wearable device, aiming to identify the wearing state of the wearable device more accurately so that the terminal device can exercise better control based on that state.
- the wearable device of the embodiments of the present application may include glasses, sunglasses, earphones, watches, bracelets, etc.; each includes a processor for detecting the wearing state, a microphone, a speaker, and a communication module for transmitting instructions or information to the connected terminal device.
- the wearing detection method is similar for each wearable device, so the embodiments below take the earphone as an example to introduce the wearing detection method of the present application. The earphone can be a wired or wireless earphone, and can be a headset or an in-ear earphone; the embodiments mainly use a wireless earphone as an example.
- the wearing detection solution provided in the embodiments of the present application can be applied to, but is not limited to, the various wearable devices mentioned above.
- FIG. 2A and FIG. 2B are schematic structural diagrams of a wireless earphone according to an embodiment of the present application, which are collectively referred to as the wireless earphone 100 hereinafter.
- the wireless headset 100 may have more or fewer components than shown in the figures; two or more components may be combined, or the components may be configured differently.
- the various components shown may be implemented in hardware, software, or a combination of hardware and software, including one or more signal-processing and/or application-specific integrated circuits.
- the wireless headset 100 may include: a processor 110, a memory 120, a microphone 130, a speaker 140, an audio module 150, a communication module 160, a sensor 170, a battery 180, and the like.
- the processor 110 may include one or more processing units; for example, it may include a controller, a digital signal processor (DSP), and the like. The different processing units may be independent devices or may be integrated into one or more processors. The controller can be the nerve center and command center of the wireless headset 100; it can generate operation control signals according to instruction opcodes and timing signals, completing the control of instruction fetching and execution. The DSP is used to perform various kinds of digital signal processing on audio digital signals.
- the memory 120 may be used to store computer-executable program code, which includes instructions.
- the processor 110 executes the instructions stored in the memory 120 to execute various functional applications and data processing of the wireless earphone 100 .
- the microphone 130, also called a "mic" or "mike", is used to convert sound signals into audio signals (electrical signals).
- the microphone 130 may include a feedback microphone 131 (feedback microphone, FB-mic), a feedforward microphone 132 (feedforward microphone, FF-mic) and a call microphone 133 .
- the feedback microphone 131 is used to receive the noise signal inside the ear;
- the feedforward microphone 132 is used to receive the noise signal outside the ear;
- the call microphone 133 is used to receive the user's voice signal during a call.
- the speaker 140, also called a "loudspeaker" or "horn", is used to convert audio electrical signals into sound signals.
- the audio module 150 is connected with the microphone 130, the speaker 140 and the processor 110, and is used for converting the digital audio signal processed by the processor 110 into an analog audio signal output to the speaker 140, and also for converting the analog audio signal input by the microphone 130 into a digital audio signal output to the processor 110 .
- the communication module 160 is used to provide the wireless earphone 100 with the ability to communicate with external devices, and may include radio frequency transceiver circuits and antennas supporting various wireless connection modes. Specifically, it may be a Bluetooth module for pairing with a terminal device.
- the sensor 170 may include an acceleration sensor 171 . Further, in some scenarios, the sensor 170 may also include a proximity light sensor 172 or a capacitive proximity sensor 173 for detecting whether a specific substance approaches.
- the battery 180 is connected to the processor 110, the memory 120, the microphone 130, the speaker 140, the audio module 150, the communication module 160, the sensor 170, etc., and provides power for the above components.
- the wireless earphones 100 may also appear in pairs, where both wireless earphones 100 may be provided with the modules of the wireless earphone 100 shown in FIG. 2A and FIG. 2B and implement the functions corresponding to the above modules.
- Fig. 3 and Fig. 4 are schematic flowcharts of two wearing detection methods provided by the embodiment of the present application.
- the processor 110 uses the existing microphone 130 and speaker 140 in the wireless earphone 100 to determine whether the earphone is in the ear or out of the ear by analyzing the characteristics of the input and output audio signals, without relying on additional sensors. That is to say, there is no need to provide a dedicated wearing-detection sensor in the wireless earphone 100, which reduces sensor stacking, lowers product cost, and makes the product design smaller, lighter and more flexible.
- in the embodiment of Fig. 3, the processor executes S101, S1021 and S1022 when it determines that the wearable device is not playing audio; in the embodiment of Fig. 4, the processor executes S101 and S1023 when it determines that the wearable device is not playing audio.
- the wearing detection method shown in FIG. 3 and FIG. 4 will be introduced in detail below.
- the processor 110 determines whether the wearable device is playing audio; if it is playing audio, S103-S105 are executed; if it is not playing audio, S101, S1021 and S1022 are executed as shown in Figure 3, or S101 and S1023 are executed as shown in Figure 4.
- the processor 110 can adopt two different processing methods for the wireless earphone 100 in the two states of playing audio and not playing audio. Therefore, the processor 110 may first determine whether the wireless earphone 100 is playing audio, and determine the corresponding processing steps according to that state. Since the processor 110 is the control center of the wireless earphone 100, it can directly determine the current states of the wireless earphone 100, including whether audio is playing, what audio is playing, and so on; when the wireless earphone 100 is not playing audio it executes steps S101 and S1021-S1022 (or S101 and S1023), and when the wireless earphone 100 is playing audio it executes steps S103-S105.
- the microphone 130 involved in the embodiment of the present application may be a feedback microphone 131 (Fb-mic) that is relatively close to the human ear in an in-ear state.
- the following uses the Fb-mic 131 as an example for specific description.
- the contact between the earphone and the human ear produces a sound in an instant; the Fb-mic 131 receives the sound signal generated by the contact, converts it into the first audio signal, and sends the first audio signal to the processor 110.
- after the processor 110 acquires the first audio signal, it outputs the second audio signal through the speaker 140, that is, it drives the speaker 140 to convert the second audio signal into a sound signal for playback.
- the second audio signal may be a section of audio signal preset in the processor 110, and the processor 110 calls the second audio signal to play after acquiring the first audio signal.
- the second audio signal can be set as an infrasound signal, that is, an audio signal with a frequency of less than 20 Hz. Since the frequency of the infrasound signal is in a range that cannot be perceived by the human ear, using an infrasound signal allows wearing detection to be performed without the user perceiving it.
- the second audio signal can also be set as an audible domain signal that can be perceived by the human ear, that is, an audio signal with a frequency in the range of 20 Hz-20 kHz, which can improve the interaction between the user and the earphone.
- the second audio signal can be a single-frequency signal (such as 5Hz, 100Hz), or a signal of a frequency range (such as 5-10Hz, 100-1000Hz), which can be set according to the actual situation, and there is no specific limitation in this application.
- when the processor 110 drives the speaker 140 to output the second audio signal, a sound signal is generated and the Fb-mic 131 immediately collects the third audio signal. To put it simply, at the moment the speaker 140 emits the sound signal, the Fb-mic 131 also collects the sound signal, converts it into the third audio signal and sends it to the processor 110 .
- the first audio signal collected by the microphone 130 may be a section of signal including multiple frames. A sliding time window may be used to intercept one frame of signal in each time window for processing (signals intercepted by 4 time windows are shown in Figure 5); the processing steps are the same for each frame, so the embodiment of the present application does not distinguish them specifically.
- the second audio signal, the third audio signal, the fourth audio signal and the fifth audio signal in the following text can also be intercepted and processed through the sliding time window, and will not be described in detail below.
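- as an illustrative sketch (not part of this disclosure), the sliding-time-window interception described above may be implemented as follows; the frame length and hop size are assumed example values:

```python
import numpy as np

def sliding_frames(signal, frame_len, hop):
    """Intercept one frame of signal per time window from a multi-frame signal."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

x = np.arange(10)
frames = sliding_frames(x, frame_len=4, hop=2)  # 4 overlapping windows
```

Each intercepted frame is then processed independently by the same detection steps.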
- the microphone 130 will also receive the first audio signal, thereby triggering the processor 110 to output the second audio signal through the speaker 140 and acquire the third audio signal collected by the microphone 130 .
- conversely, when the earphone does not contact the human ear, the first audio signal will not be generated, and the processor 110 will not be triggered to perform the subsequent detection.
- when the processor 110 acquires the first audio signal, it may first determine whether the signal amplitude of the first audio signal reaches the first signal amplitude threshold; if it reaches the first signal amplitude threshold, the subsequent steps are performed, otherwise no further steps are performed. This can further improve wearing detection efficiency and save power consumption.
- for example, when the effective value of the first audio signal (that is, the root mean square of the amplitude) reaches the first signal amplitude threshold, the subsequent steps are performed; otherwise, the subsequent steps are not performed.
- whether the signal amplitude threshold is reached may also be determined according to the average amplitude or the maximum amplitude of the first audio signal, which is not specifically limited in this embodiment of the present application.
- the signal amplitude can be measured by the signal voltage.
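- a minimal sketch of the effective-value gate described above is given below; the frame length and threshold value are assumptions for illustration, not values from this disclosure:

```python
import numpy as np

def should_trigger_detection(frame, rms_threshold):
    """Gate the detection flow on the effective value (RMS) of the first audio signal."""
    rms = np.sqrt(np.mean(np.square(np.asarray(frame, dtype=float))))
    return rms >= rms_threshold

quiet_frame = np.zeros(160)          # no contact: amplitude stays below threshold
contact_frame = 0.5 * np.ones(160)   # contact transient: amplitude reaches threshold
```

Only frames whose RMS reaches the threshold trigger the subsequent playback and analysis steps, saving power when the earphone is untouched.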
- the processor 110 may execute S1021-S1022, and this technical solution will be described below first.
- the processor 110 determines a first transfer function between the microphone 130 and the speaker 140 according to the second audio signal and the third audio signal.
- the transfer function is a function used to represent the relationship between the input and output of an object, specifically, it can be a function obtained by the ratio of the output to the input.
- the output is equivalent to the third audio signal (the audio signal collected by the Fb-mic 131), and the input is equivalent to the second audio signal (the audio signal output by the speaker 140). Therefore, the ratio of the third audio signal to the second audio signal may be used as the first transfer function.
- the ratio of the third audio signal to the second audio signal can be calculated directly from their time domain signals, or a frequency domain transformation can be performed on the third audio signal and the second audio signal first and the ratio of the two then calculated.
- the first transfer function can be calculated as the ratio of the Laplace transform of the third audio signal to the Laplace transform of the second audio signal, or as the ratio of the Fourier transform of the third audio signal to the Fourier transform of the second audio signal.
- alternatively, the ratio of the second audio signal to the third audio signal can be used as the first transfer function. There are many ways to calculate the first transfer function, which is not limited in this application; any function that can characterize the relationship between the third audio signal and the second audio signal may be the first transfer function referred to in the embodiment of the present application.
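- as a non-limiting numerical sketch, the Fourier-transform-ratio variant above can be computed as follows; the 5 Hz excitation, sampling rate and 0.5 attenuation are assumed values chosen only to make the example concrete:

```python
import numpy as np

fs, n = 1000, 1000                       # assumed sampling rate and frame length
t = np.arange(n) / fs
second = np.sin(2 * np.pi * 5 * t)       # second audio signal (speaker output), 5 Hz
third = 0.5 * second                     # third audio signal (Fb-mic), attenuated path

def first_transfer_function(inp, out):
    """H(f) = FFT(output) / FFT(input), evaluated bin by bin."""
    num, den = np.fft.rfft(out), np.fft.rfft(inp)
    h = np.zeros_like(num)
    # only divide where the excitation actually has energy
    mask = np.abs(den) > 1e-6 * np.abs(den).max()
    h[mask] = num[mask] / den[mask]
    return h

h = first_transfer_function(second, third)
# bin 5 corresponds to 5 Hz; |H| there recovers the 0.5 path attenuation
```

Guarding the division against near-zero excitation bins avoids meaningless ratios outside the band where the second audio signal has energy.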
- the processor 110 acquires a signal feature of the first transfer function, and determines a wearing state of the wearable device according to the signal feature of the first transfer function.
- the earphone is in a relatively closed space in the in-ear state and in an open space in the out-of-ear state. Therefore, when the speaker 140 outputs the same second audio signal, the third audio signal collected by the Fb-mic 131 when the wireless earphone 100 is in the in-ear state is different from the third audio signal collected by the Fb-mic 131 when the wireless earphone 100 is in the out-of-ear state. Therefore, the first transfer function of the wireless earphone 100 will also differ between the in-ear state and the out-of-ear state.
- the embodiment shown in FIG. 3 of the present application determines the wearing state of the wearable device by analyzing the characteristics of the first transfer function obtained in S1021.
- the signal features may include various parameters, functions, or graphs that characterize the signal features, and specifically include various time-domain features and frequency-domain features obtained through time-domain and frequency-domain analysis.
- the processor 110 may extract the frequency-domain features of the first transfer function, where the frequency-domain features may include frequency-domain transformed characteristic curves such as frequency response, energy spectrum, and power spectrum, or may include Further extracted features such as amplitude, energy value, and power value.
- the processor 110 compares the extracted frequency-domain features with the frequency-domain features preset in the processor 110 to determine whether the wireless earphone 100 is in the ear-in state or the ear-out state.
- the frequency domain features preset in the processor 110 are set based on the frequency domain characteristics of the wireless earphone 100 in the in-ear state and the out-of-ear state. Therefore, after the frequency domain characteristics of the first transfer function are compared with the preset frequency domain features, the wearing state of the wireless earphone 100 can be determined.
- the frequency domain feature of the first transfer function includes a frequency response of the first transfer function in the third frequency range.
- the processor 110 may perform frequency domain transformation (for example, Fourier transform) on the first transfer function to obtain a frequency response corresponding to the first transfer function, then extract the frequency domain features corresponding to the third frequency interval in the frequency response, and then determine the wearing state of the wearable device according to the frequency domain characteristics of the first transfer function in the third frequency interval and the second frequency domain characteristics.
- the third frequency interval is set corresponding to the frequency interval of the second audio signal.
- if the second audio signal is an infrasound signal, the spectral characteristics of the first transfer function will appear relatively stable and obvious at the corresponding frequency. Therefore, when the second audio signal is an infrasound signal, the third frequency interval may be a frequency interval centered on the single frequency or frequency interval corresponding to the second audio signal.
- for example, if the frequency of the second audio signal is 5 Hz, the third frequency interval can be a frequency range near 5 Hz such as 4-6 Hz, 3-6 Hz or 2-10 Hz; if the frequency of the second audio signal is 5-10 Hz, the third frequency interval may be 4-11 Hz, 3-12 Hz, 2-15 Hz and so on.
- the second frequency domain feature is preset in the processor 110 and is set, for the purpose of judging the wearing state, according to the law and characteristics of the frequency response of the transfer function of the wireless earphone 100 in the in-ear state and the out-of-ear state. Specifically, the second frequency domain feature is also set based on the law and characteristics of the frequency where the second audio signal is located.
- Fig. 6 shows a frequency response of a transfer function in an ear-in state and an ear-out state when the second audio signal is an audible domain signal. It can be seen that in the low frequency band, the degree of distinction between the two is more obvious. Utilizing this characteristic, the second frequency domain characteristic can be set according to the frequency domain characteristics of the transfer function in the ear-in state and the ear-out state.
- the second frequency domain feature can be a similar frequency response curve fitted between the frequency response curves of the transfer function in the in-ear state and the out-of-ear state; the second frequency domain feature can also be a first transfer function amplitude threshold set between the maximum amplitudes of the two frequency response curves; the second frequency domain feature can also be a plurality of sampling amplitudes obtained after sampling the fitted frequency response curve.
- the second frequency domain feature can also be set in other ways with reference to the frequency response of the transfer function in the ear-in state and the ear-out state, which is not specifically limited in this application.
- if the second audio signal preset by the processor 110 is an infrasound signal, the third frequency interval will be set to adapt to the frequency characteristics of the infrasound signal, and the second frequency domain feature will also be set based on the characteristics of the transfer function in the in-ear state and the out-of-ear state after the infrasound signal is emitted; similarly, if the second audio signal preset by the processor 110 is an audible domain signal, the third frequency interval will be set in the low frequency band to adapt to the frequency characteristics of the audible domain signal, and the second frequency domain feature will also be set based on the characteristics of the transfer function in the low frequency band in the in-ear state and the out-of-ear state after the audible domain signal is emitted.
- since the frequency domain features of interest are mainly in the low frequency band of the audible domain or the frequency band of the infrasound signal, the high frequency components can be further filtered out by a low-pass filter after performing the frequency domain transformation on the first transfer function, to reduce interference in subsequent analysis.
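- a crude analysis-time low-pass of the kind described above can be sketched by zeroing spectral bins above a cutoff; the cutoff frequency and the test tones are assumed example values, and a practical implementation might instead use a designed IIR/FIR filter:

```python
import numpy as np

def lowpass_fft(x, fs, cutoff_hz):
    """Crude analysis-time low-pass: zero spectral bins above the cutoff."""
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(x))

fs, n = 1000, 1000
t = np.arange(n) / fs
low = np.sin(2 * np.pi * 5 * t)                 # band of interest (low/infrasound band)
mixed = low + np.sin(2 * np.pi * 200 * t)       # plus high-frequency interference
cleaned = lowpass_fft(mixed, fs, cutoff_hz=50)  # interference removed before analysis
```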
- the frequency domain characteristics of the first transfer function in the third frequency interval also correspond to the second frequency domain characteristics, where the frequency domain characteristics can be the frequency response curve itself, or the maximum amplitude in the frequency response curve, It may also be the amplitude of multiple sampling frequency points corresponding to the frequency response curve, and the like. For different frequency domain features, different methods may be used to determine the wearing state.
- the frequency domain characteristic of the first transfer function in the third frequency interval may be the maximum magnitude of the frequency response of the first transfer function in the third frequency interval, and the second frequency domain characteristic may be the first Transfer function magnitude threshold.
- the processor 110 may compare the maximum magnitude of the frequency response of the first transfer function in the third frequency range with the magnitude threshold of the first transfer function to determine the wearing state of the wireless earphone 100 .
- if the maximum magnitude of the frequency response of the first transfer function in the third frequency interval is greater than or equal to the first transfer function magnitude threshold, the processor 110 may determine that the wireless earphone 100 is in the in-ear state; if the maximum magnitude of the frequency response of the first transfer function in the third frequency interval is smaller than the first transfer function magnitude threshold, the processor 110 may determine that the wireless earphone 100 is in the out-of-ear state.
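- the peak-magnitude comparison above can be sketched as follows; the frequency grid, band and threshold are illustrative assumptions, not values from this disclosure:

```python
import numpy as np

def wearing_state_by_peak(h_mag, freqs, band, amp_threshold):
    """Compare the peak transfer-function magnitude inside the third
    frequency interval against the first transfer function amplitude threshold."""
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak = h_mag[mask].max()
    return "in-ear" if peak >= amp_threshold else "out-of-ear"

freqs = np.arange(0, 20.0)                      # assumed analysis grid (Hz)
in_ear_mag = np.where(freqs == 5, 0.8, 0.05)    # sealed ear: strong coupling at 5 Hz
out_ear_mag = np.full_like(freqs, 0.05)         # open space: uniformly weak response
```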
- alternatively, the frequency domain characteristic of the first transfer function in the third frequency interval may be the frequency response curve in the third frequency interval, and the second frequency domain characteristic is correspondingly a frequency response curve preset according to the characteristics of the in-ear state and the out-of-ear state, which is recorded as the second frequency response.
- the processor 110 may determine a fourth cross-correlation coefficient between the frequency response of the first transfer function in the third frequency range and the second frequency response, and then determine the wearing state of the wireless earphone 100 according to the fourth cross-correlation coefficient and the fourth correlation threshold .
- the cross-correlation coefficient may be a result obtained by calculating two signals through a cross-correlation function, and is used to represent the degree of similarity between the two signals.
- specifically, the fourth cross-correlation coefficient between the curve of the frequency response of the first transfer function in the third frequency interval and the curve of the preset second frequency response can be calculated, that is, the degree of similarity between the two can be determined, and the fourth cross-correlation coefficient then compared with the preset fourth correlation threshold; if the fourth cross-correlation coefficient is greater than or equal to the fourth correlation threshold, the processor 110 can determine that the wireless earphone 100 is in the in-ear state; if the fourth cross-correlation coefficient is less than the fourth correlation threshold, the processor 110 may determine that the wireless earphone 100 is in the out-of-ear state.
- the fourth correlation threshold may be specifically set according to specific situations, for example, 90%, which is not specifically limited in this application.
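- the curve-similarity check can be sketched as a normalized zero-lag cross-correlation; the preset curve, the measured curve and the 0.9 threshold below are assumed example values:

```python
import numpy as np

def cross_correlation_coefficient(curve_a, curve_b):
    """Similarity of two frequency-response curves as a normalized
    zero-lag cross-correlation (1.0 means identical shape)."""
    a, b = np.asarray(curve_a, float), np.asarray(curve_b, float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

preset = np.array([0.2, 0.6, 0.9, 0.6, 0.2])      # assumed second frequency response
measured = np.array([0.25, 0.55, 0.95, 0.6, 0.2]) # measured response, very similar

coef = cross_correlation_coefficient(measured, preset)
state = "in-ear" if coef >= 0.9 else "out-of-ear"  # 0.9: example correlation threshold
```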
- the frequency domain characteristic of the first transfer function in the third frequency interval may also be the amplitudes corresponding to multiple sampling frequency points of the frequency response of the first transfer function in the third frequency interval, and the second frequency domain feature is correspondingly the amplitudes at multiple sampling frequency points of the second frequency response preset according to the characteristics of the in-ear state and the out-of-ear state.
- the processor 110 may compare the amplitudes corresponding to the multiple sampling frequency points of the frequency response of the first transfer function with the amplitudes corresponding to the preset multiple sampling frequency points in one-to-one correspondence, when the sampling amplitude of the first transfer function When the magnitudes exceeding a certain ratio are greater than or equal to the corresponding preset magnitudes, the processor 110 may determine that the wireless earphone 100 is in the ear-in state; otherwise, the processor 110 may determine that the wireless earphone 100 is in the ear-out state.
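- the one-to-one sampling-point comparison can be sketched as follows; the sampling magnitudes and the 80% ratio are assumed example values:

```python
def wearing_state_by_samples(measured, preset, ratio=0.8):
    """In-ear when more than `ratio` of the sampled magnitudes meet or
    exceed the corresponding preset magnitudes (one-to-one comparison)."""
    hits = sum(m >= p for m, p in zip(measured, preset))
    return "in-ear" if hits / len(preset) > ratio else "out-of-ear"

preset = [0.3, 0.5, 0.7, 0.5, 0.3]        # assumed preset sampling magnitudes
sealed = [0.35, 0.55, 0.75, 0.55, 0.35]   # all 5 points exceed -> in-ear
open_fit = [0.1, 0.2, 0.3, 0.2, 0.1]      # 0 of 5 points exceed -> out-of-ear
```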
- the processor 110 may execute S1023, and this technical solution will be described below.
- the processor 110 acquires signal features of the third audio signal, and determines a wearing state of the wearable device according to the signal features of the third audio signal.
- the earphone is in a relatively closed space in the in-ear state and in an open space in the out-of-ear state. Therefore, when the speaker 140 outputs the same second audio signal, the third audio signal collected by the Fb-mic 131 when the wireless earphone 100 is in the in-ear state is different from the third audio signal collected by the Fb-mic 131 when the wireless earphone 100 is in the out-of-ear state. Therefore, the embodiment shown in FIG. 4 of the present application determines the wearing state of the wearable device by directly analyzing the characteristics of the third audio signal obtained in S101.
- in S1021-S1022, what needs to be extracted is the signal characteristics of the first transfer function, which are compared with signal characteristics preset based on the rules and characteristics presented by the first transfer function when the wireless earphone 100 is in the in-ear state and the out-of-ear state; in S1023, what needs to be extracted is the signal features of the third audio signal, which are compared with signal features preset according to the rules and characteristics of the third audio signal when the wireless earphone 100 is in the in-ear state and the out-of-ear state.
- since the second audio signal in the embodiment of this application is a preset known signal, the regularity of the audio signal collected by the microphone in the in-ear and out-of-ear states when the wireless earphone 100 is excited by the second audio signal can be used directly as the judgment standard, without further acquiring the transfer function; the algorithm is simpler, and the detection is faster.
- the signal features may include various parameters, functions, or graphs that characterize the signal features, and specifically include various time-domain features and frequency-domain features obtained through time-domain and frequency-domain analysis.
- the processor 110 can extract the frequency-domain features of the third audio signal, where the frequency-domain features can include frequency-domain transformed characteristic curves such as frequency response, frequency spectrum, energy spectrum, and power spectrum, or can include characteristic curves derived from these characteristics. Features such as amplitude, energy value, and power value that are further extracted from the curve.
- the processor 110 compares the extracted frequency-domain features with the frequency-domain features preset in the processor 110 to determine whether the wireless earphone 100 is in the ear-in state or the ear-out state.
- the frequency domain feature of the third audio signal includes a frequency response of the third audio signal in the second frequency interval.
- the processor 110 may perform frequency domain transformation (such as Fourier transform) on the third audio signal to obtain a frequency response corresponding to the third audio signal, then extract the frequency domain features corresponding to the second frequency interval in the frequency response, and then determine the wearing state of the wearable device according to the frequency domain characteristics of the third audio signal in the second frequency interval and the first frequency domain characteristics.
- the second frequency interval is also set corresponding to the frequency interval of the second audio signal.
- the first frequency domain feature is preset in the processor 110, and is a frequency domain feature, used for judging the wearing state, that is set according to the rules and characteristics of the frequency response of the third audio signal when the wireless earphone 100 is in the in-ear state and the out-of-ear state.
- the first frequency domain feature is also set based on the law and characteristics of the frequency where the second audio signal is located.
- Fig. 7 shows a frequency response of a third audio signal in an ear-in state and an ear-out state when the second audio signal is an infrasound signal close to 20 Hz. It can be seen that near 20Hz, the degree of distinction between the two is more obvious.
- the first frequency domain feature can be set according to the frequency domain feature of the third audio signal in the ear-in state and the ear-out state.
- the first frequency domain feature can be a similar frequency response curve fitted between the frequency response curves of the third audio signal near 20 Hz in the in-ear state and the out-of-ear state; the first frequency domain feature can also be a third signal amplitude threshold set between the maximum amplitudes of the two frequency response curves; the first frequency domain feature may also be a plurality of sampling amplitudes obtained after sampling the fitted frequency response curve.
- the first frequency domain feature can also be set in other ways with reference to the frequency response of the third audio signal in the ear-in state and the ear-out state, which is not specifically limited in this application.
- if the second audio signal preset by the processor 110 is an infrasound signal, the second frequency interval will be set to adapt to the frequency characteristics of the infrasound signal, and the first frequency domain feature will also be set based on the characteristics of the third audio signal in the in-ear state and the out-of-ear state after the infrasound signal is emitted; similarly, if the second audio signal preset by the processor 110 is an audible domain signal, the second frequency interval will be set in the low frequency band to adapt to the frequency characteristics of the audible domain signal, and the first frequency domain feature will also be set based on the characteristics of the third audio signal in the low frequency band in the in-ear state and the out-of-ear state after the audible domain signal is emitted.
- since the frequency domain features of interest are mainly in the low frequency band of the audible domain or the frequency band of the infrasound signal, after performing the frequency domain transformation on the third audio signal, the high frequency components can be further filtered out by a low-pass filter to reduce interference in subsequent analysis.
- the frequency domain characteristics of the third audio signal in the second frequency interval also correspond to the first frequency domain characteristics, where the frequency domain characteristics can be the frequency response curve itself, or the maximum amplitude in the frequency response curve, It may also be the amplitude of multiple sampling frequency points corresponding to the frequency response curve, and the like. For different frequency domain features, different methods may be used to determine the wearing state.
- the frequency domain feature of the third audio signal in the second frequency interval may be the maximum amplitude of the frequency response of the third audio signal in the second frequency interval, and the first frequency domain feature may be the third signal amplitude threshold.
- the processor 110 may compare the maximum amplitude of the frequency response of the third audio signal in the second frequency interval with the third signal amplitude threshold to determine the wearing state of the wireless earphone 100 .
- if the maximum amplitude of the frequency response of the third audio signal in the second frequency interval is greater than or equal to the third signal amplitude threshold, the processor 110 may determine that the wireless earphone 100 is in the in-ear state; if the maximum amplitude of the frequency response of the third audio signal in the second frequency interval is smaller than the third signal amplitude threshold, the processor 110 may determine that the wireless earphone 100 is in the out-of-ear state.
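- the S1023-style decision made directly on the mic signal's spectrum (without computing a transfer function) can be sketched as follows; the 20 Hz tone, sampling rate, band and threshold are assumed example values:

```python
import numpy as np

def wearing_state_from_mic(third_sig, fs, band, amp_threshold):
    """Decide the wearing state directly from the spectrum of the third
    audio signal collected by the microphone."""
    mag = np.abs(np.fft.rfft(third_sig)) / len(third_sig)   # single-sided, scaled
    freqs = np.fft.rfftfreq(len(third_sig), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak = mag[mask].max()
    return "in-ear" if peak >= amp_threshold else "out-of-ear"

fs, n = 1000, 1000
t = np.arange(n) / fs
in_ear_sig = 1.0 * np.sin(2 * np.pi * 20 * t)    # sealed ear: strong 20 Hz pickup
out_ear_sig = 0.1 * np.sin(2 * np.pi * 20 * t)   # open air: weak 20 Hz pickup
```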
- alternatively, the frequency domain characteristic of the third audio signal in the second frequency interval may be the frequency response curve in the second frequency interval, and the first frequency domain characteristic is correspondingly a frequency response curve preset according to the characteristics of the in-ear state and the out-of-ear state, which is recorded as the first frequency response.
- the processor 110 may determine a third correlation coefficient between the frequency response of the third audio signal in the second frequency range and the first frequency response, and then determine the wearing state of the wireless earphone 100 according to the third correlation coefficient and the third correlation threshold .
- if the third cross-correlation coefficient is greater than or equal to the third correlation threshold, the processor 110 may determine that the wireless earphone 100 is in the in-ear state; if the third cross-correlation coefficient is less than the third correlation threshold, the processor 110 may determine that the wireless earphone 100 is in the out-of-ear state.
- the third correlation threshold may be specifically set according to specific situations, for example, 90%, which is not specifically limited in this application.
- the frequency domain feature of the third audio signal in the second frequency interval may also be the amplitudes corresponding to multiple sampling frequency points of the frequency response of the third audio signal in the second frequency interval, and the first frequency domain feature is correspondingly the amplitudes at multiple sampling frequency points of the first frequency response preset according to the characteristics of the in-ear state and the out-of-ear state.
- the processor 110 may compare the amplitudes corresponding to the multiple sampling frequency points of the frequency response of the third audio signal with the amplitudes corresponding to the preset multiple sampling frequency points in one-to-one correspondence, when the sampling amplitude of the third audio signal When the magnitudes exceeding a certain ratio are greater than or equal to the corresponding preset magnitudes, the processor 110 may determine that the wireless earphone 100 is in the ear-in state; otherwise, the processor 110 may determine that the wireless earphone 100 is in the ear-out state.
- the processor 110 may send an audio playback instruction through the communication module 160 to the terminal device connected to the wireless earphone 100; after receiving the audio playback instruction, the terminal device can play the corresponding audio according to the last playback record.
- the terminal device may also perform operations such as randomly playing audio, which is not specifically limited in this embodiment of the present application.
- the processor 110 may also send the first wearing indication information to the terminal device connected to the wireless earphone 100 through the communication module 160, where the first wearing indication information is used to indicate that the wireless earphone 100 is in the ear-in state.
- after receiving the first wearing indication information, the terminal device can perform various operations. For example, the terminal device can play audio, output a prompt message asking the user whether to play audio, record the current wearing state of the wireless earphone 100 in the memory as the in-ear state, change the wearable device access state icon in the display screen, and so on. Furthermore, if the terminal device is currently playing audio through its own speaker or/and collecting sound signals with its own microphone, after receiving the first wearing indication information the terminal device can also send the audio signal originally sent to its own speaker to the speaker 140 of the wireless earphone 100 for playback, and collect the sound signal through the microphone 130 of the wireless earphone 100 . In a specific implementation, as long as related operations based on the in-ear state can be implemented, this embodiment of the present application does not specifically limit them.
- the processor 110 may not perform any operation, or may send second wearing indication information to the terminal device connected to the wireless earphone 100 through the communication module 160, where the second wearing indication information is used to indicate that the wireless earphone 100 is in the ear-out state.
- the processor 110 may also start timing from the moment the wireless earphone 100 is determined to be in the ear-out state; if the timing exceeds a certain threshold and no indication is received during that period that the wearing state of the wireless earphone 100 has changed to the ear-in state, functional components such as the microphone 130 and the speaker 140 of the wireless earphone 100 can be turned off, putting the wireless earphone 100 in a standby state to save its power consumption.
- when a wake-up instruction is received or a sensor detects that the wireless earphone 100 is lifted, etc., the disabled functional components can be re-enabled, and specific operations can then be performed according to the specific scenario.
- the processor 110 may perform subsequent operations according to the wearing states of the two wireless earphones 100 .
- when both wireless earphones 100 are in the ear-in state, the above-mentioned operations that the wireless earphone 100 or the terminal device needs to perform in the ear-in state can be performed.
- when either wireless earphone 100 is in the ear-out state, the above-mentioned operations that the wireless earphone 100 or the terminal device needs to perform in the ear-out state can be performed.
- one of the two wireless earphones 100 may serve as the main earphone, and the other may serve as the auxiliary earphone.
- the auxiliary earphone can send its own wearing state to the main earphone, and the main earphone sends the wearing states (first/second wearing indication information) of the two earphones to the terminal device together.
- S103-S105 may be executed, which will be described in detail below.
- the processor 110 acquires a fifth audio signal collected by the microphone 130.
- when the wireless earphone 100 is playing audio, since the processor 110 is the control center of the wearable device, the processor 110 may have previously obtained the fourth audio signal sent by the terminal device through the communication module 160, or the fourth audio signal may already be stored inside the processor 110; in either case, the processor 110 has in fact acquired the fourth audio signal.
- the speaker 140 outputs the fourth audio signal; since playing the fourth audio signal through the speaker 140 generates a sound signal, the Fb-mic 131 will immediately collect the fifth audio signal.
- the Fb-mic 131 also collects the sound signal, converts the sound signal into a fifth audio signal and sends it to the processor 110 .
- the processor 110 determines a second transfer function between the microphone 130 and the speaker 140 according to the fourth audio signal and the fifth audio signal being played.
- the output is equivalent to the fifth audio signal (the audio signal collected by the Fb-mic 131), and the input is equivalent to the fourth audio signal (the audio signal output by the speaker 140); therefore, the ratio of the fifth audio signal to the fourth audio signal is the second transfer function.
- the ratio of the fifth audio signal to the fourth audio signal may be calculated directly from the time domain signals of the two, or some transformation may be performed on the fifth audio signal and the fourth audio signal before calculating the ratio.
- the second transfer function can be calculated as the ratio of the Laplace transform of the fifth audio signal to the Laplace transform of the fourth audio signal, or as the ratio of the Fourier transform of the fifth audio signal to the Fourier transform of the fourth audio signal; the ratio of the fourth audio signal to the fifth audio signal can also be used as the second transfer function.
- any function characterizing the relationship between the two audio signals may serve as the second transfer function referred to in the embodiment of the present application.
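- as one concrete, non-limiting way to realize the ratio-of-Fourier-transforms variant above, the second transfer function can be estimated per frequency bin from the played and captured signals; this is an illustrative sketch only, with a hypothetical regularization term `eps` to avoid division by zero at near-empty bins:

```python
import numpy as np

def estimate_transfer_function(played, captured, eps=1e-12):
    """Estimate H(f) = Y(f) / X(f), where X is the Fourier transform of the
    fourth audio signal (speaker output) and Y is the Fourier transform of
    the fifth audio signal (Fb-mic capture)."""
    x = np.fft.rfft(np.asarray(played, dtype=float))
    y = np.fft.rfft(np.asarray(captured, dtype=float))
    return y / (x + eps)  # eps regularizes bins where the input is ~0
```

For a channel that simply doubles the signal, the estimated magnitude at the dominant frequency bin is approximately 2.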
- the fourth audio signal in the embodiment of the present application may be an audio signal in audio playback scenarios such as music and video, rather than a preset audio signal, so its frequency range falls within the audible domain that can be perceived by the human ear, that is, an audio signal with a frequency range of 20 Hz-20 kHz.
- moreover, since it is not a preset audio signal, it is generally a signal concentrated in a certain frequency range (for example, 100-1000 Hz).
- the processor 110 acquires frequency domain information of the second transfer function, and determines a wearing state of the wearable device according to the frequency domain information of the second transfer function.
- the processor 110 may extract the frequency domain information of the second transfer function, where the frequency domain information may include frequency-domain characteristic curves such as the frequency response, frequency spectrum, energy spectrum, and power spectrum, and may also include frequency domain features such as amplitude, energy value, and power value further extracted from these curves.
- the processor 110 compares the extracted frequency domain information with the frequency domain information preset in the processor 110 to determine whether the wireless earphone 100 is in the ear-in state or the ear-out state.
- since the frequency domain information preset in the processor 110 is set based on the characteristics of the frequency domain information of the wireless earphone 100 in the ear-in state and the ear-out state, the wearing state of the wireless earphone 100 can be determined after comparing the frequency domain information of the second transfer function with the preset frequency domain information.
- the frequency domain feature of the second transfer function includes a frequency response of the second transfer function in a fourth frequency interval.
- the processor 110 may perform frequency domain transformation (for example, Fourier transform) on the second transfer function to obtain the frequency response corresponding to the second transfer function, then extract the frequency domain features corresponding to the fourth frequency interval from the frequency response, and determine the wearing state of the wearable device according to the frequency domain features of the second transfer function in the fourth frequency interval and the fourth frequency domain feature.
- the fourth frequency interval may be a lower frequency band within the audible range; specifically, it may be 20-300 Hz or a sub-interval thereof, or an interval wider than 20-300 Hz or a sub-interval thereof.
- the fourth frequency domain feature is preset in the processor 110 and is set, for judging the wearing state, according to the rules and characteristics of the frequency response of the transfer function of the wireless earphone 100 in the ear-in state and the ear-out state; specifically, the fourth frequency domain feature is also set based on the rules and characteristics of the low frequency band corresponding to the fourth audio signal.
- for several methods by which the processor 110 determines the wearing state of the wearable device according to the frequency domain features of the second transfer function in the fourth frequency interval and the fourth frequency domain feature, reference may be made to step S1022, which will not be repeated here.
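- extracting the frequency response of the second transfer function in the fourth frequency interval (taking 20-300 Hz as the example interval from above) can be sketched as follows; the sampling rate and function names are illustrative assumptions, not from the original:

```python
import numpy as np

def band_response(impulse_response, fs, f_lo=20.0, f_hi=300.0):
    """Fourier-transform a time-domain transfer function (impulse response)
    and keep only the magnitude response inside [f_lo, f_hi] Hz."""
    h = np.fft.rfft(np.asarray(impulse_response, dtype=float))
    freqs = np.fft.rfftfreq(len(impulse_response), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return freqs[mask], np.abs(h[mask])
```

The band magnitudes can then be compared with the preset fourth frequency domain feature, for example point by point as in the amplitude comparison described earlier.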
- the processor 110 may do nothing, that is, the wireless earphone 100 will continue to play the audio; the processor 110 may also send the first wearing indication information to the terminal device connected to the wireless earphone 100 through the communication module 160, where the first wearing indication information is used to indicate that the wireless earphone 100 is in the ear-in state.
- after receiving the first wearing indication information, the terminal device can perform various operations; for example, the current wearing state of the wireless earphone 100 may be recorded in memory as the ear-in state.
- the communication module 160 may send an audio stop-playing instruction to the terminal device connected to the wireless earphone 100; after receiving the audio stop-playing instruction, the terminal device can pause or stop the currently playing audio.
- the processor 110 may also send the second wearing indication information to the terminal device connected to the wireless earphone 100 through the communication module 160, where the second wearing indication information is used to indicate that the wireless earphone 100 is in the ear-out state.
- the terminal device can perform various operations. For example, the terminal device can stop playing audio, and can also output a prompt message to prompt the user whether to stop playing audio.
- after receiving the second wearing indication information, the terminal device may send the audio signal originally sent to the speaker 140 of the wireless earphone 100 to its own speaker for playback, and collect sound signals through its own microphone.
- the processor 110 may also start timing from the moment the wireless earphone 100 is determined to be in the ear-out state; if the timing exceeds a certain threshold and no indication is received during that period that the wearing state of the wireless earphone 100 has changed to the ear-in state, functional components such as the microphone 130 and the speaker 140 of the wireless earphone 100 can be turned off, putting the wireless earphone 100 in a standby state to save its power consumption.
- when a wake-up instruction is received or a sensor detects that the wireless earphone 100 is lifted, etc., the disabled functional components can be re-enabled, and specific operations can then be performed according to the specific scenario.
- the processor 110 does not rely on data from multiple sensors for wearing detection, but instead multiplexes the microphone 130 and speaker 140 already present in the wearable device, using the relationship between the audio signal output by the speaker 140 and the audio signal collected by the microphone 130, as well as the characteristics of the audio signals picked up by the wearable device in the ear-in and ear-out states, to determine the wearing state of the wearable device and further cooperate with the connected terminal device on follow-up operations; this reduces sensor stacking and increases the design flexibility of wearable devices.
- the wearable device distinguishes between the case of not playing audio and the case of playing audio.
- the processor 110 triggers the output of the second audio signal and the subsequent wearing detection steps only when the first audio signal is received, which avoids the power consumption that would be caused by keeping the speaker 140 on and continuously outputting the second audio signal during continuous detection; in the case of playing audio, the processor 110 directly uses the audio being played to calculate the transfer function and does not need to output the second audio signal, which also reduces unnecessary power consumption and signal processing.
- accidental touch by the user may also trigger the microphone 130 of the wireless earphone 100 to pick up a sound signal and trigger the processor 110 to further perform wearing detection. If the surrounding environment produced by the user's accidental touch is similar to the human ear environment, the ear-in state may be detected at this time; otherwise, the ear-out state may be detected. In such cases, the wearing state of the wearable device may be misidentified. Therefore, in order to further improve the recognition accuracy of wearing detection and avoid misrecognition caused by accidental touch by the user, on the basis of the embodiment shown in FIG. 3 or FIG.
- the first audio signal may be analyzed first to determine whether the signal feature of the first audio signal satisfies the first wearing detection entry condition.
- if the condition is satisfied, the subsequent wearing detection step is further performed, so that detection accuracy is further improved through the secondary detection.
- if the condition is not satisfied, the subsequent wearing detection steps are not performed, which saves the power consumed by the speaker output and by the processor's signal processing and analysis in subsequent wearing detection, reducing energy consumption.
- FIG. 8 is a schematic flowchart of another wearing detection method provided by an embodiment of the present application. The following describes another wearing detection method in detail in conjunction with FIG. 8 .
- the processor 110 determines whether the wearable device is playing audio, if not playing audio, execute S201; if it is playing audio, execute S204-S206.
- the Fb-mic 131 will receive the sound signal generated by the contact, convert it into the first audio signal and send it to the processor 110; after the processor 110 acquires the first audio signal, it judges whether the signal feature of the first audio signal satisfies the first wearing detection entry condition.
- the signal features may include various parameters, functions, or graphs that characterize the signal features, and specifically include various time-domain features and frequency-domain features obtained through time-domain and frequency-domain analysis.
- the first wearing detection entry condition is set based on the regularity of the audio signals generated by the user touching the wireless earphone 100 when the earphone enters and exits the ear.
- the signal feature may be a spectrum feature
- S201 may specifically include:
- the processor 110 acquires spectral features of the first audio signal in the first frequency range.
- the processor 110 determines a first cross-correlation coefficient between the spectral feature of the first audio signal in the first frequency interval and the first spectral feature.
- if the first cross-correlation coefficient reaches the first correlation threshold, the processor 110 determines that the signal feature of the first audio signal satisfies the first wearing detection entry condition and then executes S202-S203; otherwise, it determines that the signal feature of the first audio signal does not satisfy the first wearing detection entry condition, and S202-S203 are not executed.
- the processor 110 may perform frequency domain transformation (for example, Fourier transformation) on the first audio signal to obtain a frequency spectrum corresponding to the first audio signal, and then extract frequency spectrum features corresponding to the first frequency range in the frequency spectrum.
- FIG. 9 shows the frequency spectrum of the audio signal received by the Fb-mic 131 when the earphone is in the ear in various usage environment scenarios (such as quiet, noisy, and daily-life environments). It can be seen that the spectrum of the audio signal generated by the human ear contacting the Fb-mic 131 when the earphone is in the ear has good consistency in the low frequency band, that is, the audio signal spectra in the various scenarios converge in the low frequency band (the spectrum curves have similar trends), showing regularity.
- the spectra of various types of audio signals received by the Fb-mic 131 (such as the sound of human hands pinching the earphone, the sound of ocean waves, sounds in a forest, and the sound when the earphone enters the ear) are also shown. It can be seen that the spectrum of the audio signal when the earphone is in the ear is clearly distinguishable in the low frequency band from the audio signals of other sounds. The audio signal when the earphone exits the ear has similar characteristics; this application only takes some ear-in scenarios as examples.
- the first frequency interval can be a low-frequency interval set according to this characteristic; specifically, it can be 20-300 Hz or any sub-interval of 20-300 Hz, or an interval wider than 20-300 Hz or a sub-interval thereof, which is not specifically limited in this application.
- the first spectral feature is preset in the processor 110 and is a frequency feature set according to the rules and characteristics of the audio signals triggered when the wireless earphone 100 touches the human ear while entering and exiting the ear.
- the frequency interval of the first spectral feature is also similar to the first frequency interval.
- the spectral feature of the first audio signal in the first frequency interval may be the spectrum curve in the first frequency interval, and the first spectral feature is likewise a spectrum curve preset according to the characteristics of the audio signals generated when entering and exiting the ear.
- the processor 110 can calculate the first cross-correlation coefficient between the spectral feature of the first audio signal in the first frequency interval and the first spectral feature through a cross-correlation function, that is, determine the similarity between the two.
- the first correlation threshold may be specifically set according to specific situations, which is not specifically limited in this application.
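- the cross-correlation check against the first correlation threshold can be sketched as a normalized (mean-removed) correlation between the two curves; the 0.9 threshold below is purely illustrative, since the application leaves the first correlation threshold to the specific situation:

```python
import numpy as np

def cross_correlation_coefficient(feature, preset_feature):
    """Normalized cross-correlation between the measured spectral feature
    curve and the preset first spectral feature: 1.0 for identical shapes,
    near 0 for unrelated curves, negative for opposed trends."""
    a = np.asarray(feature, dtype=float) - np.mean(feature)
    b = np.asarray(preset_feature, dtype=float) - np.mean(preset_feature)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def meets_entry_condition(feature, preset_feature, threshold=0.9):
    # Entry condition holds when the coefficient reaches the threshold.
    return cross_correlation_coefficient(feature, preset_feature) >= threshold
```

The same comparison applies unchanged to the time-domain envelope check described later, just with a different preset curve and threshold.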
- noise reduction processing may be performed on the first audio signal first.
- after the frequency domain transformation is performed on the first audio signal, the result can also be converted from linear coordinates to logarithmic coordinates to reduce data redundancy, and the obtained spectrum curve can be smoothed, that is, multi-point mean removal can be applied, to obtain a smoother curve.
- the spectral feature of the first audio signal in the first frequency interval can also be normalized, so that the spectral feature of the first audio signal in the first frequency interval and the first spectral feature can be compared on the same scale.
- for the noise reduction, linear-to-logarithmic coordinate conversion, curve smoothing, and normalization processing, commonly used algorithms in the field of signal processing may be used, and this application will not repeat them here.
- the signal feature may be a time domain envelope
- S201 may specifically include:
- the processor 110 extracts a time-domain envelope of the first audio signal.
- the processor 110 determines a second correlation coefficient between the time-domain envelope of the first audio signal and the first time-domain envelope.
- if the second cross-correlation coefficient reaches the second correlation threshold, the processor 110 determines that the signal feature of the first audio signal satisfies the first wearing detection entry condition and then executes S202-S203; otherwise, it determines that the signal feature of the first audio signal does not satisfy the first wearing detection entry condition, and S202-S203 are not executed.
- FIG. 11 shows the time-domain envelope corresponding to the first audio signal shown in FIG. 3
- the processor 110 can divide the first audio signal into multiple segments, extract the maximum amplitude of each segment, and then connect the maximum amplitudes corresponding to the segments to form an envelope curve in the time domain.
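- the segment-maximum envelope extraction just described can be sketched as follows (the segment count is an illustrative choice, not specified by the original):

```python
import numpy as np

def time_domain_envelope(signal, num_segments=32):
    """Divide the signal into segments, take the maximum absolute amplitude
    of each segment, and connect those maxima into a time-domain envelope."""
    x = np.abs(np.asarray(signal, dtype=float))
    segments = np.array_split(x, num_segments)  # segment lengths may differ by 1
    return np.array([seg.max() for seg in segments])
```

The resulting envelope can then be correlated against the preset first time-domain envelope in the same way as the spectral curves.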
- it can be seen that, compared with other noises, the audio signal when the earphone is in the ear still has a relatively high degree of discrimination.
- the first audio signals in the four scenes are also very similar and consistent.
- the audio signal when the earphone exits the ear has similar characteristics; this application only takes some ear-in scenarios as examples. Therefore, according to the characteristics of the audio signals generated by the human ear contacting the Fb-mic 131 when the earphone enters and exits the ear, in different scenarios the time-domain envelope of the audio signal at ear-in and ear-out is clearly distinguishable from the envelopes of other noises and shows good consistency, and can therefore be used as a feature for judging whether the signal feature of the first audio signal satisfies the first wearing detection entry condition. It should be noted that, in order to obtain more accurate results, noise reduction processing may also be performed on the first audio signal before extracting its time-domain envelope.
- the first time-domain envelope is preset in the processor 110 and is a time-domain envelope curve set according to the rules and characteristics of the audio signals triggered when the wireless earphone 100 touches the human ear while entering and exiting the ear.
- the processor 110 can calculate the second cross-correlation coefficient between the time-domain envelope of the first audio signal and the first time-domain envelope through a cross-correlation function, that is, determine the similarity between the two. If the second cross-correlation coefficient reaches the second correlation threshold, it is determined that the signal feature of the first audio signal satisfies the first wearing detection entry condition; otherwise, it is determined that the signal feature of the first audio signal does not satisfy the first wearing detection entry condition.
- the second correlation threshold may be specifically set according to specific situations, which is not specifically limited in this application.
- the processor 110 may combine the above two methods and judge the signal features in both the frequency domain and the time domain: as long as either the frequency domain dimension or the time domain dimension satisfies its corresponding condition, the processor 110 determines that the signal feature of the first audio signal satisfies the first wearing detection entry condition; only when neither the frequency domain dimension nor the time domain dimension meets its corresponding condition does the processor 110 determine that the signal feature of the first audio signal does not satisfy the first wearing detection entry condition. That is to say, the processor 110 may execute both steps S2011-S2013 and steps S2014-S2016, and as long as either execution result indicates that the first wearing detection entry condition is satisfied, the signal feature of the first audio signal is determined to satisfy the first wearing detection entry condition.
- the advantage of this is that it avoids an incorrect detection in a single dimension (frequency domain or time domain) leading to misidentification of a change in the wearing state.
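- the OR-combination of the frequency domain and time domain judgments described above can be expressed compactly; the threshold values are placeholders, since the application leaves them to the specific situation:

```python
def entry_condition_met(freq_corr, time_corr,
                        freq_threshold=0.9, time_threshold=0.9):
    """The first wearing detection entry condition holds if EITHER the
    frequency-domain cross-correlation or the time-domain cross-correlation
    reaches its threshold; it fails only when neither dimension does."""
    return (freq_corr >= freq_threshold) or (time_corr >= time_threshold)
```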
- the embodiment of the present application can also adopt other methods, or a combination of various methods, as long as it can judge, based on time-frequency domain information similar to the above, whether the signal matches the preset time-frequency domain information of an audio signal having ear-in or ear-out characteristics.
- the signal feature can also be the maximum amplitude in the frequency spectrum corresponding to the first frequency interval; the corresponding first spectral feature is then a preset amplitude threshold, and whether the first wearing detection entry condition is satisfied is determined by comparing the maximum amplitude with this threshold.
- the processor 110 may first judge whether the signal amplitude of the first audio signal reaches the first signal amplitude threshold; if the first signal amplitude threshold is reached, the subsequent steps are further performed; otherwise, the subsequent steps are not performed.
- the processor 110 outputs the second audio signal through the speaker 140 and acquires a third audio signal collected by the microphone 130 .
- the processor 110 acquires signal features of the third audio signal, and determines a wearing state of the wearable device according to the signal features of the third audio signal.
- the implementation of S202-S203 is similar to that of S101 and S1023 in the above embodiment; therefore, for the specific implementation, reference may be made to the corresponding descriptions of S101 and S1023, which will not be repeated here. It should be noted that S203 here is only an exemplary implementation, and S203 in the embodiment of the present application may also be implemented in the manner of S1021-S1022 in FIG. 3.
- the processor 110 acquires a fifth audio signal collected by the microphone 130.
- the processor 110 determines a second transfer function between the microphone 130 and the speaker 140 according to the fourth audio signal and the fifth audio signal being played.
- the processor 110 acquires frequency domain information of the second transfer function, and determines a wearing state of the wearable device according to the frequency domain information of the second transfer function.
- the implementation of S204-S206 is similar to that of S103-S105 in the embodiment shown in FIG. 4; therefore, for the specific implementation, reference may be made to the corresponding descriptions of S103-S105, which will not be repeated here.
- the wireless earphone 100 determines whether the trigger signal satisfies the first wearing detection entry condition, and only when the first wearing detection entry condition is met does it further determine the wearing state of the wearable device. That is to say, on the one hand, when the first wearing detection entry condition is met, the embodiment of the present application detects, in different ways, the audio signals generated in two different scenarios, namely the first audio signal and the third audio signal.
- the dual detection of two audio signals can improve the accuracy of wearing detection to a greater extent and reduce the false detection rate.
- if the first wearing detection entry condition is not satisfied, it means that the wireless earphone 100 may merely have been accidentally touched; at this time, the subsequent wearing detection steps need not be performed and the speaker is not turned on to output audio signals, which can save power consumption.
- FIG. 13 is a schematic flowchart of another wearing detection method provided by an embodiment of the present application.
- the wireless earphone 100 may include various sensors 170; therefore, the processor 110 may use the sensing data collected by the sensors 170 in the wireless earphone 100 and analyze the sensing data to determine whether the subsequent wearing detection steps need to be performed. This is equivalent to improving the accuracy of wearing detection through two detections, one based on sensing data and one based on audio signals, while also saving power consumption.
- the embodiment of the present application can be applied to a wireless earphone 100 already equipped with various sensors 170; that is to say, if the wireless earphone 100 itself must be equipped with certain sensors 170 to achieve other functions, the sensor data can be reused for the wearing detection entry condition judgment. Another wearing detection method will be described in detail below with reference to FIG. 13.
- the processor 110 determines whether the wearable device is playing audio, if not playing audio, execute S301; if it is playing audio, execute S304-S306.
- the processor 110 judges whether the sensing data collected by the sensor meets the second wearing detection entry condition, and if the sensing data meets the second wearing detection entry condition, execute S302-S303.
- the sensor 170 in the wireless earphone 100 can collect sensing data in real time, and when the state of the wireless earphone 100 changes, the sensing data detected by the sensor 170 will also change. It can be understood that when the user puts the earphone into or takes it out of the ear, the sensing data collected by the sensor 170 of the wireless earphone 100 will change; this characteristic can be used to judge whether the sensing data meets the second wearing detection entry condition.
- the sensor 170 may include a proximity sensor, and the processor 110 may determine whether an object approaches or moves away from the wearable device according to the sensing data collected by the proximity sensor.
- the proximity sensor can be a capacitive proximity sensor 173, which detects whether a specific substance is approaching or moving away by detecting changes in capacitance; the proximity sensor may also use a light signal to determine whether an object is approaching or moving away.
- when the wireless earphone 100 starts to enter the ear, it gradually approaches the human ear, face, and other parts; therefore, if the proximity sensor determines that the wearable device is currently approaching a specific object, it can be determined that the sensing data meets the second wearing detection entry condition. When the wireless earphone 100 starts to exit the ear, it gradually moves away from the human ear, face, and other parts; therefore, if the proximity sensor determines that the wearable device is currently moving away from a specific object, it can also be determined that the sensing data meets the second wearing detection entry condition.
- the sensing data of the acceleration sensor 171 can also be combined: if the wireless earphone 100 is in a lifted state and is approaching a specific object, it can be determined that the sensing data meets the second wearing detection entry condition; or, if according to the sensing data of the acceleration sensor the wireless earphone 100 is in a falling state and is moving away from the specific object, it can also be determined that the sensing data satisfies the second wearing detection entry condition.
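- combining the proximity and acceleration readings into the second wearing detection entry condition can be sketched with boolean flags (the flag names are illustrative; real firmware would derive them from the raw sensor data):

```python
def sensor_entry_condition(approaching, moving_away, lifted, falling):
    """Earphone lifted AND approaching an object suggests a possible ear-in;
    earphone falling AND moving away suggests a possible ear-out. Either
    case triggers the subsequent audio-based wearing detection."""
    return (lifted and approaching) or (falling and moving_away)
```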
- the processor 110 then performs a second detection through the subsequent wearing detection method to more accurately determine the wearing state of the wireless earphone 100, while also avoiding the power consumption caused by continuously outputting the second audio signal during continuous detection.
- the processor 110 outputs the second audio signal through the speaker 140 and acquires a third audio signal collected by the microphone 130 .
- the processor 110 acquires signal features of the third audio signal, and determines a wearing state of the wearable device according to the signal features of the third audio signal.
- S302-S303 are similar to the implementation manners of S101 and S1023 in the foregoing embodiment, therefore, for specific implementation manners, reference may be made to the corresponding positions of S101 and S1023, and details are not repeated here. It should be noted that S303 here is only an exemplary implementation manner, and S303 in the embodiment of the present application may also be implemented in the manner of S1021-S1022 in FIG. 3 .
- the processor 110 acquires a fifth audio signal collected by the microphone 140 .
- the processor 110 determines a second transfer function between the microphone 130 and the speaker 140 according to the fourth audio signal and the fifth audio signal being played.
- the processor 110 acquires frequency domain information of the second transfer function, and determines a wearing state of the wearable device according to the frequency domain information of the second transfer function.
- The implementations of S304-S306 are similar to those of S103-S105 in the embodiment shown in FIG. 4; therefore, reference may be made to the corresponding descriptions of S103-S105 for specific implementations, and details are not repeated here.
- In this embodiment, the sensing data collected by existing sensors is reused to judge the wearing detection entry condition, so there is no need to monitor external signals through the microphone 130 and to analyze and process those signals, while the effect of secondary detection can still be achieved to improve the accuracy of wearing detection. That is to say, on the one hand, when the second wearing detection entry condition is satisfied, the embodiment of the present application performs detection on data generated in two different ways, namely the sensor data and the third audio signal; this double detection can improve the accuracy of wearing detection to a greater extent and reduce the false detection rate. On the other hand, if the second wearing detection entry condition is not met, it means that the wireless earphone 100 may merely have been accidentally touched or shaken; in that case the second audio signal is not output, which saves power.
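The two-stage gating described above, where a cheap sensor check gates the more expensive acoustic probe, might be structured as follows. This is purely an illustrative control-flow sketch; the event field names and callback signatures are assumptions, not part of the patent:

```python
def wearing_detection_cycle(sensor_event, emit_probe_and_capture, analyze_capture):
    """Hypothetical sketch of the double-detection flow: only when the sensing
    data satisfies the entry condition (stage 1) is the probe signal emitted
    and analyzed (stage 2); otherwise the speaker stays silent, saving power
    on accidental touches or shakes."""
    lifted_and_near = sensor_event.get("lifted") and sensor_event.get("near_object")
    falling_and_away = sensor_event.get("falling") and sensor_event.get("away_from_object")
    if not (lifted_and_near or falling_and_away):
        return "no_probe"  # entry condition not met: skip stage 2 entirely
    capture = emit_probe_and_capture()  # stage 2: play probe signal, record mic
    return "worn" if analyze_capture(capture) else "not_worn"
```

The design point is that an accidental shake fails stage 1 and never wakes the speaker, which is the power-saving behavior the paragraph above attributes to the entry condition.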
- the wearing detection methods in the above embodiments of the present application are all executed when the wearable device is powered on and not in a standby state; that is, the embodiments of the present application can be executed when components such as the microphone 130 and the speaker 140 are in a working state.
- In a state where the microphone 130 and the speaker 140 are not working, such as a power-off state or a standby state (also called a dormant state), other operations such as raise-to-wake, taking the earphone out of the box, or turning on the wearable device can trigger modules such as the microphone 130 and the speaker 140 to start working, so as to further implement the wearing detection solution of the embodiment of the present application.
- all or part of them may be implemented by software, hardware, firmware or any combination thereof.
- When implemented using software, it may be implemented in whole or in part in the form of a computer program product.
- the computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of the present application are generated in whole or in part.
- the computer can be a general purpose computer, a special purpose computer, a computer network, or other programmable devices.
- the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
- the computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or a data center integrated with one or more available media.
- the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, DVD), or a semiconductor medium (for example, a Solid State Disk (SSD)).
- the storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM) or a random access memory (Random Access Memory, RAM), etc.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Circuit For Audible Band Transducer (AREA)
- Indexing, Searching, Synchronizing, And The Amount Of Synchronization Travel Of Record Carriers (AREA)
Abstract
Description
Claims (36)
- A wearing detection method, characterized in that it is applied to a wearable device, the wearable device comprising a microphone and a speaker; the method comprising: when the wearable device is not playing audio and a first audio signal collected by the microphone is acquired, outputting a second audio signal through the speaker and acquiring a third audio signal collected by the microphone; and acquiring signal features of the third audio signal, and determining a wearing state of the wearable device according to the signal features of the third audio signal.
- The method according to claim 1, characterized in that, before the outputting a second audio signal through the speaker and acquiring a third audio signal collected by the microphone, the method further comprises: determining that signal features of the first audio signal satisfy a wearing detection entry condition.
- The method according to claim 2, characterized in that the determining that signal features of the first audio signal satisfy a wearing detection entry condition comprises: acquiring spectral features of the first audio signal in a first frequency interval; determining a first cross-correlation coefficient between the spectral features of the first audio signal in the first frequency interval and first spectral features; and when the first cross-correlation coefficient reaches a first correlation threshold, determining that the signal features of the first audio signal satisfy the wearing detection entry condition.
- The method according to claim 3, characterized in that the first frequency interval comprises 20-300 Hz or any subinterval of 20-300 Hz.
- The method according to any one of claims 2-4, characterized in that the determining that signal features of the first audio signal satisfy a wearing detection entry condition comprises: extracting a time-domain envelope of the first audio signal; determining a second cross-correlation coefficient between the time-domain envelope of the first audio signal and a first time-domain envelope; and when the second cross-correlation coefficient reaches a second correlation threshold, determining that the signal features of the first audio signal satisfy the wearing detection entry condition.
- The method according to any one of claims 1-5, characterized in that, before the outputting a second audio signal through the speaker and acquiring a third audio signal collected by the microphone, the method further comprises: determining that a signal amplitude of the first audio signal reaches a first signal amplitude threshold.
- The method according to claim 6, characterized in that the determining that a signal amplitude of the first audio signal reaches a first signal amplitude threshold comprises: determining that an effective value of the first audio signal reaches a first amplitude threshold; or determining that an average amplitude of the first audio signal reaches a second amplitude threshold; or determining that a maximum amplitude of the first audio signal reaches a third amplitude threshold.
- The method according to any one of claims 1-7, characterized in that the acquiring signal features of the third audio signal and determining a wearing state of the wearable device according to the signal features of the third audio signal comprises: performing frequency-domain transformation on the third audio signal to acquire frequency-domain features of the third audio signal in a second frequency interval; and determining the wearing state of the wearable device according to the frequency-domain features of the third audio signal in the second frequency interval and first frequency-domain features.
- The method according to claim 8, characterized in that the first frequency-domain features comprise a third signal amplitude threshold; and the determining the wearing state of the wearable device according to the frequency-domain features of the third audio signal in the second frequency interval and the first frequency-domain features comprises: determining the wearing state of the wearable device according to a maximum amplitude of a frequency response of the third audio signal in the second frequency interval and the third signal amplitude threshold.
- The method according to claim 8, characterized in that the first frequency-domain features comprise a first frequency response; and the determining the wearing state of the wearable device according to the frequency-domain features of the third audio signal in the second frequency interval and the first frequency-domain features comprises: determining a third cross-correlation coefficient between the frequency response of the third audio signal in the second frequency interval and the first frequency response; and determining the wearing state of the wearable device according to the third cross-correlation coefficient and a third correlation threshold.
- The method according to any one of claims 8-10, characterized in that the second audio signal is an infrasound signal with a frequency range below 20 Hz, and the second frequency interval comprises 0-20 Hz or any subinterval of 0-20 Hz; or the second audio signal is an audible-range signal with a frequency range of 20 Hz-20 kHz, and the second frequency interval comprises 20-300 Hz or any subinterval of 20-300 Hz.
- The method according to any one of claims 1-11, characterized in that the method further comprises: when it is determined that the wearable device is playing audio, acquiring a fifth audio signal collected by the microphone; determining a transfer function between the microphone and the speaker according to a fourth audio signal being played and the fifth audio signal; and acquiring signal features of the transfer function, and determining the wearing state of the wearable device according to the signal features of the transfer function.
- The method according to claim 12, characterized in that the acquiring signal features of the transfer function and determining the wearing state of the wearable device according to the signal features of the transfer function comprises: performing frequency-domain transformation on the transfer function to acquire frequency-domain features of the transfer function in a third frequency interval; and determining the wearing state of the wearable device according to the frequency-domain features of the transfer function in the third frequency interval and second frequency-domain features.
- A wearing detection method, characterized in that it is applied to a wearable device, the wearable device comprising a microphone and a speaker; the method comprising: when the wearable device is not playing audio and a first audio signal collected by the microphone is acquired, outputting a second audio signal through the speaker and acquiring a third audio signal collected by the microphone; determining a transfer function between the microphone and the speaker according to the second audio signal and the third audio signal; and acquiring signal features of the transfer function, and determining a wearing state of the wearable device according to the signal features of the transfer function.
- A wearing detection method, characterized in that it is applied to a wearable device, the wearable device comprising a microphone, a speaker, and a sensor; the method comprising: when the wearable device is not playing audio and it is determined that sensing data collected by the sensor satisfies a wearing detection entry condition, outputting a first audio signal through the speaker and acquiring a second audio signal collected by the microphone; and acquiring signal features of the second audio signal, and determining a wearing state of the wearable device according to the signal features of the second audio signal.
- The method according to claim 15, characterized in that the sensor comprises a proximity sensor, and the determining that sensing data collected by the sensor satisfies a wearing detection entry condition comprises: when it is determined, according to the sensing data collected by the proximity sensor, that an object is approaching or moving away from the wearable device, determining that the sensing data collected by the sensor satisfies the wearing detection entry condition.
- A wearing detection method, characterized in that it is applied to a wearable device, the wearable device comprising a microphone, a speaker, and a sensor; the method comprising: when the wearable device is not playing audio and it is determined that sensing data collected by the sensor satisfies a wearing detection entry condition, outputting a first audio signal through the speaker and acquiring a second audio signal collected by the microphone; determining a transfer function between the microphone and the speaker according to the first audio signal and the second audio signal; and acquiring signal features of the transfer function, and determining a wearing state of the wearable device according to the signal features of the transfer function.
- A wearable device, characterized by comprising a microphone, a speaker, a memory, and a processor, wherein the microphone is configured to receive a sound signal and convert it into an audio signal; the speaker is configured to convert an audio signal into a sound signal for output; the memory is configured to store computer-readable instructions; and the processor is configured to read the computer-readable instructions to perform the following steps: when the wearable device is not playing audio and a first audio signal collected by the microphone is acquired, outputting a second audio signal through the speaker and acquiring a third audio signal collected by the microphone; and acquiring signal features of the third audio signal, and determining a wearing state of the wearable device according to the signal features of the third audio signal.
- The wearable device according to claim 18, characterized in that, before the outputting a second audio signal through the speaker and acquiring a third audio signal collected by the microphone, the processor is further configured to: determine that signal features of the first audio signal satisfy a wearing detection entry condition.
- The wearable device according to claim 19, characterized in that the processor is specifically configured to: acquire spectral features of the first audio signal in a first frequency interval; determine a first cross-correlation coefficient between the spectral features of the first audio signal in the first frequency interval and first spectral features; and when the first cross-correlation coefficient reaches a first correlation threshold, determine that the signal features of the first audio signal satisfy the wearing detection entry condition.
- The wearable device according to claim 20, characterized in that the first frequency interval comprises 20-300 Hz or any subinterval of 20-300 Hz.
- The wearable device according to any one of claims 19-21, characterized in that the processor is specifically configured to: extract a time-domain envelope of the first audio signal; determine a second cross-correlation coefficient between the time-domain envelope of the first audio signal and a first time-domain envelope; and when the second cross-correlation coefficient reaches a second correlation threshold, determine that the signal features of the first audio signal satisfy the wearing detection entry condition.
- The wearable device according to any one of claims 18-22, characterized in that, before the outputting a second audio signal through the speaker and acquiring a third audio signal collected by the microphone, the processor is further configured to: determine that a signal amplitude of the first audio signal reaches a first signal amplitude threshold.
- The wearable device according to claim 23, characterized in that the processor is specifically configured to: determine that an effective value of the first audio signal reaches a first amplitude threshold; or determine that an average amplitude of the first audio signal reaches a second amplitude threshold; or determine that a maximum amplitude of the first audio signal reaches a third amplitude threshold.
- The wearable device according to any one of claims 18-24, characterized in that the processor is specifically configured to: perform frequency-domain transformation on the third audio signal to acquire frequency-domain features of the third audio signal in a second frequency interval; and determine the wearing state of the wearable device according to the frequency-domain features of the third audio signal in the second frequency interval and first frequency-domain features.
- The wearable device according to claim 25, characterized in that the first frequency-domain features comprise a third signal amplitude threshold; and the processor is specifically configured to: determine the wearing state of the wearable device according to a maximum amplitude of a frequency response of the third audio signal in the second frequency interval and the third signal amplitude threshold.
- The wearable device according to claim 25, characterized in that the first frequency-domain features comprise a first frequency response; and the processor is specifically configured to: determine a third cross-correlation coefficient between the frequency response of the third audio signal in the second frequency interval and the first frequency response; and determine the wearing state of the wearable device according to the third cross-correlation coefficient and a third correlation threshold.
- The wearable device according to any one of claims 25-27, characterized in that the second audio signal is an infrasound signal with a frequency range below 20 Hz, and the second frequency interval comprises 0-20 Hz or any subinterval of 0-20 Hz; or the second audio signal is an audible-range signal with a frequency range of 20 Hz-20 kHz, and the second frequency interval comprises 20-300 Hz or any subinterval of 20-300 Hz.
- The wearable device according to any one of claims 18-28, characterized in that the processor is further configured to: when it is determined that the wearable device is playing audio, acquire a fifth audio signal collected by the microphone; determine a transfer function between the microphone and the speaker according to a fourth audio signal being played and the fifth audio signal; and acquire signal features of the transfer function, and determine the wearing state of the wearable device according to the signal features of the transfer function.
- The wearable device according to claim 28, characterized in that the processor is specifically configured to: perform frequency-domain transformation on the transfer function to acquire frequency-domain features of the transfer function in a third frequency interval; and determine the wearing state of the wearable device according to the frequency-domain features of the transfer function in the third frequency interval and second frequency-domain features.
- A wearable device, characterized by comprising a microphone, a speaker, a memory, and a processor, wherein the microphone is configured to receive a sound signal and convert it into an audio signal; the speaker is configured to convert an audio signal into a sound signal for output; the memory is configured to store computer-readable instructions; and the processor is configured to read the computer-readable instructions to perform the following steps: when the wearable device is not playing audio and a first audio signal collected by the microphone is acquired, outputting a second audio signal through the speaker and acquiring a third audio signal collected by the microphone; determining a transfer function between the microphone and the speaker according to the second audio signal and the third audio signal; and acquiring signal features of the transfer function, and determining a wearing state of the wearable device according to the signal features of the transfer function.
- A wearable device, characterized by comprising a microphone, a speaker, a sensor, a memory, and a processor, wherein the microphone is configured to receive a sound signal and convert it into an audio signal; the speaker is configured to convert an audio signal into a sound signal for output; the sensor is configured to collect sensing data; the memory is configured to store computer-readable instructions; and the processor is configured to read the computer-readable instructions to perform the following steps: when the wearable device is not playing audio and it is determined that the sensing data collected by the sensor satisfies a wearing detection entry condition, outputting a first audio signal through the speaker and acquiring a second audio signal collected by the microphone; and acquiring signal features of the second audio signal, and determining a wearing state of the wearable device according to the signal features of the second audio signal.
- The wearable device according to claim 32, characterized in that the sensor comprises a proximity sensor, and the processor is specifically configured to: when it is determined, according to the sensing data collected by the proximity sensor, that an object is approaching or moving away from the wearable device, determine that the sensing data collected by the sensor satisfies the wearing detection entry condition.
- A wearable device, characterized by comprising a microphone, a speaker, a sensor, a memory, and a processor, wherein the microphone is configured to receive a sound signal and convert it into an audio signal; the speaker is configured to convert an audio signal into a sound signal for output; the sensor is configured to collect sensing data; the memory is configured to store computer-readable instructions; and the processor is configured to read the computer-readable instructions to perform the following steps: when the wearable device is not playing audio and it is determined that the sensing data collected by the sensor satisfies a wearing detection entry condition, outputting a first audio signal through the speaker and acquiring a second audio signal collected by the microphone; determining a transfer function between the microphone and the speaker according to the first audio signal and the second audio signal; and acquiring signal features of the transfer function, and determining a wearing state of the wearable device according to the signal features of the transfer function.
- The wearable device according to any one of claims 18-34, characterized in that the microphone comprises a feedback microphone.
- A computer storage medium, characterized in that it stores computer-readable instructions, and when the computer-readable instructions are executed by a processor, the method according to any one of claims 1-17 is implemented.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US18/559,321 US20240152313A1 (en) | 2021-05-07 | 2022-05-06 | Wearing detection method, wearable device, and storage medium |
EP22798648.6A EP4294037A4 (en) | 2021-05-07 | 2022-05-06 | WEARING DETECTION METHOD, WEARABLE DEVICE AND STORAGE MEDIUM |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110496911.8 | 2021-05-07 | ||
CN202110496911.8A CN115314804A (zh) | 2021-05-07 | 2021-05-07 | Wearing detection method, wearable device and storage medium |
Publications (3)
Publication Number | Publication Date |
---|---|
WO2022233308A1 WO2022233308A1 (zh) | 2022-11-10 |
WO2022233308A9 true WO2022233308A9 (zh) | 2023-01-19 |
WO2022233308A8 WO2022233308A8 (zh) | 2023-11-02 |
Family
ID=83853489
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2022/091059 WO2022233308A1 (zh) | 2022-05-06 | Wearing detection method, wearable device and storage medium |
Country Status (4)
Country | Link |
---|---|
US (1) | US20240152313A1 (zh) |
EP (1) | EP4294037A4 (zh) |
CN (1) | CN115314804A (zh) |
WO (1) | WO2022233308A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115866473A (zh) * | 2022-12-20 | 2023-03-28 | 昆山联滔电子有限公司 | Earphone and earphone state detection method |
CN117319870B (zh) * | 2023-11-09 | 2024-05-17 | 深圳市通力科技开发有限公司 | Earphone wearing state detection method and apparatus, earphone, and storage medium |
CN117528333B (zh) * | 2024-01-05 | 2024-04-12 | 九音科技(南京)有限公司 | State detection method and apparatus for an ear-worn audio device, audio device, and medium |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9967647B2 (en) * | 2015-07-10 | 2018-05-08 | Avnera Corporation | Off-ear and on-ear headphone detection |
CN111988690B (zh) * | 2019-05-23 | 2023-06-27 | 小鸟创新(北京)科技有限公司 | Earphone wearing state detection method and apparatus, and earphone |
CN110677768A (zh) * | 2019-10-31 | 2020-01-10 | 歌尔科技有限公司 | Wireless earphone control method and apparatus, wireless earphone, and storage medium |
CN112013949A (zh) * | 2020-08-06 | 2020-12-01 | 歌尔科技有限公司 | Method and apparatus for determining earphone wearing state, and earphone |
CN111988692B (zh) * | 2020-08-07 | 2022-11-15 | 歌尔科技有限公司 | Earphone wearing state detection method and apparatus, earphone, and storage medium |
CN112272346B (zh) * | 2020-11-27 | 2023-01-24 | 歌尔科技有限公司 | In-ear detection method, earphone, and computer-readable storage medium |
-
2021
- 2021-05-07 CN CN202110496911.8A patent/CN115314804A/zh active Pending
-
2022
- 2022-05-06 EP EP22798648.6A patent/EP4294037A4/en active Pending
- 2022-05-06 WO PCT/CN2022/091059 patent/WO2022233308A1/zh active Application Filing
- 2022-05-06 US US18/559,321 patent/US20240152313A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN115314804A (zh) | 2022-11-08 |
US20240152313A1 (en) | 2024-05-09 |
WO2022233308A8 (zh) | 2023-11-02 |
WO2022233308A1 (zh) | 2022-11-10 |
EP4294037A1 (en) | 2023-12-20 |
EP4294037A4 (en) | 2024-07-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022233308A9 (zh) | Wearing detection method, wearable device and storage medium | |
US10381021B2 (en) | Robust feature extraction using differential zero-crossing counts | |
US9721560B2 (en) | Cloud based adaptive learning for distributed sensors | |
US9860626B2 (en) | On/off head detection of personal acoustic device | |
US9785706B2 (en) | Acoustic sound signature detection based on sparse features | |
US9412373B2 (en) | Adaptive environmental context sample and update for comparing speech recognition | |
WO2021114953A1 (zh) | Voice signal collection method and apparatus, electronic device, and storage medium | |
CN102172044B (zh) | Audio output control method and device | |
US9460720B2 (en) | Powering-up AFE and microcontroller after comparing analog and truncated sounds | |
CN108710615B (zh) | Translation method and related device | |
WO2020244257A1 (zh) | Voice wake-up method and system, electronic device, and computer-readable storage medium | |
US10551973B2 (en) | Method of controlling a mobile device | |
US10582290B2 (en) | Earpiece with tap functionality | |
US9558758B1 (en) | User feedback on microphone placement | |
US11297429B2 (en) | Proximity detection for wireless in-ear listening devices | |
CN113630708A (zh) | Method and apparatus for detecting earphone microphone anomalies, earphone kit, and storage medium | |
CN112259124B (zh) | Method for recognizing mouth-covering gestures during conversation based on audio frequency-domain features | |
GB2516075A (en) | Sensor input recognition | |
CN110806850A (zh) | Earphone, automatic volume adjustment control module and method therefor, and storage medium | |
WO2020019822A1 (zh) | Microphone hole blockage detection method and related product | |
EP4422205A1 (en) | Earphone control method, related system and storage medium | |
WO2019238061A1 (zh) | Method and device for recognizing user voice through human body vibration | |
GB2553040A (en) | Sensor input recognition | |
WO2022254834A1 (ja) | Signal processing device, signal processing method, and program | |
US20220230657A1 (en) | Voice control method and apparatus, chip, earphones, and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22798648 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022798648 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2022798648 Country of ref document: EP Effective date: 20230914 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 18559321 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |