WO2023185698A1 - A wearing detection method and related device - Google Patents

A wearing detection method and related device

Info

Publication number
WO2023185698A1
Authority
WO
WIPO (PCT)
Prior art keywords
head
electronic device
mounted device
ultrasonic wave
state
Prior art date
Application number
PCT/CN2023/083912
Other languages
English (en)
French (fr)
Inventor
唐舜尧
王贺
吴伟鑫
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2023185698A1 publication Critical patent/WO2023185698A1/zh


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 Details not covered by groups G06F 3/00 - G06F 13/00 and G06F 21/00
    • G06F 1/26 Power supply means, e.g. regulation thereof
    • G06F 1/32 Means for saving power
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object

Definitions

  • the present application relates to the field of terminal technology, and in particular, to a wearing detection method and related devices.
  • the head-mounted device market is becoming increasingly large.
  • the purpose of head-mounted devices is to explore new ways of human-computer interaction.
  • Smart devices provide consumers with exclusive, multi-functional, personalized and more convenient services by being worn on the human body.
  • without wearing detection, taking off the device does not change the working status of the headset. For example, when the headset is playing audio and the user takes off the headset, the device still continues to play the audio, which increases the energy consumption of the device.
  • This application provides a wearing detection method and related devices.
  • the head-mounted device can send ultrasonic waves through the speaker, receive the ultrasonic waves through the microphone, and determine the wearing status of the head-mounted device based on the sent ultrasonic waves and the received ultrasonic waves.
  • implementing the detection method provided by this application can perform wearing detection based on the existing components of the head-mounted device, saving costs.
  • since the head-mounted device adds no other components for wearing detection, its weight does not increase, which reduces the pressure the head-mounted device places on the user's cervical spine. Since ultrasonic waves cannot be heard by the human ear and do not cause harm to the human body, the wearing status of the head-mounted device can be detected without the user noticing.
  • embodiments of the present application provide a head-mounted device, including: a first speaker, a microphone, and a processor;
  • the first speaker is used to send the first ultrasonic wave; the microphone is used to receive the second ultrasonic wave, the second ultrasonic wave being at least a part of the first ultrasonic wave received by the microphone; when the difference between the amplitude of the first ultrasonic wave and the amplitude of the second ultrasonic wave is a first value, the head-mounted device is configured to be in a first state; when the difference between the amplitude of the first ultrasonic wave and the amplitude of the second ultrasonic wave is a second value, the head-mounted device is configured to be in a second state, wherein the first value and the second value are different.
  • the head-mounted device provided by the embodiments of the present application can automatically and more intelligently control its working mode based on the application scenario.
  • the first state is a worn state
  • the second state is a non-worn state
  • the power consumption of the head-mounted device in the first state is greater than the power consumption of the head-mounted device in the second state.
  • the head-mounted device is configured to be in the second state, and its energy consumption is reduced, which not only saves power and extends the battery life of the head-mounted device, but also does not affect the user's normal use of the headset
  • for example, when the head-mounted device is configured to be in the worn state, the interface of an application is displayed. When the head-mounted device is configured to be in the unworn state, the screen can be turned off (also called screen-off). Moreover, when the head-mounted device is configured to be in the worn state again, the screen can be turned on and continue to display the content shown before it was turned off.
  • the microphone is located in the first component
  • the first speaker is located in the second component
  • the first component and the second component are different
  • the first value is greater than the second value
  • the first value ranges from 40 dB to 100 dB, and/or the second value ranges from 0 dB to 40 dB.
  • the microphone and the first speaker are both located on the first component, and the first value is smaller than the second value.
  • the first value ranges from 0 dB to 40 dB, and/or the second value ranges from 40 dB to 100 dB.
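The threshold logic described in the bullets above can be sketched in a few lines. This is an illustrative sketch, not code from the patent: the function name, the dB inputs, and the single 40 dB boundary are assumptions; the claims only give the ranges (40 dB to 100 dB vs. 0 dB to 40 dB) and state that the comparison flips when the microphone and the first speaker sit on the same component.

```python
def classify_wearing_state(tx_amplitude_db, rx_amplitude_db,
                           same_component=False, threshold_db=40.0):
    """Classify the wearing state from the amplitude drop between the sent
    first ultrasonic wave and the received second ultrasonic wave.

    With the microphone and speaker on different components, the head blocks
    the acoustic path, so a LARGE drop (40-100 dB) means worn. With both on
    the same component, a SMALL drop (0-40 dB) means worn, per the ranges
    stated in the claims.
    """
    diff = tx_amplitude_db - rx_amplitude_db
    if same_component:
        return "worn" if diff < threshold_db else "not_worn"
    return "worn" if diff >= threshold_db else "not_worn"
```

A controller would feed this function with amplitude estimates from each probe burst and switch the device state when the classification changes.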
  • the microphone is also configured to receive, before the first speaker sends the first ultrasonic wave, a third ultrasonic wave sent by a source other than the first speaker; the first ultrasonic wave is configured to be different from the third ultrasonic wave.
  • because the head-mounted device acquires the third ultrasonic wave in the environment in advance and configures the first ultrasonic wave to be different from the third ultrasonic wave, when there are other electronic devices near the head-mounted device, the ultrasonic signals they emit will not affect the wearing detection function of the headset.
  • the first ultrasonic wave is configured to be different from the third ultrasonic wave, including: the first ultrasonic wave is configured to have a different frequency and/or a different duty cycle than the third ultrasonic wave.
  • the frequencies of the first ultrasonic wave and the third ultrasonic wave being different includes: the difference between the frequency of the first ultrasonic wave and the frequency of the third ultrasonic wave is greater than a first frequency difference. In this way, the third ultrasonic wave is not mistaken for the first ultrasonic wave, improving the accuracy of the detection results.
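One way to realize "differs by more than the first frequency difference" is to scan a candidate list against the ambient (third) ultrasonic frequencies. This is a hypothetical sketch: the candidate grid (21 kHz to 30.5 kHz in 500 Hz steps) and the 1 kHz separation stand in for the patent's unspecified "first frequency difference".

```python
def choose_probe_frequency(ambient_freqs_hz, candidates_hz=None,
                           min_separation_hz=1000.0):
    """Pick a frequency for the first ultrasonic wave that differs from every
    ambient (third) ultrasonic frequency by more than min_separation_hz, so
    another device's signal is not mistaken for the probe."""
    if candidates_hz is None:
        # Illustrative grid above the 20 kHz audibility limit
        candidates_hz = [21000.0 + 500.0 * i for i in range(20)]
    for f in candidates_hz:
        if all(abs(f - g) > min_separation_hz for g in ambient_freqs_hz):
            return f
    return None  # no sufficiently separated candidate found
```

The same scan could also vary the duty cycle, since the claims allow distinguishing the waves by frequency and/or duty cycle.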
  • the head-mounted device further includes a second speaker; the second speaker is configured to send an audible sound wave signal, and the frequency of the audible sound wave signal is different from the frequency of the first ultrasonic wave.
  • the head-mounted device includes multiple speakers, which can meet the user's needs (for example, listening to music, making phone calls, etc.) and at the same time detect the wearing status of the head-mounted device without the user being aware of it.
  • the frequency of the first ultrasonic wave is greater than 20,000 Hz, and the frequency of the audible sound wave signal is greater than 0 and less than or equal to 20,000 Hz.
  • the first speaker is specifically configured to send a first ultrasonic wave in a first time period; the second speaker is specifically configured to send an audible sound wave signal in a first time period.
  • the headset can transmit the first ultrasonic wave and the audible sound wave signal simultaneously.
  • the first speaker is specifically configured to send a first ultrasonic wave in a first time period and an audible sound wave signal in a second time period.
  • the frequency of the audible sound wave signal is different from the frequency of the first ultrasonic wave.
  • the first speaker is also used to send a first ultrasonic wave in a third time period and an audible sound wave signal in a fourth time period
  • the second time period is after the first time period
  • the third time period is after the second time period
  • the fourth time period is after the third time period.
  • the head-mounted device can transmit the first ultrasonic wave and the audible sound wave signal at alternating intervals. Due to the auditory persistence phenomenon, users perceive the audible sound wave signal played at intervals as continuous playback, so this does not affect the audio playback function of the head-mounted device.
  • the first time period and the second time period are sent at periodic intervals, the range of the first time period includes 5 ms to 15 ms, and the range of the second time period includes 20 ms to 40 ms.
  • the first time period includes the first 10 ms of every 33 ms during which the first speaker transmits, and the second time period includes the last 23 ms of the 33 ms
  • that is, the first speaker sends the first ultrasonic wave for 10 ms, then sends the audible sound wave signal for 23 ms, and the two are sent in this loop, which ensures transmission of the first ultrasonic wave without affecting the continuity of the audible sound wave signal perceived by the user.
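The 10 ms / 23 ms interleaving described above can be sketched as a schedule generator. The tuple representation and the function name are illustrative choices, not part of the patent:

```python
def build_tx_schedule(total_ms, probe_ms=10, audio_ms=23):
    """Build the single-speaker transmit schedule: each 33 ms period starts
    with probe_ms of the first ultrasonic wave, followed by audio_ms of the
    audible signal. Auditory persistence makes the gapped audio sound
    continuous to the user."""
    period = probe_ms + audio_ms  # 33 ms per the embodiment
    schedule = []
    t = 0
    while t + period <= total_ms:
        schedule.append(("ultrasonic", t, t + probe_ms))
        schedule.append(("audible", t + probe_ms, t + period))
        t += period
    return schedule
```

With the claimed ranges, probe_ms could be anywhere in 5 ms to 15 ms and audio_ms in 20 ms to 40 ms; the defaults match the 33 ms example.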
  • the first speaker is also configured to send a prefix signal before sending the first ultrasonic wave, and the prefix signal is used to identify the first ultrasonic wave. In this way, the head-mounted device can be facilitated to identify the first ultrasonic wave based on the prefix signal.
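A prefix signal can be identified with a simple normalized cross-correlation against the known prefix template. The patent does not specify a matching method, so the correlation approach and the 0.8 threshold here are assumptions for illustration:

```python
import math

def find_prefix(received, prefix, threshold=0.8):
    """Return the sample index just after the best match of the known prefix
    in the received signal (the first ultrasonic wave is then read from that
    index), or None if no window correlates above the threshold."""
    n = len(prefix)
    p_norm = math.sqrt(sum(x * x for x in prefix))
    best_i, best_c = None, threshold
    for i in range(len(received) - n + 1):
        window = received[i:i + n]
        w_norm = math.sqrt(sum(x * x for x in window))
        if w_norm == 0.0:
            continue  # a silent window cannot contain the prefix
        c = sum(a * b for a, b in zip(prefix, window)) / (p_norm * w_norm)
        if c > best_c:
            best_i, best_c = i + n, c
    return best_i
```

A production implementation would run this over short buffers with a vectorized correlation, but the logic is the same.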
  • the device type of the head-mounted device includes any of the following: smart glasses, head-mounted headphones, augmented reality (AR) glasses, virtual reality (VR) glasses, mixed reality (MR) glasses, and smart helmets.
  • the head-mounted device is glasses
  • the first component is the left temple
  • the second component is the right temple
  • the first component is the right temple
  • the second component is the left temple.
  • or the first component is the nose pads and the second component is the left temple, or the first component is the nose pads and the second component is the right temple.
  • the head-mounted device is AR glasses
  • the processor is further configured to play the first video before the head-mounted device is configured to be in the first state or the second state; to continue playing the first video after the head-mounted device is configured to be in the first state; and to pause playing the first video after the head-mounted device is configured to be in the second state.
  • in this way, the head-mounted device can be prevented from continuing to play the first video when it is not worn, reducing its power consumption, and when the head-mounted device switches to the worn state again, playback of the first video can resume from the part the user has not yet watched, improving the user experience.
  • the processor is also configured to play the first audio before the head-mounted device is configured to be in the first state or the second state; to continue playing the first audio after the head-mounted device is configured to be in the first state; and to pause playing the first audio after the head-mounted device is configured to be in the second state.
  • in this way, the head-mounted device is prevented from continuing to play the audio file when it is not worn, reducing its power consumption, and when the head-mounted device switches to the worn state again, it can resume playing the audio file from the part the user has not yet heard, improving the user experience.
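The pause-and-resume behavior for both video and audio amounts to a small state machine that remembers the playback position when the device leaves the worn state. This is an illustrative sketch; the class and method names are invented:

```python
class PlaybackController:
    """Pause media when the head-mounted device enters the non-worn state and
    resume from the saved position when it is worn again."""

    def __init__(self):
        self.playing = False
        self.position_s = 0.0  # where playback should resume from

    def play(self, position_s=0.0):
        self.playing = True
        self.position_s = position_s

    def on_wearing_state(self, worn, current_position_s):
        if not worn and self.playing:
            self.position_s = current_position_s  # remember the spot
            self.playing = False                  # pause playback
        elif worn and not self.playing:
            self.playing = True                   # resume from position_s
        return self.playing, self.position_s
```

Feeding each detection result into `on_wearing_state` gives exactly the behavior described: pause on take-off, resume from the unheard/unwatched part on re-wear.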
  • the head-mounted device further includes a proximity sensor, which includes a capacitive sensor and an inertial measurement unit; the proximity sensor is configured to notify the first speaker to send the first ultrasonic wave after detecting the user's operation of bringing the head-mounted device close.
  • embodiments of the present application provide a wearing detection method, applied to a head-mounted device including a microphone and a first speaker, wherein the method includes: the head-mounted device sends a first ultrasonic wave through the first speaker; the head-mounted device receives a second ultrasonic wave through the microphone, the second ultrasonic wave being at least part of the first ultrasonic wave received by the microphone; when the difference between the amplitude of the first ultrasonic wave and the amplitude of the second ultrasonic wave is a first value, the head-mounted device is configured to be in a first state; when the difference is a second value, the head-mounted device is configured to be in a second state, wherein the first value is different from the second value.
  • the first state is a worn state
  • the second state is a non-worn state
  • the power consumption of the head-mounted device in the first state is greater than the power consumption of the head-mounted device in the second state.
  • the microphone is located in the first component
  • the first speaker is located in the second component
  • the first component and the second component are different
  • the first value is greater than the second value
  • the first value ranges from 40 dB to 100 dB, and/or the second value ranges from 0 dB to 40 dB.
  • the microphone and the first speaker are both located on the first component, and the first value is smaller than the second value.
  • the first value ranges from 0 dB to 40 dB, and/or the second value ranges from 40 dB to 100 dB.
  • before the head-mounted device sends the first ultrasonic wave through the first speaker, the method further includes: the head-mounted device receives, through the microphone, a third ultrasonic wave sent by a source other than the first speaker, and the first ultrasonic wave is configured to be different from the third ultrasonic wave.
  • the first ultrasonic wave is configured to be different from the third ultrasonic wave, including: the first ultrasonic wave is configured to have a different frequency and/or a different duty cycle than the third ultrasonic wave.
  • the frequency of the first ultrasonic wave and the third ultrasonic wave are different, including: the difference between the frequency of the first ultrasonic wave and the frequency of the third ultrasonic wave is greater than the first frequency difference.
  • the head-mounted device further includes a second speaker; the method further includes: the head-mounted device sends an audible sound wave signal through the second speaker, and the frequency of the audible sound wave signal is different from the frequency of the first ultrasonic wave. .
  • the method further includes: the head-mounted device sends the first ultrasonic wave through the first speaker within a first time period, and the head-mounted device sends an audible sound wave signal through the second speaker within the first time period.
  • the head-mounted device sends the first ultrasonic wave through the first speaker, which specifically includes: the head-mounted device sends the first ultrasonic wave through the first speaker within a first time period, and sends an audible sound wave signal through the first speaker within a second time period.
  • the head-mounted device sends the first ultrasonic wave through the first speaker, which specifically includes: the head-mounted device sends the first ultrasonic wave through the first speaker within a first time period; the head-mounted device sends the audible sound wave signal through the first speaker within a second time period; the head-mounted device sends the first ultrasonic wave through the first speaker within a third time period; and the head-mounted device sends the audible sound wave signal through the first speaker within a fourth time period.
  • the second time period is after the first time period
  • the third time period is after the second time period
  • the fourth time period is after the third time period.
  • the first time period and the second time period are sent at periodic intervals, the range of the first time period includes 5 ms to 15 ms, and the range of the second time period includes 20 ms to 40 ms.
  • the first time period includes the first 10 ms of every 33 ms during which the first speaker sends the first ultrasonic wave, and the second time period includes the last 23 ms of the 33 ms.
  • before the head-mounted device sends the first ultrasonic wave through the first speaker, the method further includes: the head-mounted device sends a prefix signal through the first speaker, where the prefix signal is used to identify the first ultrasonic wave.
  • the device type of the head-mounted device includes any of the following: smart glasses, head-mounted headphones, augmented reality (AR) glasses, virtual reality (VR) glasses, mixed reality (MR) glasses, and smart helmets.
  • the head-mounted device is glasses
  • the first component is the left temple
  • the second component is the right temple
  • the first component is the right temple
  • the second component is the left temple.
  • or the first part is the nose pads and the second part is the left temple, or the first part is the nose pads and the second part is the right temple.
  • the head-mounted device is AR glasses
  • the method further includes: before the head-mounted device is configured to be in the first state or the second state, the head-mounted device plays the first video; after the head-mounted device is configured to be in the first state, the head-mounted device continues to play the first video; after the head-mounted device is configured to be in the second state, the head-mounted device pauses playing the first video.
  • the method further includes: before the head-mounted device is configured to be in the first state or the second state, the head-mounted device plays the first audio; after the head-mounted device is configured to be in the first state, the head-mounted device continues to play the first audio; after the head-mounted device is configured to be in the second state, the head-mounted device pauses playing the first audio.
  • the head-mounted device further includes a proximity sensor, and the proximity sensor includes a capacitive sensor and an inertial measurement unit; before the head-mounted device sends the first ultrasonic wave through the first speaker, the method further includes: after detecting, through the proximity sensor, the user's operation of bringing the head-mounted device close, the head-mounted device notifies the first speaker to send the first ultrasonic wave.
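The proximity-sensor gating can be sketched as a small arming state machine: probing starts only after the proximity sensor fires, and stops after several consecutive not-worn results to save power. The debounce count and the class itself are illustrative assumptions not stated in the patent:

```python
class ProximityGate:
    """Arm the ultrasonic probe when the proximity sensor (capacitive sensor
    plus inertial measurement unit) reports the device approaching the user;
    disarm after repeated not-worn results to save power."""

    def __init__(self, confirm_count=3):
        self.armed = False
        self.not_worn_streak = 0
        self.confirm_count = confirm_count  # debounce, illustrative value

    def on_proximity(self):
        self.armed = True  # start sending the first ultrasonic wave
        self.not_worn_streak = 0

    def on_probe_result(self, worn):
        if worn:
            self.not_worn_streak = 0
        else:
            self.not_worn_streak += 1
            if self.not_worn_streak >= self.confirm_count:
                self.armed = False  # stop probing until the next approach
        return self.armed
```

The debounce avoids disarming on a single noisy detection while still cutting the probe off soon after the device is set down.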
  • embodiments of the present application provide a computer storage medium in which a computer program is stored.
  • the computer program includes executable instructions, which when executed by the processor cause the processor to perform operations corresponding to the wearing detection method provided in the second aspect.
  • embodiments of the present application provide a computer program product.
  • when the computer program product is run on a head-mounted device, it causes the head-mounted device to execute the implementation of the second aspect.
  • Figure 1 is a schematic structural diagram of an electronic device 100 provided by an embodiment of the present application.
  • Figure 2 is a schematic diagram of the hardware structure of the electronic device 100 provided by the embodiment of the present application.
  • Figure 3 is a schematic diagram of a wearing state of an electronic device 100 provided by an embodiment of the present application.
  • Figure 4 is a schematic flow chart of a wearing detection method provided by an embodiment of the present application.
  • Figure 5 is a schematic waveform diagram of an ultrasonic signal provided by an embodiment of the present application.
  • Figure 6 is a schematic flowchart of an electronic device 100 determining a detection signal according to an embodiment of the present application
  • Figure 7 is a schematic diagram of a prefix signal provided by an embodiment of the present application.
  • Figure 8A is a time domain schematic diagram of a detection signal provided by an embodiment of the present application.
  • Figure 8B is a frequency domain schematic diagram of a detection signal provided by an embodiment of the present application.
  • Figure 8C is a frequency-amplitude diagram of an acoustic signal provided by an embodiment of the present application.
  • Figure 9 is a schematic diagram of the distribution of microphones and speakers provided by an embodiment of the present application.
  • Figure 10 is a schematic diagram of a wearing state of another electronic device 100 provided by an embodiment of the present application.
  • Figure 11 is a schematic flow chart of a wearing detection method provided by an embodiment of the present application.
  • Figure 12 is a schematic diagram of the distribution of microphones and speakers provided by an embodiment of the present application.
  • Figure 13 is a schematic diagram of the distribution of microphones and speakers provided by an embodiment of the present application.
  • Figure 14 is a schematic diagram of an application scenario provided by the embodiment of the present application.
  • the terms “first” and “second” are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features. Therefore, a feature defined as “first” or “second” may explicitly or implicitly include one or more of that feature.
  • “plurality” refers to two or more than two.
  • references in this specification to “one embodiment” or “some embodiments” and the like mean that one or more embodiments of this application include the specific features, structures, or characteristics described in connection with that embodiment. Therefore, the phrases “in one embodiment”, “in some embodiments”, “in other embodiments”, etc., appearing in different places in this specification do not necessarily refer to the same embodiment, but rather mean “one or more but not all embodiments”, unless specifically stated otherwise.
  • the terms “including,” “includes,” “having,” and variations thereof all mean “including but not limited to,” unless otherwise specifically emphasized.
  • This application provides a wearing detection method, which is applied to a head-mounted device including a microphone, a processor and a speaker.
  • the head-mounted device may send a first ultrasonic wave through a speaker and receive a second ultrasonic wave through a microphone, where the second ultrasonic wave is at least a part of the first ultrasonic wave received by the microphone.
  • when the difference between the amplitude of the first ultrasonic wave and the amplitude of the second ultrasonic wave is the first value, the head-mounted device is configured to be in the first state.
  • when the difference between the amplitude of the first ultrasonic wave and the amplitude of the second ultrasonic wave is the second value, the head-mounted device is configured to be in the second state.
  • the first value and the second value are different.
  • the first state may be a worn state
  • the second state may be a non-worn state (also called an unworn state).
  • the power consumption of the head-mounted device in the first state is greater than the power consumption of the head-mounted device in the second state.
  • the working mode of the head-mounted device may include a first mode and a second mode, and the first mode and the second mode are different.
  • the power consumption of the head-mounted device in the first mode is greater than the power consumption of the head-mounted device in the second mode.
  • the head-mounted device can stop running background programs (including background refreshing and downloading), pause audio playback, reduce the volume of played audio, disconnect communication connections with other electronic devices, and so on.
  • when the head-mounted device is configured in the first state, it operates in the first mode.
  • when the head-mounted device is configured in the second state, it operates in the second mode.
  • in this way, the existing components of the head-mounted device can be used for wearing detection, saving costs; and since the head-mounted device adds no other components for wearing detection, its weight does not increase, which reduces the pressure the head-mounted device places on the user's cervical spine. Since ultrasonic waves cannot be heard by the human ear and do not harm the human body, the wearing status of the head-mounted device can be detected without the user noticing.
  • head-mounted devices that apply the wearing detection method provided by this application can automatically and more intelligently control their working mode based on the application scenario.
  • the head-mounted device can automatically switch between the first mode and the second mode based on the detected wearing state to reduce its energy consumption. For example, when the head-mounted device is in the worn state, an audio file is played; when the head-mounted device detects that it switches from the worn state to the unworn state, playback of the audio file can be paused automatically. Moreover, when the head-mounted device detects that it switches from the unworn state back to the worn state, playback of the audio file can resume automatically. In this way, the device does not continue playing the audio file when it is not worn, reducing its power consumption.
  • moreover, when the head-mounted device switches to the worn state again, it can resume playing the audio file from the part the user has not yet heard, improving the user experience.
  • the application interface is displayed.
  • the screen can be turned off (also called screen-off).
  • the screen can be turned on and continue to display the content shown before it was turned off.
  • the head-mounted device involved in the embodiments of the present application can be glasses that include a speaker, a microphone, and a processor, such as smart glasses, worn on the user's head; in addition to the optical correction, light-adjustment, or decorative functions of ordinary glasses, it can also have communication functions.
  • the head-mounted device can establish a communication connection with other electronic devices (such as mobile phones, computers, etc.), and the communication connection can include wired connections and wireless connections.
  • the wireless connection can be a short-distance transmission technology such as a wireless fidelity (Wi-Fi) connection or a Bluetooth (bluetooth) connection.
  • the wired connection can be a universal serial bus (USB) connection, a high-definition multimedia interface (HDMI) connection, etc.; this embodiment does not limit the type of communication connection.
  • the head-mounted device can transmit data to other electronic devices through these communication connections. For example, when a communication connection is established between the head-mounted device and a communication device, and the communication device is on a call with another communication device, the call can be answered through the head-mounted device.
  • the head-mounted device can be inserted into a chip provided by a mobile operator (for example, a subscriber identity module (SIM) card), and the chip can be used to answer and make calls, and so on.
  • SIM subscriber identity module
  • the head-mounted device involved in the embodiments of the present application can also be another head-mounted device, for example, a head-mounted device using technologies such as augmented reality (AR), virtual reality (VR), or mixed reality (MR), a smart helmet, or head-mounted headphones; the embodiments of the present application do not limit this.
  • AR augmented reality
  • VR virtual reality
  • MR mixed reality
  • based on the wearing detection method, the head-mounted device can pause/stop the task it is performing when it detects that the user is not wearing it. For example, when the user is on a call and the device detects the user taking off the headset, the call can be paused, and so on.
  • FIG. 1 shows a schematic structural diagram of an electronic device 100 .
  • the electronic device 100 is exemplified as glasses including a microphone and a speaker.
  • the electronic device 100 may include a glasses body 101 and a microphone 106 , a speaker 107 , a processor (not shown), and the like provided on the glasses body.
  • the glasses body 101 may include temples 102, glasses frames 103, a display device 104 and nose pads 105.
  • the display device 104 is embedded in the glasses frame 103 .
  • the temples 102 are used to support the user to wear the electronic device 100 on the head.
  • the glasses frame 103 includes two rims, and the temples 102 include two temples respectively attached at the rear of the two rims
  • the nose pads 105 are arranged in the middle of the two spectacle frames.
  • the display device 104 is used for the user to view real-world objects and/or virtual images.
  • the display device 104 may be a transparent lens or a lens of other colors, a spectacle lens with an optical correction function, a lens with an adjustable filter function, sunglasses or other lenses with decorative effects.
  • the display device 104 may also be a display screen or a projection device that may generate optical signals and map the optical signals to the user's eyes. This embodiment does not limit the type of the display device 104.
  • in some embodiments, there may be no display device 104; that is, the glasses body 101 only includes the temples 102, the glasses frames 103 and the nose pads 105.
  • in some embodiments, when the head-mounted device is AR glasses, the display device 104 includes both glasses lenses and a display screen or projection device; in other embodiments, when the head-mounted device is VR glasses, the display device 104 is a display screen.
  • the microphone 106 is provided on the glasses body 101.
  • the microphone 106 can be provided on the temples 102 or the nose pads 105.
  • the microphone 106 is used to collect sound signals, such as the user's voice information.
  • the electronic device 100 can collect the user's voice information through the microphone 106, and analyze and generate corresponding control instructions. Alternatively, the electronic device 100 can collect the user's voice information through the microphone 106 and send it to other electronic devices for voice communication.
  • the speaker 107 is provided on the glasses body 101.
  • the speaker 107 can be provided on the temples 102 of the glasses.
  • Speaker 107 may be used to play audio.
  • a processor may be used to interpret signals or generate instructions, as well as process data, coordinate scheduling processes, etc.
  • the speaker 107 can be used to play ultrasonic waves
  • the microphone 106 can be used to collect the ultrasonic waves and send the collection results to the processor.
  • the processor can determine whether the user is wearing the electronic device 100 based on the ultrasonic signal collected by the microphone 106 .
  • FIG. 2 is a schematic diagram of the hardware structure of the electronic device 100 provided by the embodiment of the present application.
  • FIG. 2 takes the electronic device 100 as smart glasses as an example for illustration.
  • the embodiment of the present application does not place any restrictions on the specific type of the electronic device 100 .
  • when the electronic device 100 is another electronic device, such as VR/AR/MR glasses, a headset or another head-mounted device, part of the hardware structure can be added or removed.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, an interface 130, a charging management module 140, a power management module 141, a battery 142, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a sensor module 180, a motor 191, an indicator 192, a camera 193, a display device 194, a SIM card interface 196, etc.
  • the sensor module 180 may include a pressure sensor 180A, a touch sensor 180B, an inertial measurement unit 180C, and the like.
  • the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figures, or some components may be combined, some components may be separated, or some components may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 is generally used to control the overall operation of the electronic device 100 and may include one or more processing units.
  • the processor 110 may include a central processing unit (CPU), an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a video processing unit (VPU), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units can be independent devices or integrated in one or more processors.
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital video.
  • Electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • NPU is a neural network (NN) computing processor.
  • Intelligent cognitive applications of the electronic device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, etc.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in processor 110 is cache memory. This memory may hold instructions or data that have been recently used or recycled by processor 110 . If the processor 110 needs to use the instructions or data again, it can be called directly from the memory. Repeated access is avoided and the waiting time of the processor 110 is reduced, thus improving the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • Interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) interface, and/or a serial peripheral interface (SPI), etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • processor 110 may include multiple sets of I2C buses.
  • the processor 110 can separately couple the touch sensor 180B, charger, flash, camera 193, etc. through different I2C bus interfaces.
  • the processor 110 can be coupled to the touch sensor 180B through an I2C interface, so that the processor 110 and the touch sensor 180B communicate through the I2C bus interface to implement the touch function of the electronic device 100 .
  • the I2S interface can be used for audio communication.
  • processor 110 may include multiple sets of I2S buses.
  • the processor 110 can be coupled with the audio module 170 through the I2S bus to implement communication between the processor 110 and the audio module 170 .
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through the I2S interface.
  • the PCM interface can also be used for audio communications to sample, quantize and encode analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is generally used to connect the processor 110 and the wireless communication module 160 .
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the UART interface to implement the function of playing audio.
  • the MIPI interface can be used to connect the processor 110 with peripheral devices such as the display device 194 and the camera 193 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 110 and the camera 193 communicate through the CSI interface to implement the shooting function of the electronic device 100 .
  • the processor 110 and the display device 194 communicate through the DSI interface to implement the display function of the electronic device 100 .
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 110 with the camera 193, the display device 194, the wireless communication module 160, the audio module 170, the sensor module 180, etc.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that complies with the USB standard specification, and may be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transmit data between the electronic device 100 and peripheral devices. This interface can also be used to connect other electronic devices, such as mobile phones, PCs, smart TVs, etc.
  • the USB interface can be USB3.0, which is compatible with high-speed display port (DP) signal transmission and can transmit high-speed video and audio data.
  • the interface connection relationships between the modules illustrated in the embodiments of the present application are only schematic illustrations and do not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt interface connection methods different from those in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 140 may receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive wireless charging input through the wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 can also supply power to the electronic device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140 to provide power to the processor 110, the internal memory 121, the display device 194, the camera 193, the wireless communication module 160, and the like.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be provided in the processor 110 .
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the electronic device 100 may include a wireless communication function.
  • the electronic device 100 may receive and play voice information from other electronic devices (such as a mobile phone or a cloud server).
  • the wireless communication function may be implemented through an antenna (not shown), a mobile communication module 150 or a wireless communication module 160, a modem processor (not shown), a baseband processor (not shown), and the like.
  • Antennas are used to transmit and receive electromagnetic wave signals.
  • Electronic device 100 may contain multiple antennas, each of which may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example: the antenna can be reused as a diversity antenna for a wireless LAN. In other embodiments, antennas may be used in conjunction with tuning switches.
  • the mobile communication module 150 can provide wireless communication solutions applied to the electronic device 100, such as second generation (2G)/third generation (3G)/fourth generation (4G)/fifth generation (5G) networks.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 150 can receive electromagnetic waves through an antenna, perform filtering, amplification, and other processing on the received electromagnetic waves, and transmit them to a modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna for radiation.
  • At least part of the functional modules of the mobile communication module 150 may be disposed in the processor 110 . In some embodiments, at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display device 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 110 and may be provided in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area network (WLAN) (such as a Wi-Fi network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves through the antenna, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110 .
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, frequency modulate it, amplify it, and convert it into electromagnetic waves through the antenna for radiation.
  • the wireless communication module 160 can be provided in the glasses body shown in FIG. 1 and used to transmit communication signals, including receiving and sending communication signals, such as voice information, control signaling, etc.
  • the electronic device 100 can establish communication connections with other electronic devices, such as mobile phones, computers, etc., through the wireless communication module 160 .
  • the antenna of the electronic device 100 is coupled to the mobile communication module 150 and the wireless communication module 160, This allows the electronic device 100 to communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • GNSS can include the global positioning system (GPS), the global navigation satellite system (GLONASS), the Beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS) and/or satellite based augmentation systems (SBAS).
  • the electronic device 100 may implement display functions through a GPU, a display device 194, an application processor, and the like.
  • the GPU is an image processing microprocessor that connects the display device 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or alter display information.
  • display device 194 is used for the user to view real-world objects or virtual images.
  • the display device 194 may be a transparent lens or a lens of other colors, a spectacle lens with an optical correction function, a lens with an adjustable filter function, sunglasses or other lenses with decorative effects.
  • the display device 194 may also be a display screen or a projection device that can generate optical signals and map the optical signals to the user's eyes for displaying images, videos, etc.
  • the display device 194 may be used to present one or more virtual objects, so that the electronic device 100 provides a virtual reality scene for the user.
  • the manner in which the display device 194 presents virtual objects may include one or more of the following:
  • the display device 194 may include a display screen, and the display screen may include a display panel.
  • the display panel can be used to display physical objects and/or virtual objects, thereby presenting a three-dimensional virtual environment to the user. Users can see the virtual object from the display panel and experience the virtual reality scene.
  • the display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • display device 194 may include an optical projection device for projecting optical signals (eg, light beams) directly onto the user's retina.
  • the display device 194 can convert a real pixel image display into a virtual image display projected near the eye through one or more optical devices such as a reflector, a transmissive mirror or an optical waveguide; the user can directly see virtual objects through the optical signal projected by the optical device, feel the three-dimensional virtual environment, and achieve a virtual interactive experience, or an interactive experience combining virtuality and reality.
  • the optical device may be a pico projector or the like.
  • the electronic device 100 may include 1 or N display devices 194, where N is a positive integer greater than 1.
  • the number of display devices 194 in the electronic device may be two, corresponding to the two eyes of the user. The content displayed on these two display devices can be displayed independently. These two display devices can display images with parallax to enhance the three-dimensional effect of the image. In some possible embodiments, the number of display devices 194 in the electronic device may also be one, and the user's two eyes view the same image.
  • This embodiment does not limit the type of the display device 194 .
  • in some embodiments, the user uses other functions provided by the electronic device 100 without using the display function.
  • for example, some users wear smart glasses without lenses for decorative purposes; such glasses still provide other functions such as receiving/playing audio signals.
  • Camera 193 is used to capture still images or video.
  • the object passes through the lens to produce an optical image that is projected onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal oxide semiconductor (complementary metal-oxide-semiconductor, CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other format image signals.
  • the camera 193 can be used in conjunction with an infrared device (such as an infrared transmitter) to detect the user's eye movements, such as eye gaze direction, blink operation, gaze operation, etc., thereby achieving eye tracking.
  • electronic device 100 may not include camera 193.
  • the electronic device 100 may also include an eye tracking module.
  • the eye tracking module may be used to track the movement of the human eye and thereby determine the gaze point of the human eye.
  • image processing technology can be used to locate the pupil position, obtain the pupil center coordinates, and then calculate the person's gaze point.
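A minimal sketch of the pupil-localization step described above: threshold the grayscale eye image and take the centroid of the dark pixels as the pupil center. The threshold value and the synthetic image are illustrative assumptions; mapping the pupil center to a gaze point additionally requires per-user calibration, which is omitted here.

```python
import numpy as np

def pupil_center(gray: np.ndarray, threshold: int = 40):
    """Estimate the pupil center as the centroid of dark pixels.
    `gray` is a 2-D grayscale eye image; the pupil is assumed to be
    the darkest region (threshold value is an assumption)."""
    ys, xs = np.nonzero(gray < threshold)
    if len(xs) == 0:
        return None  # no dark region found
    return float(xs.mean()), float(ys.mean())

# Synthetic eye image: bright background with a dark blob standing in for the pupil
img = np.full((60, 80), 200, dtype=np.uint8)
img[20:30, 35:45] = 10
print(pupil_center(img))   # (39.5, 24.5)
```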
  • Internal memory 121 may be used to store computer executable program code, which includes instructions.
  • the processor 110 executes instructions stored in the internal memory 121 to execute various functional applications and data processing of the electronic device 100 .
  • the internal memory 121 may include a program storage area and a data storage area. Among them, the stored program area can store an operating system, at least one application program required for a function (such as a sound playback function, an image playback function, etc.).
  • the storage data area may store data created during use of the electronic device 100 (such as audio data, etc.).
  • the internal memory 121 may be used to store application programs of one or more applications, the application programs including instructions.
  • the application program is executed by the processor 110, the electronic device 100 is caused to generate content for presentation to the user.
  • the application may include an application for managing the electronic device 100, such as a game application, a conference application, a video application, a desktop application or other applications, and so on.
  • the internal memory 121 may also include high-speed random access memory, and non-volatile memory, such as at least one disk storage device, flash memory device, universal flash storage (UFS), etc.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, and the application processor. For example, playing audio, collecting sound signals, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals. Audio module 170 may also be used to encode and decode audio signals. In some embodiments, the audio module 170 may be provided in the processor 110 , or some functional modules of the audio module 170 may be provided in the processor 110 .
  • Speaker 170A, also called a "loudspeaker", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to hands-free calls.
  • the speaker 170A can be used to play sound wave signals that can be heard by human ears with frequencies in the range of 20 Hz to 20,000 Hz, also known as audible sound wave signals.
  • Receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • when the electronic device 100 answers a call or plays a voice message, the voice can be heard by bringing the receiver 170B close to the human ear.
  • Microphone 170C, also called a "mic" or "sound transmitter", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 170C to input the sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C, which in addition to collecting sound signals, may also implement a noise reduction function. In other embodiments, the electronic device 100 can also be provided with three, four or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions, etc.
  • the speaker 170A can be used to send ultrasonic waves, where ultrasonic waves are sound waves with a frequency exceeding 20,000 Hz; they cannot be heard by human ears and do not harm the human body. It should be noted that the speaker 170A can play both audible sound wave signals and ultrasonic signals. Since the ultrasonic signal and the audible sound wave signal occupy different frequency ranges and do not affect each other, the speaker 170A can be reused to play both, reducing the number of components required for wearing detection. Microphone 170C may be used to receive the ultrasonic waves sent by speaker 170A.
  • the audio module 170 and/or the processor 110 may perform a calculation (e.g., a Fourier transform) on the ultrasonic signal received by the microphone 170C to obtain the amplitude of the received ultrasonic wave, and determine whether the user is wearing the electronic device 100 based on the amplitude. For example, in the case where the speaker 170A and the microphone 170C are respectively located on the two temples 102, when the user wears the electronic device 100, the user's head blocks the ultrasonic waves sent by the speaker 170A, so the amplitude of the ultrasonic waves received by the microphone 170C decreases.
  • the electronic device 100 can determine whether the user is wearing the electronic device 100 based on the amplitude of the ultrasonic signal sent by the speaker 170A and the amplitude of the ultrasonic signal collected by the microphone 170C.
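As a rough illustration of the amplitude-based decision described above, the following sketch measures the amplitude of an ultrasonic probe tone via FFT and compares the received/transmitted amplitude ratio against a threshold. The sample rate, probe frequency, threshold and attenuation values are all illustrative assumptions, not values from the patent.

```python
import numpy as np

FS = 48_000           # sample rate in Hz (assumed)
F_ULTRA = 21_000      # ultrasonic probe tone in Hz, above the audible range (assumed)
WORN_THRESHOLD = 0.5  # amplitude ratio below which we decide "worn" (assumed)

def tone_amplitude(samples: np.ndarray, freq: float, fs: float) -> float:
    """Amplitude of the `freq` component, obtained via the Fourier
    transform step described above."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - freq))
    # For a pure tone on an exact FFT bin, 2*|X[k]|/N recovers its amplitude
    return 2.0 * np.abs(spectrum[bin_idx]) / len(samples)

def is_worn(mic_samples: np.ndarray, tx_amplitude: float) -> bool:
    """Decide wearing state: the head attenuates the speaker-to-microphone
    path, so a low received/transmitted ratio suggests the device is worn."""
    rx = tone_amplitude(mic_samples, F_ULTRA, FS)
    return bool(rx / tx_amplitude < WORN_THRESHOLD)

t = np.arange(4096) / FS
free_air = 1.0 * np.sin(2 * np.pi * F_ULTRA * t)   # little attenuation: not worn
blocked  = 0.2 * np.sin(2 * np.pi * F_ULTRA * t)   # head blocks the path: worn
print(is_worn(free_air, 1.0), is_worn(blocked, 1.0))   # False True
```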
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for vibration prompts for incoming calls and can also be used for touch vibration feedback.
  • touch operations for different applications can correspond to different vibration feedback effects.
  • touch operations acting on different areas of the electronic device 100 can also correspond to different vibration feedback effects of the motor 191.
  • Different application scenarios (such as time reminders, receiving information, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also be customized.
  • the indicator 192 may be an indicator light, which may be used to indicate charging status, power changes, or may be used to indicate messages, notifications, etc.
  • the electronic device 100 may also include other input and output interfaces, and other devices may be connected to the electronic device 100 through appropriate input and output interfaces.
  • these interfaces may include, for example, audio/video jacks, data connectors, etc.
  • the electronic device 100 may also include one or more buttons that may control the electronic device and provide the user with access to functions on the electronic device 100 .
  • Keys can take mechanical forms such as buttons, switches and dials, or can be touch or near-touch sensing devices (such as touch sensors).
  • the electronic device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the electronic device 100 .
  • Keys can include power keys, volume keys, etc.
  • the electronic device 100 is equipped with one or more sensors, including but not limited to a pressure sensor 180A, a touch sensor 180B, an inertial measurement unit (IMU) 180C, a bone conduction sensor, etc.
  • the pressure sensor 180A is used to sense pressure signals and can convert the pressure signals into electrical signals.
  • pressure sensors 180A such as resistive pressure sensors, inductive pressure sensors, capacitive pressure sensors, etc.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position based on the detection signal of the pressure sensor 180A.
  • touch operations acting on the same touch location but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity smaller than the first pressure threshold acts on the pressure sensor 180A, an instruction to pause the audio is executed.
  • touch operations that act on the same touch location but have different touch operation durations may correspond to different operation instructions. For example: when a touch operation whose duration is less than the first time threshold is applied to the pressure sensor 180A, a confirmation instruction is executed. When a touch operation with a duration greater than or equal to the first time threshold is applied to the pressure sensor 180A, a power on/off instruction is executed.
  • Touch sensor 180B is also called a "touch device".
  • the touch sensor 180B is used to detect a touch operation on or near the touch sensor 180B.
  • Touch sensor 180B may pass the detected touch operation to the application processor to determine the touch event type.
  • the electronic device 100 may provide visual output related to the touch operation through the display device 194 .
  • the electronic device 100 may also send instructions corresponding to the touch operation to other electronic devices that establish communication connections.
  • Inertial measurement unit (IMU) 180C. The IMU is a sensor used to detect and measure acceleration and rotational motion, and may include an accelerometer, a gyroscope (angular velocity meter), etc.
  • the accelerometer can detect the acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to identify the posture of the electronic device 100 and be used in somatosensory game scenarios, horizontal and vertical screen switching, pedometer and other applications.
  • the gyroscope can be used to determine the motion posture of the electronic device 100 .
  • the angular velocity of electronic device 100 about three axes may be determined by a gyroscope.
  • Gyroscopes can also be used for navigation, somatosensory game scenes, camera anti-shake, etc.
  • the electronic device 100 may track the movement of the user's head according to an IMU or the like.
  • the inertial measurement unit 180C may be used to detect whether the electronic device 100 moves.
  • the electronic device 100 may detect an operation of the user wearing the electronic device 100 based on an IMU or the like.
  • When the electronic device 100 detects the wearing operation through a sensor such as an IMU, it can send ultrasonic waves through the speaker 170A, receive the ultrasonic waves through the microphone 170C, and perform wearing detection based on the received ultrasonic waves, that is, determine whether the user is wearing the electronic device 100.
  • the sensor module 180 may also include a capacitive sensor.
  • Capacitive sensors can be used to convert detected non-electrical quantities into electrical quantities.
  • the capacitive sensor may be disposed on the inside of the temple 102.
  • When the user wears the electronic device 100, the capacitance value detected by the capacitive sensor changes.
  • the electronic device 100 may detect an operation of the user wearing the electronic device 100 based on the capacitive sensor.
  • the electronic device 100 may, when detecting the wearing operation through the capacitive sensor, send ultrasonic waves through the speaker 170A, receive the ultrasonic waves through the microphone 170C, and perform wearing detection based on the received ultrasonic waves.
  • the sensor module 180 may also include a bone conduction sensor that may acquire vibration signals.
  • bone conduction sensors can acquire vibration signals from vibrating bone fragments in the human body.
  • the bone conduction sensor can be provided in the electronic device 100, and the audio module 170 can analyze the voice signal based on the vibration signal of the vocal vibrating bone obtained by the bone conduction sensor to implement the voice function.
  • Bone conduction sensors can also be used as audio playback devices to output sounds to the user.
  • In some embodiments, the audio playback device is a bone conduction sensor.
  • the two temples of the temples 102 may be provided with resisting parts, and the bone conduction sensor may be disposed at the position of the resisting parts.
  • When the user wears the electronic device 100, the resisting portion presses against the skull in front of the ear, generating vibrations so that sound waves are conducted to the inner ear via the skull and bony labyrinth. Since the resisting portion is directly close to the skull, vibration loss is reduced and the user can hear the audio more clearly.
  • the SIM card interface 196 is used to connect a SIM card.
  • the SIM card can be connected to or separated from the electronic device 100 by inserting it into the SIM card interface 196 or pulling it out from the SIM card interface 196 .
  • the electronic device 100 can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 196 can support Nano SIM card, Micro SIM card, SIM card, etc. Multiple cards can be inserted into the same SIM card interface 196 at the same time. The types of the plurality of cards may be the same or different.
  • the SIM card interface 196 can also be compatible with different types of SIM cards.
  • the SIM card interface 196 is also compatible with external memory cards.
  • the electronic device 100 interacts with the network through the SIM card to implement functions such as calls and data communications.
  • the electronic device 100 uses an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100 .
  • the microphone and speaker of the electronic device 100 are located on different components on the glasses body of the electronic device 100 .
  • the speaker of the electronic device 100 can send ultrasonic waves, and the microphone of the electronic device 100 can receive the ultrasonic waves sent by the speaker. Since the amplitude of the ultrasonic waves received by the electronic device 100 in the worn state is different from the amplitude of the ultrasonic waves received in the unworn state, the electronic device 100 can determine whether the user is wearing the electronic device 100 based on the amplitude of the received ultrasonic waves.
  • the microphone and the speaker of the electronic device 100 can be respectively located on the side facing the user on the two temples of the temples 102 shown in FIG. 1.
  • the microphone can be located on the left temple of the temples 102, and the speaker can be located on the right temple of the temples 102.
  • When the electronic device 100 is in the unworn state, as shown in (a) of FIG. 3, the microphone and the speaker of the electronic device 100 are at different side positions, the ultrasonic waves sent by the speaker are not blocked, and the microphone can receive most of the ultrasonic waves sent by the speaker. At this time, the amplitude of the ultrasonic waves received by the microphone can be A1.
  • When the electronic device 100 is in the worn state, the user's head is between the microphone and the speaker of the electronic device 100, and the ultrasonic waves sent by the speaker are blocked by the user's head; that is, part of the ultrasonic waves transmitted by the speaker is reflected by the user's head and cannot be received by the microphone.
  • the microphone of the electronic device 100 can only receive a small part of the ultrasonic waves sent by the speaker.
  • the amplitude of the ultrasonic waves received by the microphone may be A2. Among them, A1 is greater than A2.
  • That is, the amplitude of the ultrasonic wave transmitted by the speaker and received by the microphone when the electronic device 100 is in the worn state is smaller than the amplitude received by the microphone when the electronic device 100 is in the unworn state. In this way, the electronic device 100 can determine whether the user is wearing the electronic device 100 based on the amplitude of the received ultrasonic signal.
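As an illustrative sketch (not part of the claimed embodiments), the amplitude comparison described above can be expressed in a few lines; the 60 dB threshold is a hypothetical example value:

```python
def is_worn(received_amplitude_db: float, threshold_db: float = 60.0) -> bool:
    """Judge the wearing state from the received ultrasonic amplitude.

    When worn, the user's head blocks part of the ultrasonic path between
    the speaker and the microphone, so the received amplitude A2 is smaller
    than the unworn amplitude A1 (A1 > A2).
    """
    return received_amplitude_db < threshold_db

# Example: A1 (unworn) = 100 dB, A2 (worn) = 50 dB
assert is_worn(50.0)        # worn: amplitude below threshold
assert not is_worn(100.0)   # unworn: amplitude above threshold
```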
  • the microphone and the speaker of the electronic device 100 are respectively located on different components of the electronic device 100 .
  • When the electronic device 100 is in the worn state, the path between the microphone and the speaker of the electronic device 100 is blocked.
  • the electronic device 100 is in the unworn state, there is no obstruction between the microphone and the speaker of the electronic device 100 .
  • the electronic device 100 can send ultrasonic waves through a speaker, receive ultrasonic waves through a microphone, and determine whether the electronic device 100 is in a worn state or an unworn state based on the received ultrasonic waves.
  • the electronic device 100 can work in the first mode.
  • the electronic device 100 is not worn, it can work in the second mode.
  • the existing speakers and microphones of the electronic device 100 are used for wear detection, which saves manufacturing costs.
  • Since the electronic device 100 does not add other components for wear detection, the weight of the electronic device 100 is not increased, which reduces pressure on the user's cervical spine. Since ultrasonic waves cannot be heard by human ears and do not cause harm to the human body, the wearing status of the electronic device 100 can be detected without the user being aware of it.
  • the electronic device 100 can automatically control its working mode more intelligently based on the application scenario. In some embodiments, when in the worn state, the electronic device 100 can work in the first mode.
  • When the electronic device 100 works in the first mode, it can play audio normally, establish data communication connections with other electronic devices, and so on, which is convenient for the user. When not being worn, the electronic device 100 can work in the second mode. When the electronic device 100 works in the second mode, it can pause audio playback, disconnect communication connections with other electronic devices, and so on, to reduce the power consumption of the electronic device 100.
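A minimal sketch of the two working modes described above; the mode names and fields are hypothetical, for illustration only:

```python
def apply_mode(worn: bool) -> dict:
    """Select the working mode from the detected wearing state.

    First mode (worn): audio plays and connections stay active.
    Second mode (not worn): audio pauses and connections are released
    to reduce power consumption.
    """
    if worn:
        return {"mode": "first", "audio": "playing", "connection": "active"}
    return {"mode": "second", "audio": "paused", "connection": "disconnected"}

assert apply_mode(True)["audio"] == "playing"
assert apply_mode(False)["connection"] == "disconnected"
```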
  • the wearing detection method includes the following steps:
  • step S402 can be performed.
  • the electronic device 100 is not limited to being started when the electronic device 100 receives input from the user to turn on the electronic device 100 .
  • the electronic device 100 switches from the standby mode or the sleep mode to the working mode, it can also be understood that the electronic device 100 is started.
  • the electronic device 100 detects that the temples of the electronic device 100 switch from a folded state (that is, the two temples of the electronic device 100 are attached to each other) to an unfolded state (that is, the two temples of the electronic device 100 are separated from each other, as shown in FIG. 1), the electronic device 100 starts.
  • the electronic device 100 can detect the movement of the temples based on sensors such as IMU, and determine that the temples are switched to the unfolded state.
  • the electronic device 100 may start when a change in the position of the electronic device 100 is detected by a sensor such as an IMU.
  • a proximity sensor (e.g., proximity light sensor, capacitive sensor, infrared sensor, IMU sensor, etc.).
  • a proximity sensor is provided on the side of the glasses body of the electronic device 100 close to the user.
  • the electronic device 100 determines that the user is close to the electronic device 100 through the proximity sensor, the electronic device 100 starts.
  • the electronic device 100 is connected to the charging compartment, the electronic device 100 is placed inside the charging compartment, and when the electronic device 100 is separated from the charging compartment, the electronic device 100 starts.
  • the speaker of the electronic device 100 sends ultrasonic waves.
  • the speaker of the electronic device 100 may transmit ultrasonic signals.
  • the ultrasonic signal sent by the electronic device 100 may be called a detection signal, a target ultrasonic signal, and so on.
  • the electronic device 100 can obtain the detection signal in the following ways.
  • the electronic device 100 stores M types of detection signals, where M is a positive integer.
  • the electronic device 100 may send one of the M detection signals.
  • the M types of detection signals have different waveforms.
  • different waveforms can be understood as one or more differences in frequency, duty cycle, etc.
  • the difference in frequency of the two detection signals means that the frequencies of the two detection signals are different.
  • the frequency of one detection signal is f1 and the frequency of another detection signal is f2. If f1 and f2 are different, the two detection signals are different.
  • the difference in frequency of the two detection signals can be understood as the different frequency composition and/or arrangement of the two detection signals.
  • the first detection signal consists of a waveform with frequency f1 followed by a waveform with frequency f2.
  • the second detection signal consists of a waveform with frequency f2 followed by a waveform with frequency f1.
  • the third detection signal consists of a waveform with frequency f1 and a waveform with frequency f3.
  • Both the first detection signal and the second detection signal include a waveform with frequency f1 and a waveform with frequency f2, but the arrangement orders of the waveforms in the two detection signals are different, so the two detection signals are different.
  • the composition of the waveforms included in the first detection signal and the third detection signal is different: the first detection signal includes a waveform with frequency f2 and does not include a waveform with frequency f3, while the third detection signal includes a waveform with frequency f3 and does not include a waveform with frequency f2. The compositions of the two detection signals are different, so the two detection signals are different.
  • the duty cycle is the ratio of the duration of the zero-amplitude portion within one cycle to the duration of the whole cycle.
  • the duty cycle can be expressed as a percentage or a fraction, and the value range is between 0 and 1.
  • the difference in the duty cycle of the two detection signals means that the duty cycle values of the two detection signals are different.
  • the duty cycle of one detection signal is w1
  • the duty cycle of another detection signal is w2. If w1 and w2 are different, the two detection signals are different.
  • the difference in duty ratios of the two detection signals can be understood as the different composition and/or arrangement of the duty ratios of the two detection signals.
  • the difference between the two waveforms also includes that the position of the waveform with an amplitude of 0 is different within one cycle. Specifically, when the duty cycles of the two detection signals are both 25%, the waveform of the first detection signal with an amplitude of 0 is in the first T/4 time of a cycle, and the amplitude of the second detection signal is The waveform of 0 is in the last T/4 time of a cycle, and the two detection signals are different.
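To illustrate that two signals with the same 25% duty cycle can still differ by where the silent portion falls, the following sketch builds one cycle with the zero-amplitude segment at the start versus at the end; the sample count and tone shape are arbitrary assumptions:

```python
import math

def cycle_samples(period_samples: int, duty: float, zero_at_start: bool):
    """One period of a tone whose zero-amplitude (silent) portion occupies
    `duty` of the cycle, placed either at the start or the end of the cycle."""
    n_zero = int(period_samples * duty)
    n_tone = period_samples - n_zero
    tone = [math.sin(2 * math.pi * n / n_tone) for n in range(n_tone)]
    silence = [0.0] * n_zero
    return silence + tone if zero_at_start else tone + silence

a = cycle_samples(32, 0.25, zero_at_start=True)   # silent first T/4
b = cycle_samples(32, 0.25, zero_at_start=False)  # silent last T/4
assert len(a) == len(b) == 32
assert a != b  # same duty cycle, different waveforms
```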
  • a waveform with amplitude 0 always occurs at the beginning or end of a cycle.
  • the difference between the two detection signals is that the amplitudes of the two detection signals are different.
  • the electronic device 100 can identify the M detection signals as a set {c_m}, m ∈ {1, 2, ..., M}.
  • the waveform diagrams of the M detection signals can be seen in the waveform diagram example shown in FIG. 5 .
  • the duty cycle of the detection signal c 1 is 0%, and the period is T 1 .
  • the duty cycle of the detection signal c 2 is 0%, and the period is T 2 .
  • the duty cycle of the detection signal c m-1 is 25%, and the period is T 1 .
  • the duty cycle of the detection signal c m is 25%, and the period is T 2 .
  • the periods (i.e., frequencies) of detection signal c_1 and detection signal c_2 are different, the duty cycles of detection signal c_1 and detection signal c_(m-1) are different, and both the duty cycles and periods of detection signal c_1 and detection signal c_m are different. That is to say, the waveforms of any two ultrasonic waves in the detection signal set are not identical.
  • all detection signals among the M detection signals have the same amplitude and duty cycle, but different frequencies.
  • the electronic device 100 stores the frequencies and amplitudes of M detection signals, and the electronic device 100 can obtain the ultrasonic signal played by the speaker based on the frequency and amplitude.
  • the electronic device 100 can store the ultrasonic signal in the form of a list.
  • the detection signal c 1 can be represented by ⁇ c 1 , 21000, 100 ⁇ , where c 1 is the identifier of the ultrasonic signal. 21000 is the frequency of the ultrasonic signal, and the unit of this frequency value can be Hz. 100 is the amplitude of the ultrasonic signal, and the unit of the amplitude value may be decibel (dB).
  • the electronic device 100 can determine the frequency and amplitude of the ultrasonic signal based on the list, that is, determine the waveform of the ultrasonic signal. Similarly, the electronic device 100 stores the frequencies, duty cycles and amplitudes of M detection signals, and the electronic device 100 can obtain the ultrasonic signal played by the speaker based on the frequency, duty cycle and amplitude.
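A sketch of the list-based storage described above: each detection signal is stored as (identifier, frequency, amplitude), from which a waveform can be regenerated. The c2 entry and the sample count are made-up values, and the dB amplitude is treated as a linear scale purely for illustration:

```python
import math

# Hypothetical stored list, mirroring the {c1, 21000, 100} example in the text
DETECTION_SIGNALS = [
    ("c1", 21000, 100),  # identifier, frequency (Hz), amplitude (dB)
    ("c2", 22000, 100),  # made-up second entry
]

def generate_samples(freq_hz: int, amp: float,
                     sample_rate: int = 160_000, n: int = 32):
    """Regenerate the ultrasonic waveform played by the speaker from its
    stored frequency and amplitude (amplitude used as a linear value here)."""
    return [amp * math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n)]

ident, freq, amp = DETECTION_SIGNALS[0]
wave = generate_samples(freq, amp)
assert len(wave) == 32
assert max(abs(v) for v in wave) <= 100
```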
  • the electronic device 100 stores digital audio information corresponding to M waveforms of detection signals.
  • the electronic device 100 can obtain an analog audio signal corresponding to the digital audio signal, that is, an ultrasonic signal based on the stored digital audio information.
  • When the electronic device 100 sends one of the M detection signals, it can randomly select one of the M detection signals to send. For example, the electronic device 100 can send c_i among the M detection signals, where i is greater than or equal to 1 and less than or equal to M. Alternatively, the electronic device 100 may sequentially send one of the M detection signals in the order in which they are stored. For example, the electronic device 100 may send detection signal c_1 when performing step S402 for the first time, send detection signal c_2 when performing step S402 for the second time, and so on.
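The two selection strategies above (random versus sequential) can be sketched as follows; the signal identifiers are hypothetical:

```python
import itertools
import random

SIGNALS = ["c1", "c2", "c3"]  # identifiers of the M stored detection signals

# Random selection: any one of the M detection signals may be sent
picked = random.choice(SIGNALS)
assert picked in SIGNALS

# Sequential selection: each round of step S402 sends the next stored
# signal in order, wrapping around after the last one
sender = itertools.cycle(SIGNALS)
assert [next(sender) for _ in range(4)] == ["c1", "c2", "c3", "c1"]
```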
  • the electronic device 100 can receive nearby sound wave signals through a microphone before sending the detection signal, select from the stored detection signals a detection signal whose waveform differs from the nearby sound wave signals, and send that detection signal.
  • the following uses frequency as an example to introduce the specific steps for the electronic device 100 to determine the detection signal. Specifically, as shown in Figure 6:
  • the electronic device 100 receives nearby sound wave signals.
  • the electronic device 100 may receive nearby sound wave signals through a microphone.
  • the electronic device 100 determines whether the nearby acoustic wave signals include N types of signals among the M types of stored ultrasonic waves, 0 ⁇ N ⁇ M.
  • the electronic device 100 may determine whether the nearby sound wave signal includes N types of detection signals among the M types of detection signals based on the frequency of the received sound wave signal. Specifically, the electronic device 100 can compare the frequencies of nearby acoustic wave signals and M types of detection signals one by one, and determine N types of signals among the M types of detection signals that have the same frequency as the nearby acoustic wave signals. Among them, 0 ⁇ N ⁇ M. Among the M types of detection signals, N types of signals that have the same frequency as the nearby acoustic wave signal are N types of signals included in the nearby acoustic wave signal.
  • in some cases, the frequency of the nearby acoustic wave signal and the frequency of the detection signal can be considered to be the same even if they are not exactly equal.
  • step S603 When the electronic device 100 determines that the nearby ultrasonic wave signal includes N types of signals among the M types of stored detection signals, step S603 may be performed. When the electronic device 100 determines that the nearby ultrasonic wave signal does not include N types of signals among the M types of stored detection signals, step S604 may be performed.
  • In some embodiments, when the frequency and amplitude of a nearby acoustic wave signal are both the same as those of a detection signal, the nearby acoustic wave signal and the detection signal are considered to be the same.
  • When the difference between the frequency of the nearby acoustic wave signal and the frequency of the detection signal is less than the first frequency difference, the frequency of the nearby acoustic wave signal and the frequency of the detection signal are considered the same.
  • When the difference between the amplitude of the nearby acoustic wave signal and the amplitude of the detection signal is less than a preset amplitude difference (for example, 10 dB), the amplitude of the nearby acoustic wave signal and the amplitude of the detection signal are considered the same.
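The exclusion in step S603 can be sketched as follows. The 10 dB amplitude difference follows the example in the text; the 100 Hz frequency-difference threshold is a hypothetical stand-in for the "first frequency difference":

```python
def same_signal(sig_a, sig_b, max_freq_diff=100.0, max_amp_diff=10.0):
    """Two (freq_hz, amp_db) signals are 'the same' when both the frequency
    difference and the amplitude difference fall below the thresholds."""
    return (abs(sig_a[0] - sig_b[0]) < max_freq_diff and
            abs(sig_a[1] - sig_b[1]) < max_amp_diff)

def selectable_signals(stored, nearby):
    """Exclude the N stored detection signals that match any nearby signal;
    any of the remaining signals may be sent."""
    return [s for s in stored if not any(same_signal(s, n) for n in nearby)]

stored = [(20000, 100), (21000, 100), (22000, 100)]
nearby = [(21020, 95)]  # a nearby sound close to the 21000 Hz signal
assert selectable_signals(stored, nearby) == [(20000, 100), (22000, 100)]
```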
  • the electronic device 100 sends any one of the M types of detection signals except the detected N types of detection signals.
  • the electronic device 100 can exclude N types of detection signals that are the same as nearby acoustic wave signals among the M types of detection signals, and select any one of the remaining detection signals for transmission.
  • the electronic device 100 sends any one of the M detection signals.
  • the electronic device 100 may randomly send any one of the detection signals.
  • the electronic device 100 may acquire the frequency of a nearby acoustic wave signal, and send detection signals of M types of detection signals whose frequencies are different from those of the nearby acoustic wave signal.
  • the M detection signals have the same amplitude but different frequencies.
  • the electronic device 100 can randomly generate detection signals.
  • the electronic device 100 can randomly generate the frequency and amplitude values of the detection signal, and then generate a corresponding detection signal based on the frequency value and amplitude value.
  • the frequency value of the detection signal is within a specified frequency range (for example, 20000Hz-24000Hz), and the amplitude value of the detection signal is within a specified amplitude range (for example, 80dB-120dB).
  • the electronic device 100 may acquire nearby sound wave signals through a microphone and determine the frequency of the nearby sound wave signals before generating the detection signal, and then generate a detection signal whose frequency differs from that of the nearby sound wave signals.
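The random-generation step can be sketched as below. The frequency range (20000 Hz to 24000 Hz) and amplitude range (80 dB to 120 dB) follow the examples in the text; the 100 Hz minimum gap from nearby frequencies is a hypothetical value:

```python
import random

def generate_detection_signal(nearby_freqs, min_gap=100.0,
                              freq_range=(20000, 24000), amp_range=(80, 120)):
    """Randomly draw a frequency and amplitude inside the specified ranges,
    regenerating until the frequency differs from every nearby sound wave
    frequency by at least min_gap."""
    while True:
        freq = random.uniform(*freq_range)
        if all(abs(freq - f) >= min_gap for f in nearby_freqs):
            return freq, random.uniform(*amp_range)

random.seed(0)
freq, amp = generate_detection_signal([21000.0])
assert 20000 <= freq <= 24000 and 80 <= amp <= 120
assert abs(freq - 21000.0) >= 100
```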
  • the electronic device 100 may send a prefix signal before sending the detection signal.
  • the electronic device 100 can determine based on the prefix signal that the prefix signal belongs to the ultrasonic signal sent by the electronic device 100, that is, determine the sent detection signal.
  • the electronic device 100 can determine whether the user is wearing the electronic device 100 based on the prefix signal and the detection signal.
  • the prefix signal and the detection signal can be the same or different.
  • the length of the blank time interval is a fixed value, for example, it can be 2ms.
  • FIG. 7 shows an example image of a prefix signal.
  • the prefix signal and the detection signal are the same, and there is a blank time interval between the prefix signal and the detection signal.
  • the prefix signal and the detection signal are different, and the prefix signal and the detection signal are connected.
  • the prefix signal and the detection signal are different, and there is a blank time interval between the prefix signal and the detection signal.
  • the electronic device 100 can determine, based on the waveform of the prefix signal, that the prefix signal and the detection signal are signals for detecting whether the user is wearing the electronic device 100, and determine whether the user is wearing the electronic device 100 based on the determined prefix signal and detection signal, thereby preventing nearby sound wave signals from interfering with the detection results.
  • the electronic device 100 can determine the detection signal based on the prefix signal, and then perform wearing detection based on the detection signal.
  • the detection signal sent by the electronic device 100 can be obtained by splicing multiple ultrasonic signals with different frequencies.
  • the electronic device 100 can jointly detect whether the user is wearing the electronic device 100 through multiple segments of ultrasonic signals with different frequencies, thereby further ensuring the accuracy of the detection results.
  • the electronic device 100 can compose the detection signal from multiple different waveforms. For example, when the electronic device 100 includes two waveforms, waveform A and waveform B, the electronic device 100 can determine the detection signal by the ordered combination of waveform A and waveform B. For example, when a detection signal consists of 4 waveform segments, and the detection signal consists of waveform A, waveform A, waveform B and waveform A, the detection signal can be expressed as AABA. It can be understood that when the electronic device 100 only includes two waveforms, the two waveforms can be identified by binary digits to facilitate understanding.
  • waveform A is identified by binary number 0 and waveform B is identified by binary number 1, then the detection signal AABA can be expressed as 0010.
  • storing the M types of detection signals in the electronic device 100 can thus be expressed as storing M strings together with the waveform corresponding to each character in the strings.
  • the electronic device 100 may only store N characters and N types of waveforms corresponding to the N characters.
  • the electronic device 100 can randomly generate a character string composed of any of the N characters to obtain a detection signal.
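The character-string encoding described above (waveform A as '0', waveform B as '1', so AABA becomes 0010) can be sketched as follows; the string length is an arbitrary choice:

```python
import random

# Two waveforms identified by binary characters, as in the text
WAVEFORM_CHARS = {"A": "0", "B": "1"}

def encode(sequence: str) -> str:
    """Encode a waveform sequence such as 'AABA' as a binary string."""
    return "".join(WAVEFORM_CHARS[ch] for ch in sequence)

def random_detection_signal(length: int = 4) -> str:
    """Randomly generate a string over the stored characters, as described
    for composing a detection signal from the N stored waveforms."""
    return "".join(random.choice("01") for _ in range(length))

assert encode("AABA") == "0010"
assert set(random_detection_signal(16)) <= {"0", "1"}
```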
  • the microphone of the electronic device 100 receives ultrasonic waves.
  • nearby sound wave signals may be received through the microphone, and the nearby sound wave signals include the detection signal sent by the speaker.
  • the electronic device 100 determines whether the amplitude of the received ultrasonic wave exceeds the first threshold.
  • the electronic device 100 can perform Fourier transform processing on the received detection signal to obtain the amplitude value of the received detection signal.
  • the electronic device 100 may perform step S406.
  • the electronic device 100 may perform step S405.
  • the value of the first threshold may be a product of the amplitude value of the sent detection signal and the first coefficient.
  • the first coefficient may be a fractional value greater than 0 and less than or equal to 1. In some embodiments, the first coefficient may be a percentage between 50% and 80%.
  • the value of the first threshold is a fixed value, and the fixed value can be set by the manufacturer of the electronic device 100 .
  • the electronic device 100 can convert the analog audio signal into a digital audio signal, and perform Fourier transform processing on the digital audio signal to obtain the correspondence between the frequencies and amplitudes of the nearby audio signals.
  • the electronic device 100 may determine the received detection signal based on the frequency of the transmitted detection signal. Then based on the corresponding relationship between the frequency and amplitude of the nearby audio signal, the amplitude of the received detection signal is determined.
  • the electronic device 100 may determine whether the user is wearing the electronic device 100 based on whether the amplitude value is greater than the first threshold.
  • an acoustic wave signal having the same frequency as the transmitted detection signal is a detection signal. In some embodiments, when the difference in frequency of the two signals is less than the first frequency difference, the frequencies of the two signals are the same.
  • the amplitude value of the received detection signal can be expressed as x[n], and the formula for performing the discrete Fourier transform on x[n] is:
  • X[k] = Σ_{n=0}^{N−1} x[n] · e^(−j2πkn/N)
  • where X[k] is the amplitude value of the detection signal at frequency index k, x[n] is the amplitude value of the n-th sampling point of the detection signal, N is the total number of sampling points, and j is the imaginary unit.
  • the amplitude of the detection signal is between 20dB-150dB, for example, it can be 100dB.
  • the frequency of the detection signal is between 20,000 Hz and 40,000 Hz, for example, it may be 20,000 Hz.
  • the microphone of the electronic device 100 can collect the detection signal based on a 160 kHz sampling frequency.
  • the time domain image of the detection signal sent by the speaker and the detection signal collected by the microphone in 0.2ms can be shown in Figure 8A.
  • the waveform of the detection signal sent by the electronic device 100 within 0.2 ms can be shown in (a) in FIG. 8A .
  • the electronic device 100 samples the detection signal shown in (a) in FIG. 8A within 0.2 ms, and obtains 32 sampling points.
  • the 32 sampling points constitute the discrete waveform shown in (b) in FIG. 8A .
  • the formula for the Fourier transform of the discrete waveform obtained by the microphone of the electronic device 100 is X[k] = Σ_{n=0}^{N−1} x[n] · e^(−j2πkn/N), where the total number of sampling points N is 32.
  • After the Fourier transform, the frequency domain image of the detection signal can be obtained.
  • the frequency domain image of the detection signal received by the electronic device 100 may be as shown in FIG. 8B.
  • From the frequency domain image of the detection signal shown in Figure 8B, it can be determined that the frequency of the detection signal collected by the microphone of the electronic device 100 is 20000 Hz and the amplitude is 100 dB.
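The Fourier-transform step can be reproduced numerically: 32 samples of a 20 kHz tone taken at 160 kHz place the detection signal in DFT bin k = 4, since each bin spans 160000/32 = 5000 Hz. Treating the 100 dB amplitude as a linear value is a simplification for this sketch:

```python
import cmath
import math

FS = 160_000   # sampling frequency in Hz, as in the text
N = 32         # total number of sampling points
F = 20_000     # detection-signal frequency in Hz
AMP = 100      # amplitude (used as a linear value in this sketch)

# 32 samples of the 20 kHz detection signal (0.2 ms of audio)
x = [AMP * math.sin(2 * math.pi * F * n / FS) for n in range(N)]

# Discrete Fourier transform: X[k] = sum_n x[n] * e^(-j*2*pi*k*n/N)
X = [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
     for k in range(N)]

# The peak of the first half of the spectrum falls in bin k = 4,
# which corresponds to 4 * FS / N = 20000 Hz
magnitudes = [abs(v) for v in X[: N // 2]]
peak_bin = magnitudes.index(max(magnitudes))
assert peak_bin * FS // N == 20_000
```

For a sine lying exactly on a bin, the peak magnitude is N·AMP/2 = 1600, which is the frequency-domain value the amplitude is read from.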
  • The following introduces images of sound wave signals collected by the microphone of the electronic device 100 in some practical application scenarios.
  • the frequency of the detection signal sent by the electronic device 100 is 20000 Hz and the amplitude is 100 dB
  • the value of the first threshold is 60 dB.
  • the acoustic wave signal with a frequency of 20000 Hz among the acoustic wave signals collected by the electronic device 100 is the detection signal.
  • the frequency of the detection signal is 20000Hz and the amplitude is 100dB.
  • the amplitude of the detection signal received by the electronic device 100 is greater than the first threshold.
  • the electronic device 100 may determine that the user is not wearing the electronic device 100 .
  • the values of the amplitude of the detection signal received by the electronic device 100 and the amplitude of the detection signal sent by the electronic device 100 shown in (a) of FIG. 8C are only examples. Since there is a medium between the microphone and the speaker, the amplitude of the received detection signal may be less than or equal to the amplitude of the sent detection signal, which is not limited in this embodiment of the present application.
  • the acoustic wave signal with a frequency of 20000 Hz among the acoustic wave signals collected by the electronic device 100 is the detection signal.
  • the frequency of the detection signal is 20000Hz and the amplitude is 50dB.
  • the value of the amplitude of the detection signal received by the electronic device 100 is less than the first threshold.
  • the electronic device 100 may determine that the user has worn the electronic device 100 .
  • The electronic device 100 may set the amplitude of the sent detection signal and the first threshold based on the battery level of the electronic device 100. For example, when the battery level of the electronic device 100 is low (for example, less than 20%), the electronic device 100 can reduce the amplitude of the sent detection signal and the value of the first threshold, thereby reducing the power consumed by wearing detection and further saving the power of the electronic device 100.
  • the electronic device 100 may determine that the user is not wearing the electronic device 100 and may perform step S406.
  • the electronic device 100 may determine that the user has worn the electronic device 100 and perform step S405.
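The threshold decision of steps S402 to S404 can be sketched in code. This is a minimal illustration, not the patent's implementation: the 48 kHz sampling rate, the Hann-windowed Fourier transform, the dB calibration offset, and all function names are assumptions.

```python
import numpy as np

FS = 48_000            # assumed sampling rate, Hz
F_PROBE = 20_000       # detection-signal frequency from the example, Hz
FIRST_THRESHOLD_DB = 60.0

def probe_amplitude_db(samples, fs=FS, f_probe=F_PROBE):
    """Amplitude (dB) of the probe tone, via a Fourier transform of the mic capture."""
    window = np.hanning(len(samples))
    spectrum = np.fft.rfft(samples * window)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f_probe)))
    # Scale so a full-scale sine reads 0 dB, then add a hypothetical
    # calibration offset so full scale corresponds to 100 dB as in the example.
    magnitude = 2.0 * np.abs(spectrum[k]) / np.sum(window)
    return 20.0 * np.log10(max(magnitude, 1e-12)) + 100.0

def wearing_state(amplitude_db, threshold_db=FIRST_THRESHOLD_DB):
    # Mic and speaker on different components: a strong, unblocked probe
    # means nothing (i.e., no head) sits between them -> not worn.
    return "unworn" if amplitude_db > threshold_db else "worn"

# Full-scale 20 kHz tone, 100 ms: should measure close to the 100 dB reference.
tone = np.sin(2 * np.pi * F_PROBE * np.arange(4800) / FS)
measured = probe_amplitude_db(tone)
```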
  • the electronic device 100 may determine the wearing state of the electronic device 100 based on the difference between the amplitude of the sent detection signal and the amplitude of the received detection signal. Specifically, when the difference between the amplitude of the sent detection signal and the received detection signal is within the first range, it is determined that the electronic device 100 is in the unworn state. When the difference between the amplitude of the sent detection signal and the amplitude of the received detection signal is not within the first range, it is determined that the electronic device 100 is in a worn state.
  • the value in the first range may be a preset value, or a value obtained based on the amplitude of the sent detection signal.
  • the first range may be [0, A*x], where x is a coefficient greater than 0 and less than 1.
  • x can be 0.4.
  • the first range is [0,40]. If the amplitude value of the received detection signal is between 60dB and 100dB, the amplitude difference between the sent detection signal and the received detection signal is in the first range, and the electronic device 100 is in the unworn state. If the amplitude value of the received detection signal is between 0dB and 59dB, the amplitude difference between the sent detection signal and the received detection signal is not within the first range, and the electronic device 100 is in a worn state.
  • the first range can be expressed as [0, A*x) or (0, A*x), where x is a coefficient greater than 0 and less than 1.
  • the first range may be expressed as [k, A*x) or (k, A*x), where k is greater than or equal to 0 and less than A*x.
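The difference-based rule above can be sketched as follows; the function name and the default coefficient x = 0.4 (taken from the example) are illustrative assumptions.

```python
def wearing_state_by_difference(sent_db, received_db, x=0.4):
    """Separate mic/speaker: a small send-receive amplitude difference means the
    path is unblocked (unworn); a large difference means the head blocks the
    probe (worn). x is the coefficient from the text, 0 < x < 1."""
    upper = sent_db * x                 # A*x, e.g. 100 dB * 0.4 = 40 dB
    diff = sent_db - received_db
    return "unworn" if 0 <= diff <= upper else "worn"
```

With A = 100 dB and x = 0.4 this reproduces the example: a received amplitude of 60–100 dB gives a difference inside [0, 40] (unworn), while 0–59 dB falls outside it (worn).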
  • the electronic device 100 may determine the wearing state of the electronic device 100 based on the difference between the amplitude of the sent detection signal and the amplitude of the received detection signal. Specifically, when the difference between the amplitude of the sent detection signal and the received detection signal is the first value, it is determined that the electronic device 100 is in the worn state. When the difference between the amplitude of the sent detection signal and the received detection signal is the second value, it is determined that the electronic device 100 is in the non-wearing state.
  • the first value range is [0, A*x] or (0, A*x]
  • the second value range is (A*x, 100] or (A*x, 100).
  • A is the amplitude value of the sent detection signal
  • x is a coefficient greater than 0 and less than 1.
  • the value range of the first value may be 0dB-40dB
  • the value range of the second value may be 40dB-100dB.
  • the first value range is [k, A*x] or (k, A*x]
  • the second value range is (A*x, p] or (A*x, p).
  • k is greater than or equal to 0 and less than A*x
  • p is greater than A*x and less than or equal to 100.
  • The electronic device 100 may determine the wearing state of the electronic device 100 based on the amplitude of the received detection signal as a percentage of the amplitude of the sent detection signal. Specifically, when the percentage of the amplitude of the received detection signal relative to the amplitude of the sent detection signal is the third value, it is determined that the electronic device 100 is in the worn state. When the percentage of the amplitude of the received detection signal relative to the amplitude of the sent detection signal is the fourth value, it is determined that the electronic device 100 is in the non-wearing state.
  • the third value has a value range of [0%, y%] or (0%, y%], and the fourth value has a value range of (y%, 100%] or (y%, 100%).
  • y is greater than 0 and less than 100. For example, when y is 60, the third value can range from 0% to 60%, and the fourth value can range from 60% to 100%.
  • the value range of the third value is [a%, y%] or (a%, y%], and the value range of the fourth value is (y%, b%] or (y%, b%).
  • a is greater than or equal to 0 and less than y
  • b is greater than y and less than or equal to 100.
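The percentage-based rule can be sketched similarly; the function name and the default y = 60 follow the example above but are otherwise illustrative.

```python
def wearing_state_by_percentage(sent_db, received_db, y=60.0):
    """Separate mic/speaker: express the received amplitude as a percentage of
    the sent amplitude; at or below y% the probe was blocked by the head (worn),
    above y% the path was clear (not worn)."""
    percentage = received_db / sent_db * 100.0
    return "worn" if percentage <= y else "unworn"
```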
  • the electronic device 100 is in the worn state, and the electronic device 100 works in the first mode.
  • the electronic device 100 determines that the electronic device 100 is in a worn state based on the amplitude of the received detection signal in steps S402 to S404.
  • The electronic device 100 operates in the first mode. Although the first mode consumes more power than the second mode, the electronic device 100 in the first mode can execute the user's instructions more quickly and efficiently, making it more convenient for the user.
  • After the electronic device 100 executes step S405, it may continue to execute step S402.
  • Optionally, after the electronic device 100 performs step S405, it may perform step S402 again after a preset idle time (for example, 20 ms). In this way, the energy consumed by the electronic device 100 in transmitting/receiving ultrasonic waves can be reduced.
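The idle-time polling described above can be sketched as follows, assuming the separate-component rule (amplitude above the first threshold means unworn); the hardware hooks `read_amplitude_db` and `apply_mode` are hypothetical stand-ins for the device's actual interfaces.

```python
import time

IDLE_SECONDS = 0.020        # the 20 ms preset idle time from the example
FIRST_THRESHOLD_DB = 60.0

def detection_loop(read_amplitude_db, apply_mode, cycles, sleep=time.sleep):
    """Re-run the probe once per idle period instead of continuously,
    cutting the energy spent transmitting/receiving ultrasonic waves."""
    history = []
    for _ in range(cycles):
        worn = read_amplitude_db() <= FIRST_THRESHOLD_DB  # separate-component rule
        apply_mode("first" if worn else "second")
        history.append(worn)
        sleep(IDLE_SECONDS)
    return history

# Example with stubbed hardware: two unworn readings, then a worn one.
readings = iter([100.0, 80.0, 50.0])
modes = []
history = detection_loop(lambda: next(readings), modes.append, 3, sleep=lambda s: None)
```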
  • Since the speaker of the electronic device 100 can play audible sound wave signals in the frequency range of 20 Hz to 20000 Hz, the user can hear audible sound wave signals in this frequency range.
  • The electronic device 100 may periodically send the detection signal and the audible sound wave signal in turn. Specifically, the electronic device 100 can send a detection signal in the first playback cycle and an audible sound wave signal in the second playback cycle, then send a detection signal in the third playback cycle and an audible sound wave signal in the fourth playback cycle, and so on. The durations of the multiple playback periods may be the same or different.
  • For example, the electronic device 100 may divide each second into 30 intervals of 33 ms while transmitting the ultrasonic signal. Within each 33 ms interval, the electronic device 100 can play 23 ms of audible sound wave signal and then play 10 ms of detection signal. In this way, due to the phenomenon of auditory persistence, the user perceives an audible sound wave signal played at intervals as being played continuously. It should be noted that this division is only an example provided by the embodiments of the present application, and should not limit the transmission times of the detection signal and the audible sound wave signal.
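The 33 ms frame division can be sketched as follows; the 48 kHz sampling rate and the function name are assumptions, not from the source.

```python
import numpy as np

FS = 48_000                      # assumed sampling rate, Hz
AUDIBLE_MS, PROBE_MS = 23, 10    # 33 ms frame: 23 ms program audio + 10 ms probe

def build_frame(audible_chunk, probe_freq=20_000.0):
    """One 33 ms playback frame: program audio first, then the ultrasonic probe.
    At 30 frames per second the gaps are short enough that, by auditory
    persistence, the program audio is perceived as continuous."""
    n_audible = FS * AUDIBLE_MS // 1000   # 1104 samples at 48 kHz
    n_probe = FS * PROBE_MS // 1000       # 480 samples at 48 kHz
    t = np.arange(n_probe) / FS
    probe = np.sin(2 * np.pi * probe_freq * t)
    return np.concatenate([audible_chunk[:n_audible], probe])

frame = build_frame(np.zeros(FS * AUDIBLE_MS // 1000))  # silent program audio
```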
  • the electronic device 100 may be provided with X speakers, where X is an integer greater than or equal to 2.
  • The electronic device 100 may send detection signals for wearing detection through Y of the X speakers every preset idle time (for example, 20 ms), where Y is greater than 0 and less than X.
  • All of the X speakers of the electronic device 100 except the speakers that send the detection signal can continue to send audible sound wave signals. It should be noted that, within the preset idle time, the electronic device 100 can send audible sound wave signals through all speakers.
  • the Y speakers may send detection signals in the first time period, send audible sound wave signals in the second time period, and so on.
  • the electronic device 100 is in the unworn state, and the electronic device 100 works in the second mode.
  • The electronic device 100 determines, based on the amplitude of the received detection signal in steps S402 to S404, that the electronic device 100 is in the unworn state, and the electronic device 100 works in the second mode. Compared with the first mode, the electronic device 100 in the second mode can stop running background programs (including refreshing and downloading by background programs, etc.), pause audio playback or reduce the playback volume, lower the display brightness, and so on, to reduce the power consumption of the electronic device 100.
  • After the electronic device 100 executes step S406, it may continue to execute step S402.
  • Optionally, after the electronic device 100 performs step S406, it may perform step S402 again after a preset idle time (for example, 20 ms). In this way, the energy consumed by the electronic device 100 in transmitting/receiving ultrasonic waves can be reduced.
  • the electronic device 100 may include an ultrasonic transmitting sensor and an ultrasonic receiving sensor.
  • the electronic device 100 may transmit an ultrasonic signal through the ultrasonic transmitting sensor and receive the ultrasonic signal through an ultrasonic receiving sensor.
  • the electronic device 100 determines the result of the wearing detection based on the ultrasonic signal.
  • The positions of the microphone and the speaker are not limited to those shown in FIG. 3. As long as the microphone and the speaker of the electronic device 100 are on different components, and the amplitude of the detection signal received when the electronic device 100 is in the worn state is smaller than the amplitude of the detection signal received when the electronic device 100 is in the unworn state, the electronic device 100 can determine the wearing state of the electronic device 100 through the wearing detection method shown in FIG. 4.
  • the location of the microphone or speaker of the electronic device 100 may be located on different components such as the left temple, the right temple, the left eye frame, the right eye frame, the nose pad, etc.
  • the microphone of the electronic device 100 is located on the nose pad, and the speaker is located on the temple of the glasses. It can be understood that the position shown in (a) in FIG. 9 is not limited, and the microphone and speaker of the electronic device can be located in other positions.
  • For example, the speaker of the electronic device 100 may be located on the nose pads and the microphone on the temples; this is not limited in the embodiments of the present application.
  • the number of speakers of the electronic device 100 may be more than one.
  • the microphone of the electronic device 100 is located on the nose pad, the speaker A is located on the left temple, and the speaker B is located on the right temple. It can be understood that, without being limited to the components shown in (b) in FIG. 9 , the microphone and speaker of the electronic device may be located on other components.
  • For example, the speaker A of the electronic device 100 may be located on the nose pad, the microphone on the left temple, and the speaker B on the right temple.
  • The electronic device 100 can determine whether the user is wearing the electronic device 100 based on the detection signals sent by the multiple speakers. If the amplitude of at least one of the detection signals sent by the plurality of speakers and received by the electronic device 100 is greater than the first threshold, it can be determined that the user is not wearing the electronic device 100. That is to say, only when the amplitudes of the detection signals sent by all speakers and received by the electronic device 100 are less than or equal to the first threshold can the electronic device 100 determine that the user is wearing the electronic device 100. In this way, even when some of the speakers of the electronic device 100 are blocked by an accidental touch, the electronic device 100 can still determine whether the user is wearing the electronic device 100.
  • the amplitude of at least one detection signal sent by the plurality of speakers received by the electronic device 100 is greater than or equal to the first threshold, it may be determined that the user is not wearing the electronic device 100 . If the amplitude of the detection signals sent by all speakers received by the electronic device 100 is less than the first threshold, it can be determined that the user is wearing the electronic device 100 .
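The multi-speaker decision above can be sketched as a conjunction over the per-speaker amplitudes; the function name and the 60 dB default threshold are illustrative.

```python
def worn_multi_speaker(received_dbs, threshold_db=60.0):
    """Separate mic/speaker layout with several speakers: the device counts as
    worn only when EVERY per-speaker probe arrives attenuated; a single loud
    probe is enough to conclude 'not worn', which keeps the result robust when
    one speaker is blocked by an accidental touch."""
    return all(db <= threshold_db for db in received_dbs)
```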
  • the frequency of the ultrasonic waves sent by each speaker is different.
  • the electronic device 100 can control the phases of the ultrasonic waves sent by the multiple speakers to form an ultrasonic wave beam, so that the direction of the resulting ultrasonic wave beam is toward the microphone.
  • the energy of the beam can be concentrated in the direction of the microphone, and more accurate wearing detection results can be obtained through the amplitude of the ultrasonic waves received by the microphone.
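The phase (delay) control for steering the ultrasonic beam toward the microphone can be sketched geometrically; the coordinates, the function name, and the free-field propagation assumption are illustrative, not from the source.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees Celsius

def steering_delays(speaker_positions, mic_position):
    """Per-speaker emission delays (seconds) chosen so all probe wavefronts
    arrive at the microphone in phase, concentrating the beam's energy toward
    it. Positions are (x, y) coordinates in metres."""
    dists = [math.dist(p, mic_position) for p in speaker_positions]
    farthest = max(dists)
    # The farthest speaker fires first (zero delay); nearer speakers wait.
    return [(farthest - d) / SPEED_OF_SOUND for d in dists]

# Two speakers on one temple, microphone 20 cm away along the same axis.
delays = steering_delays([(0.0, 0.0), (0.10, 0.0)], (0.20, 0.0))
```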
  • the number of microphones of the electronic device 100 may be more than one.
  • The microphone A of the electronic device 100 is located on the nose pad, the microphone B is located on the right temple, and the speaker is located on the left temple. It can be understood that, without being limited to the position shown in (c) in FIG. 9, the microphone and the speaker of the electronic device may be located at other positions.
  • For example, the speaker of the electronic device 100 may be located on the nose pad, the microphone A on the left temple, and the microphone B on the right temple.
  • The electronic device 100 can determine whether the user is wearing the electronic device 100 based on the detection signals received by the multiple microphones. If the amplitude of at least one of the detection signals received by the multiple microphones of the electronic device 100 is greater than the first threshold, it can be determined that the user is not wearing the electronic device 100. That is to say, only when the amplitudes of the detection signals received by all microphones of the electronic device 100 are less than or equal to the first threshold can the electronic device 100 determine that the user is wearing the electronic device 100. In this way, even when some of the microphones of the electronic device 100 are blocked by an accidental touch, the electronic device 100 can still determine whether the user is wearing the electronic device 100.
  • the amplitude of at least one detection signal among the detection signals received by the multiple microphones of the electronic device 100 is greater than or equal to the first threshold, it may be determined that the user is not wearing the electronic device 100 . If the amplitude of the detection signals received by all microphones of the electronic device 100 is less than the first threshold, it can be determined that the user is wearing the electronic device 100 .
  • the number of microphones and speakers of the electronic device 100 may be more than one.
  • the microphone A of the electronic device 100 is located on the nose pad
  • the microphone B is located on the right temple
  • the speakers A and B are located on the left temple.
  • the position shown in (d) in FIG. 9 is not limited, and the microphone and speaker of the electronic device may be located at other positions.
  • microphone A and microphone B of the electronic device 100 are located on the nose pad
  • the speaker A is located on the left temple
  • the speaker B is located on the right temple of the spectacles, which is not limited in the embodiment of the present application.
  • The microphone and the speaker of the electronic device 100 are located on the same component of the glasses body of the electronic device 100.
  • the speaker of the electronic device 100 can send ultrasonic waves, and the microphone of the electronic device 100 can receive the ultrasonic waves sent by the speaker.
  • the electronic device 100 may determine the wearing state of the electronic device 100 based on the amplitude of the transmitted ultrasonic wave and the amplitude of the received ultrasonic wave.
  • the microphone and the speaker of the electronic device 100 may be located on the same side of the temple 102 shown in FIG. 1 (for example, the right temple).
  • When the microphone and the speaker of the electronic device 100 are on the same component and the device is not worn, the ultrasonic waves sent by the speaker of the electronic device 100 are not blocked and mostly propagate outward in all directions, so the microphone of the electronic device 100 can receive only a small part of the ultrasonic waves sent by the speaker. At this time, the amplitude of the ultrasonic waves received by the microphone can be A3.
  • the electronic device 100 is in the wearing state, as shown in (b) of FIG.
  • the microphone and the speaker of the electronic device 100 are in the same component.
  • The ultrasonic waves sent by the speaker of the electronic device 100 are blocked by the user's head, and most of the ultrasonic waves are reflected to the microphone, so the microphone of the electronic device 100 can receive most of the ultrasonic waves sent by the speaker.
  • the amplitude of the ultrasonic waves received by the microphone can be A4.
  • A3 is smaller than A4. It should be noted that the arrangement is not limited to the right temple: the microphone and the speaker can also be located together on the left temple, the nose pad, the spectacle frame, etc. This is not limited in the embodiments of the present application.
  • That is, the amplitude of the ultrasonic wave sent by the speaker of the electronic device 100 and received by the microphone when the electronic device 100 is in the worn state is greater than the amplitude of the ultrasonic wave received by the microphone when the electronic device 100 is in the unworn state. In this way, the electronic device 100 can determine whether the user is wearing the electronic device 100 based on the amplitude of the ultrasonic waves received in the worn state and the unworn state.
  • the electronic device 100 may determine that the electronic device 100 is in the worn state when it is determined that the amplitude of the ultrasonic wave received by the microphone is greater than the first threshold. When it is determined that the amplitude of the ultrasonic wave received by the microphone is less than or equal to the first threshold, it is determined that the electronic device 100 is in an unworn state. Alternatively, the electronic device 100 may determine that the electronic device 100 is in the worn state when it is determined that the amplitude of the ultrasonic wave received by the microphone is greater than or equal to the first threshold. When it is determined that the amplitude of the ultrasonic wave received by the microphone is less than the first threshold, it is determined that the electronic device 100 is in an unworn state.
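For the same-component layout the decision inverts relative to the separate-component case; a sketch under the same assumed 60 dB threshold (the function name is illustrative):

```python
def worn_same_component(received_db, threshold_db=60.0):
    """Mic and speaker on the same component: the wearer's head REFLECTS the
    probe back toward the microphone, so a strong echo (amplitude above the
    threshold) now indicates the device is worn -- the opposite of the
    separate-component rule."""
    return received_db > threshold_db
```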
  • the wearing detection method includes the following steps:
  • step S1102 can be performed.
  • the speaker of the electronic device 100 sends ultrasonic waves.
  • the speaker of the electronic device 100 may transmit ultrasonic signals.
  • the ultrasonic signal sent by the electronic device 100 may be called a detection signal, a target ultrasonic signal, and so on.
  • the microphone of the electronic device 100 receives ultrasonic waves.
  • nearby sound wave signals may be received through the microphone, and the nearby sound wave signals include the detection signal sent by the speaker.
  • For steps S1101 to S1103, please refer to the embodiment shown in FIG. 4; details are not described again here.
  • the electronic device 100 determines whether the amplitude of the received ultrasonic wave exceeds the first threshold.
  • the electronic device 100 can perform Fourier transform processing on the received detection signal to obtain the amplitude value of the received detection signal.
  • the electronic device 100 may perform step S1105.
  • the electronic device 100 may perform step S1106.
  • the value of the first threshold may be a product of the amplitude value of the sent detection signal and the first coefficient.
  • the first coefficient may be a fractional value greater than 0 and less than or equal to 1. In some embodiments, the first coefficient may be any percentage between 50% and 80%.
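The construction of the first threshold as a product can be sketched as follows; the function name and the default coefficient of 0.6 (inside the 50%-80% span mentioned above) are illustrative.

```python
def first_threshold(sent_amplitude_db, coefficient=0.6):
    """First threshold = sent amplitude * first coefficient, where the
    coefficient is a fraction in (0, 1]; the text suggests any percentage
    between 50% and 80%."""
    if not 0.0 < coefficient <= 1.0:
        raise ValueError("coefficient must be in (0, 1]")
    return sent_amplitude_db * coefficient
```

With a sent amplitude of 100 dB and a coefficient of 0.6 this yields the 60 dB threshold used throughout the examples.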
  • the electronic device 100 may determine that the user has worn the electronic device 100 and may perform step S1105.
  • the electronic device 100 may determine that the user is not wearing the electronic device 100 and perform step S1106.
  • the electronic device 100 may determine the wearing state of the electronic device 100 based on the difference between the amplitude of the sent detection signal and the amplitude of the received detection signal. Specifically, when the electronic device 100 determines that the difference between the amplitude of the sent detection signal and the amplitude of the received detection signal is in the first range, it is determined that the electronic device 100 is in the worn state. When the electronic device 100 determines that the difference between the amplitude of the sent detection signal and the received detection signal is not in the first range, it is determined that the electronic device 100 is in the unworn state.
  • the value in the first range may be a preset value, or may be obtained based on the amplitude of the sent detection signal.
  • the first range may be [0, A*x], where x is a coefficient greater than 0 and less than 1.
  • x may be 0.4.
  • the first range is [0,40]. If the amplitude value of the received detection signal is between 60dB and 100dB, the amplitude difference between the sent detection signal and the received detection signal is in the first range, and the electronic device 100 is in a worn state. If the amplitude value of the received detection signal is between 0dB and 59dB, the amplitude difference between the sent detection signal and the received detection signal is not within the first range, and the electronic device 100 is in an unworn state.
  • the first range can be expressed as [0, A*x) or (0, A*x), where x is a coefficient greater than 0 and less than 1.
  • the first range may be expressed as [k, A*x) or (k, A*x), where k is greater than or equal to 0 and less than A*x.
  • the electronic device 100 may determine the wearing state of the electronic device 100 based on the difference between the amplitude of the sent detection signal and the amplitude of the received detection signal. Specifically, when the difference between the amplitude of the sent detection signal and the amplitude of the received detection signal is the first value, it is determined that the electronic device 100 is in the non-wearing state. When the difference between the amplitude of the sent detection signal and the amplitude of the received detection signal is the second value, it is determined that the electronic device 100 is in the worn state.
  • the first value range is [0, A*x] or (0, A*x]
  • the second value range is (A*x, 100] or (A*x, 100).
  • A is the amplitude value of the sent detection signal
  • x is a coefficient greater than 0 and less than 1.
  • the value range of the first value can be 0dB-40dB
  • the second value may range from 40dB to 100dB.
  • the first value range is [k, A*x] or (k, A*x]
  • the second value range is (A*x, p] or (A*x, p).
  • k is greater than or equal to 0 and less than A*x
  • p is greater than A*x and less than or equal to 100.
  • The electronic device 100 may determine the wearing state of the electronic device 100 based on the amplitude of the received detection signal as a percentage of the amplitude of the sent detection signal. Specifically, when the percentage of the amplitude of the received detection signal relative to the amplitude of the sent detection signal is the third value, it is determined that the electronic device 100 is in the non-wearing state. When the percentage of the amplitude of the received detection signal relative to the amplitude of the sent detection signal is the fourth value, it is determined that the electronic device 100 is in the worn state.
  • the third value has a value range of [0%, y%] or (0%, y%], and the fourth value has a value range of (y%, 100%] or (y%, 100%).
  • y is greater than 0 and less than 100. For example, when y is 60, the third value can range from 0% to 60%, and the fourth value can range from 60% to 100%.
  • the value range of the third value is [a%, y%] or (a%, y%], and the value range of the fourth value is (y%, b%] or (y%, b%).
  • a is greater than or equal to 0 and less than y
  • b is greater than y and less than or equal to 100.
  • the electronic device 100 is in the worn state, and the electronic device 100 works in the first mode.
  • the electronic device 100 determines that the electronic device 100 is in a worn state based on the amplitude of the received detection signal in steps S1102 to S1104.
  • the electronic device 100 operates in the first mode. Compared with the second mode, the electronic device 100 in the first mode can execute the user's instructions more quickly and efficiently, which is convenient for the user to use.
  • After the electronic device 100 executes step S1105, it may continue to execute step S1102.
  • Optionally, after the electronic device 100 performs step S1105, it may perform step S1102 again after a preset idle time (for example, 20 ms). In this way, the energy consumed by the electronic device 100 in transmitting/receiving ultrasonic waves can be reduced.
  • the electronic device 100 is in the unworn state, and the electronic device 100 works in the second mode.
  • The electronic device 100 determines that the electronic device 100 is in an unworn state based on the amplitude of the received detection signal in steps S1102 to S1104, and the electronic device 100 operates in the second mode. Compared with the first mode, the electronic device 100 in the second mode can stop running background programs (including refreshing and downloading by background programs, etc.), pause audio playback or reduce the playback volume, lower the display brightness, and so on, to reduce the power consumption of the electronic device 100.
  • After the electronic device 100 executes step S1106, it may continue to execute step S1102.
  • the electronic device 100 may perform step S1102 again after a preset idle time (for example, 20 ms). In this way, the energy consumption of the electronic device 100 for transmitting/receiving ultrasonic waves can be reduced.
  • the electronic device 100 may further confirm the wearing status of the electronic device 100 based on other sensors (eg, proximity sensor, IMU, etc.) after determining the wearing status of the electronic device 100 based on the detection signal.
  • the electronic device 100 may operate in the first mode when it is determined that the user is wearing the electronic device 100 based on the detection signal and other sensors.
  • the electronic device 100 may operate in the second mode when it is determined that the user is not wearing the electronic device 100 based on the detection signal or other sensors.
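The sensor-fusion rule described above (all sensors must agree for "worn"; any single sensor suffices for "not worn") can be sketched as a simple conjunction; the function and parameter names are illustrative.

```python
def fused_worn(ultrasonic_worn, proximity_worn, imu_worn):
    """Confirm the probe's verdict with other sensors: the device enters the
    first mode only when the detection signal AND the other sensors all report
    'worn'; any single 'not worn' vote selects the low-power second mode."""
    return ultrasonic_worn and proximity_worn and imu_worn
```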
  • the electronic device 100 may include an ultrasonic transceiver sensor, and the electronic device 100 may send or receive ultrasonic signals through the ultrasonic transceiver sensor. The electronic device 100 then determines the result of the wearing detection based on the ultrasonic signal.
  • the electronic device 100 may disconnect the communication connection with the electronic device 200 in order to reduce power consumption.
  • the working mode of the electronic device 100 is the first mode
  • the electronic device 100 can establish a communication connection with the electronic device 200. Therefore, when the working mode of the electronic device 100 is switched from the second mode to the first mode, a communication connection can be established with the electronic device 200 .
  • When the working mode of the electronic device 100 switches from the first mode to the second mode, if a communication connection has been established between the electronic device 100 and the electronic device 200, the electronic device 100 can disconnect the communication connection established with the electronic device 200.
  • the electronic device 200 may be a tablet computer, a mobile phone, a desktop computer, a laptop computer, a handheld computer, a notebook computer, a vehicle-mounted device, a smart home device and/or a smart city device, etc.
  • When the electronic device 100 is in an unworn state, the second mode can be turned on, and when the electronic device 100 detects through a detection signal that the electronic device 100 is in a worn state, the second mode can be switched to the first mode.
  • the power consumption of the electronic device 100 in the second mode is less than the power consumption of the electronic device 100 in the first mode. In this way, the electronic device 100 can work in the second mode when it is not worn, which reduces the power consumption of the electronic device 100.
  • the electronic device 100 can work in the first mode when it is worn, making it easier for the user to use.
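The mode and connection handling described above can be sketched as a small state machine; the class, attribute, and method names are illustrative, not from the source.

```python
class HeadMountedDevice:
    """Sketch of the mode switching: unworn -> second (low-power) mode with the
    companion-device link dropped; worn -> first mode with the link restored."""

    def __init__(self):
        self.mode = "second"      # assume the device starts unworn
        self.connected = False

    def on_wearing_state(self, worn):
        if worn and self.mode == "second":
            self.mode = "first"
            self.connected = True     # re-establish the companion link
        elif not worn and self.mode == "first":
            self.mode = "second"
            self.connected = False    # drop the link to save power

device = HeadMountedDevice()
device.on_wearing_state(True)
state_after_wear = (device.mode, device.connected)
device.on_wearing_state(False)
state_after_removal = (device.mode, device.connected)
```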
  • The wearing detection is not limited to ultrasonic waves; the electronic device 100 can also implement the above wearing detection method through other signals.
  • other signals may include but are not limited to infrasound waves, infrared rays, visible light, etc.
  • The positions of the microphone and the speaker are not limited to those shown in FIG. 10. As long as the microphone and the speaker of the electronic device 100 are on the same component, and the amplitude of the detection signal received when the electronic device 100 is in the worn state is greater than the amplitude of the detection signal received when the electronic device 100 is in the unworn state, the electronic device 100 can determine the wearing state of the electronic device 100 through the wearing detection method shown in FIG. 11.
  • The microphone and the speaker of the electronic device 100 may be located together on the left temple, the right temple, the left eye frame, the right eye frame, the nose pad, or another single component. As shown in (a) of FIG. 12, the microphone and speaker of the electronic device 100 are located on the nose pad. It can be understood that, without being limited to the position shown in (a) in FIG. 12, the microphone and speaker of the electronic device may be located at other positions, such as the right temple; this is not limited in the embodiments of the present application.
  • the number of speakers of the electronic device 100 may be more than one.
  • the microphone, speaker A, and speaker B of the electronic device 100 are all located on the right temple. It can be understood that the components are not limited to the components shown in (b) of FIG. 12 , and the microphone and speaker of the electronic device may be located on other components at the same time, such as the left temple, nose pads, etc.
  • The electronic device 100 can determine whether the user is wearing the electronic device 100 based on the detection signals sent by the multiple speakers. If the amplitude of at least one of the detection signals sent by the plurality of speakers and received by the electronic device 100 is less than the first threshold, it can be determined that the user is not wearing the electronic device 100. That is to say, only when the amplitudes of the detection signals sent by all speakers and received by the electronic device 100 are greater than or equal to the first threshold can the electronic device 100 determine that the user is wearing the electronic device 100. In this way, even when some of the speakers of the electronic device 100 are blocked by an accidental touch, the electronic device 100 can still determine whether the user is wearing the electronic device 100.
  • alternatively, if the amplitude of at least one detection signal sent by the multiple speakers and received by the electronic device 100 is less than or equal to the first threshold, it may be determined that the user is not wearing the electronic device 100 ; if the amplitudes of the detection signals sent by all speakers are greater than the first threshold, it can be determined that the user is wearing the electronic device 100 .
  • the frequency of the ultrasonic waves sent by each speaker is different.
  • the number of microphones of the electronic device 100 may be more than one.
  • microphone A, microphone B, and the speaker of the electronic device 100 are all located on the right temple. It can be understood that, without being limited to the positions shown in (c) of FIG. 12 , the microphones and speaker of the electronic device may be located at other positions, such as the left temple.
  • the electronic device 100 can determine whether the user is wearing the electronic device 100 based on the detection signals received by the multiple microphones. If the amplitude of at least one detection signal among those received by the multiple microphones of the electronic device 100 is less than the first threshold, it may be determined that the user is not wearing the electronic device 100 . That is to say, only when the amplitudes of the detection signals received by all microphones of the electronic device 100 are greater than or equal to the first threshold can the electronic device 100 determine that the user is wearing it. In this way, even when some of the microphones of the electronic device 100 are blocked by an accidental touch, the electronic device 100 can still determine whether the user is wearing it.
  • alternatively, if the amplitude of at least one detection signal received by the multiple microphones is less than or equal to the first threshold, it may be determined that the user is not wearing the electronic device 100 ; if the amplitudes of the detection signals received by all microphones of the electronic device 100 are greater than the first threshold, it can be determined that the user is wearing the electronic device 100 .
  • the number of microphones and speakers of the electronic device 100 may be more than one.
  • microphone A, microphone B, speaker A and speaker B of the electronic device 100 are all located on the right temple. It can be understood that the position shown in (d) in FIG. 12 is not limited.
  • the microphone and speaker of the electronic device may be located at other positions, such as the left temple of the glasses, etc. This is not limited in the embodiment of the present application.
  • the microphone B of the electronic device 100 can perform wearing detection based on the received detection signal sent by speaker A and the received detection signal sent by speaker B.
  • similarly, the microphone A of the electronic device 100 can perform wearing detection based on the received detection signal sent by speaker A and the received detection signal sent by speaker B. Performing wearing detection separately on each detection signal makes the wearing detection result more accurate.
  • the electronic device 100 includes a microphone and a speaker located in the same component, or a microphone and a speaker located in different components.
  • the electronic device 100 can jointly determine the wearing status of the electronic device 100 in combination with the wearing detection methods provided in FIG. 4 and FIG. 9 . In this way, the electronic device 100 can more accurately determine the wearing state of the electronic device 100 .
  • the number of speakers of the electronic device 100 may be more than one. As shown in (a) of FIG. 13 , the microphone of the electronic device 100 and the speaker B are located on the right temple, and the speaker A is located on the left temple.
  • the electronic device 100 can determine whether the user is wearing the electronic device 100 based on the detection signals sent by the multiple speakers. The electronic device 100 can determine its wearing state through the detection method shown in FIG. 11 based on the detection signal received by the microphone and sent by a speaker (for example, speaker B) located on the same component, and through the detection method shown in FIG. 4 based on the detection signal received by the microphone and sent by a speaker (for example, speaker A) located on a different component. It should be noted that only when the electronic device 100 is determined to be in the worn state based on the detection signals sent by all speakers can it be determined that the electronic device 100 is in the worn state.
  • the frequency of the ultrasonic waves sent by each speaker is different.
  • the number of microphones of the electronic device 100 may be more than one. As shown in (b) of FIG. 13 , the microphone B and the speaker of the electronic device 100 are located on the right temple, and the microphone A is located on the left temple.
  • the electronic device 100 can determine whether the user is wearing the electronic device 100 based on the detection signals received by the multiple microphones. The electronic device 100 can determine its wearing state through the detection method shown in FIG. 11 based on the detection signal received by a microphone (for example, microphone B) and sent by the speaker located on the same component as that microphone, and through the detection method shown in FIG. 4 based on the detection signal received by a microphone (for example, microphone A) and sent by the speaker located on a different component. It should be noted that only when it is determined, based on the detection signals received by all microphones, that the electronic device 100 is in the worn state can it be determined that the electronic device 100 is in the worn state.
  • the number of microphones and speakers of the electronic device 100 may be more than one.
  • the microphone A and the speaker A of the electronic device 100 are both located on the left temple, and the microphone B and the speaker B are both located on the right temple.
  • the microphone B of the electronic device 100 can perform wearing detection based on the received detection signal sent by speaker A and the received detection signal sent by speaker B.
  • similarly, the microphone A of the electronic device 100 can perform wearing detection based on the received detection signal sent by speaker A and the received detection signal sent by speaker B. Performing wearing detection separately on each detection signal makes the wearing detection result more accurate.
  • the electronic device 100 can determine whether the user is wearing the electronic device 100 based on the detection signals received by the multiple microphones. The electronic device 100 can determine its wearing state through the detection method shown in FIG. 11 based on the detection signal received by a microphone (for example, microphone B) and sent by the speaker (for example, speaker B) located on the same component as that microphone, and, at the same time, through the detection method shown in FIG. 4 based on the detection signal received by that microphone and sent by a speaker (for example, speaker A) located on a different component.
  • likewise, the electronic device 100 can determine its wearing state through the detection method shown in FIG. 11 based on the detection signal received by a microphone (for example, microphone A) and sent by the speaker (for example, speaker A) located on the same component as that microphone, and, at the same time, through the detection method shown in FIG. 4 based on the detection signal received by that microphone and sent by a speaker (for example, speaker B) located on a different component. It should be noted that only when the electronic device 100 is determined to be in the worn state based on the detection signals received by all microphones can it be determined that the electronic device 100 is in the worn state.
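The joint per-microphone decision described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: each microphone applies the reflection criterion (FIG. 11: worn → strong echo from the co-located speaker) and the occlusion criterion (FIG. 4: worn → strong attenuation of the opposite-side speaker's signal), and the device is worn only if every microphone's joint check passes. All threshold values and names are assumptions.

```python
# Assumed illustrative thresholds (dB), not values from the patent.
SAME_SIDE_MIN_DB = 40.0   # worn if the co-located speaker's echo is at least this
CROSS_SIDE_MAX_DB = 40.0  # worn if the through-head signal falls below this


def mic_says_worn(same_side_db, cross_side_db):
    """Joint check at one microphone: strong echo AND strong occlusion."""
    return same_side_db >= SAME_SIDE_MIN_DB and cross_side_db < CROSS_SIDE_MAX_DB


def device_worn(per_mic_readings):
    """per_mic_readings: (same_side_db, cross_side_db) pair per microphone.
    The device counts as worn only if every microphone agrees."""
    return all(mic_says_worn(s, c) for s, c in per_mic_readings)


print(device_worn([(52.0, 18.0), (49.5, 22.1)]))  # both mics agree -> True
```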
  • FIG. 13 is not limited, and the microphone and speaker of the electronic device may be located in other components, which is not limited in the embodiment of the present application.
  • the microphone and speakers of electronic device 100 are located on different parts of the helmet.
  • the electronic device 100 can detect the wearing status of the helmet through the wearing detection method shown in FIG. 4 .
  • the microphone and speakers of electronic device 100 are located on the same part of the helmet.
  • the electronic device 100 can detect the wearing status of the helmet through the wearing detection method shown in FIG. 11 .
  • a helmet can be divided into five parts: front, rear, top, left, and right.
  • the front part can be a partial area in contact with the user's forehead
  • the rear part can be a partial area in contact with the back of the user's head
  • the top can be a partial area in contact with the top of the user's head
  • the left part can be a partial area in contact with the user's left ear, and the right part can be a partial area in contact with the user's right ear.
  • the above division of the helmet into parts is only an example, and this application does not limit it.
  • the display device of the electronic device 100 is a display screen or a projection device.
  • the electronic device 100 may be an AR device.
  • the electronic device 100 may send a detection signal through a speaker and receive the detection signal sent by the speaker through a microphone.
  • if the electronic device 100 determines, based on the received detection signal, that it is in the worn state, it can perform the operations corresponding to the worn state, for example, setting the working mode to the first mode.
  • if the electronic device 100 determines, based on the received detection signal, that it is in the unworn state, it can perform the operations corresponding to the unworn state, for example, setting the working mode to the second mode. In this way, the wearing detection method can reduce the power consumption of the electronic device 100 .
  • the steps for the electronic device 100 to perform wearing detection can refer to the embodiment shown in FIG. 4 .
  • the steps for the electronic device 100 to perform wearing detection may refer to the embodiment shown in FIG. 11 .
  • when the working mode of the electronic device 100 is the second mode, the electronic device 100 may pause the playback of video images in order to reduce power consumption.
  • when the working mode of the electronic device 100 is the first mode, the electronic device 100 can play video images so that the user can watch the video. Therefore, when the working mode of the electronic device 100 switches from the second mode to the first mode, playback of the paused video file can resume.
  • the electronic device 100 may pause playing the video file.
  • the electronic device 100 is in a worn state, and the electronic device 100 plays a video file.
  • the electronic device 100 may display the video page 1201 on the left display device and the video page 1202 on the right display device.
  • Video pages 1201 and 1202 include video images.
  • the electronic device 100 can pause the playback of the video file. As shown in (b) of FIG. 14 , the electronic device 100 is in the unworn state and pauses playing the video file. Here, the electronic device 100 can display the video pause icon 1211 on the video page 1201 on the left display device, and the video pause icon 1212 on the video page 1202 on the right display device. The video page 1201 and the video page 1202 include video images.
  • the video pause icon 1211 and the video pause icon 1212 may be used to prompt the user that the electronic device 100 has paused playing the video file. In this way, the electronic device 100 can pause the video file when switching from the worn state to the unworn state, without requiring the user to manually pause the video, thereby reducing user operations and making it easier for the user to use.
  • the video pause icon 1211 and the video pause icon 1212 as shown in (b) of FIG. 14 may be displayed.
  • the electronic device 100 determines that the electronic device 100 switches from the unworn state to the worn state based on the wearing detection method provided by the embodiment of the present application, the electronic device 100 can continue to play the video file.
  • the electronic device 100 can cancel the display of the video pause icon 1211 and the video pause icon 1212 and continue playing the video file, as shown in (a) of FIG. 14 . In this way, the electronic device 100 can continue playing the paused video file when switching from the unworn state to the worn state. The user does not need to manually resume the video, which reduces user operations and makes the device easier to use.
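The pause/resume behaviour described above amounts to a small state-transition handler. The sketch below is illustrative only; the `Player` class and its attribute names are hypothetical, not from the patent.

```python
# When the detected state flips from worn to unworn, playback is paused and
# a pause icon is shown (cf. icons 1211/1212 in FIG. 14); when it flips
# back to worn, the icon is hidden and playback resumes automatically.
class Player:
    def __init__(self):
        self.playing = True
        self.pause_icon_visible = False

    def on_wear_state_change(self, worn):
        if not worn and self.playing:
            self.playing = False
            self.pause_icon_visible = True
        elif worn and not self.playing:
            self.playing = True
            self.pause_icon_visible = False


p = Player()
p.on_wear_state_change(False)   # user takes the glasses off
print(p.playing, p.pause_icon_visible)  # False True
p.on_wear_state_change(True)    # user puts them back on
print(p.playing, p.pause_icon_visible)  # True False
```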
  • when the electronic device 100 includes a display screen or a projector and its working mode is the second mode, the electronic device 100 can turn off the screen in the second mode (also referred to as screen-off) in order to reduce power consumption.
  • when the working mode of the electronic device 100 is the first mode, the electronic device 100 can turn on the screen so that the user can view the displayed content. Therefore, when the working mode of the electronic device 100 switches from the second mode to the first mode, the screen can be turned on to display the application interface.
  • the electronic device 100 can pause the screen to save power consumption.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)

Abstract

This application discloses a wearing detection method and related apparatus. It provides a head-mounted device, including: a first speaker (107), a microphone (106), and a processor. The first speaker (107) is configured to transmit a first ultrasonic wave; the microphone (106) is configured to receive a second ultrasonic wave, the second ultrasonic wave being at least a part of the first ultrasonic wave received by the microphone. When the difference between the amplitude of the first ultrasonic wave and the amplitude of the second ultrasonic wave is a first value, the head-mounted device is configured to be in a first state; when the difference between the amplitude of the first ultrasonic wave and the amplitude of the second ultrasonic wave is a second value, the head-mounted device is configured to be in a second state, wherein the first value differs from the second value. In this way, the head-mounted device provided by the embodiments of this application can control its working mode more intelligently and automatically according to the application scenario, reducing its power consumption.

Description

Wearing detection method and related apparatus
This application claims priority to Chinese patent application No. 202210310203.5, entitled "Wearing detection method and related apparatus", filed with the China National Intellectual Property Administration on March 28, 2022, which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of terminal technologies, and in particular to a wearing detection method and related apparatus.
Background
As consumer spending upgrades, the market for head-mounted devices is gradually growing. The purpose of head-mounted devices is to explore new ways of human-computer interaction: smart devices worn on the human body provide consumers with dedicated, multifunctional, personalized, and more convenient services.
At present, whether or not the user is wearing a head-mounted device does not affect its working state. For example, when a head-mounted device is playing audio and the user takes it off, the device still continues playing the audio. This increases the device's energy consumption.
Summary
This application provides a wearing detection method and related apparatus. A head-mounted device can transmit ultrasonic waves through a speaker, receive ultrasonic waves through a microphone, and determine its wearing state based on the transmitted and received ultrasonic waves. Implementing the method provided by this application allows wearing detection to be performed with the head-mounted device's existing components, saving cost; and since no additional wearing-detection components are added, the weight of the device does not increase, reducing the pressure on the user's cervical spine. Because ultrasonic waves cannot be heard by the human ear and cause no harm to the human body, the wearing state of the head-mounted device can be detected imperceptibly, without the user noticing.
In a first aspect, an embodiment of this application provides a head-mounted device, including: a first speaker, a microphone, and a processor.
The first speaker is configured to transmit a first ultrasonic wave. The microphone is configured to receive a second ultrasonic wave, the second ultrasonic wave being at least a part of the first ultrasonic wave received by the microphone. When the difference between the amplitude of the first ultrasonic wave and the amplitude of the second ultrasonic wave is a first value, the head-mounted device is configured to be in a first state; when the difference between the amplitude of the first ultrasonic wave and the amplitude of the second ultrasonic wave is a second value, the head-mounted device is configured to be in a second state, wherein the first value differs from the second value.
In this way, the head-mounted device provided by the embodiments of this application can better control its working mode automatically and intelligently according to the application scenario.
In a possible implementation, the first state is a worn state, and the second state is an unworn state.
In a possible implementation, the power consumption of the head-mounted device in the first state is greater than the power consumption of the head-mounted device in the second state.
In this way, when the head-mounted device is configured to be in the second state, its energy consumption decreases, which saves power and extends battery life without affecting the user's normal use of the device. For example, when configured to be in the worn state, the head-mounted device displays an application interface; when configured to be in the unworn state, it can turn off the screen. Moreover, when it is configured to be in the worn state again, it can light up the screen and continue displaying the content shown before the screen was turned off.
In a possible implementation, the microphone is located on a first component and the first speaker on a second component, the first component and the second component being different, and the first value is greater than the second value.
In a possible implementation, the first value ranges from 40 dB to 100 dB, and/or the second value ranges from 0 dB to 40 dB.
In a possible implementation, the microphone and the first speaker are both located on the first component, and the first value is smaller than the second value.
In a possible implementation, the first value ranges from 0 dB to 40 dB, and/or the second value ranges from 40 dB to 100 dB.
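The value ranges above imply that the same attenuation figure maps to opposite states depending on whether the microphone and speaker share a component. The sketch below illustrates that mapping; the boundary value and function names are assumptions for demonstration, not values fixed by the patent.

```python
# Attenuation = transmitted amplitude minus received amplitude, in dB.
BOUNDARY_DB = 40.0  # assumed boundary between the two claimed value ranges


def classify(attenuation_db, same_component):
    if same_component:
        # reflection off the head keeps attenuation small (0-40 dB) when worn
        return "worn" if attenuation_db < BOUNDARY_DB else "unworn"
    # the head occludes the direct path, so attenuation is large (40-100 dB) when worn
    return "worn" if attenuation_db >= BOUNDARY_DB else "unworn"


print(classify(20.0, same_component=True))   # worn (strong echo)
print(classify(70.0, same_component=False))  # worn (occluded path)
```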
In a possible implementation, the microphone is further configured to receive, before the first speaker transmits the first ultrasonic wave, a third ultrasonic wave transmitted by a source other than the first speaker; the first ultrasonic wave is configured to be different from the third ultrasonic wave.
In this way, since the head-mounted device acquires the third ultrasonic wave present in the environment in advance and configures the first ultrasonic wave to be different from it, ultrasonic signals emitted by other nearby electronic devices will not affect the wearing detection function of the head-mounted device.
In a possible implementation, the first ultrasonic wave being configured to be different from the third ultrasonic wave includes: the first ultrasonic wave is configured to differ from the third ultrasonic wave in frequency and/or duty cycle.
In a possible implementation, the first ultrasonic wave differing from the third ultrasonic wave in frequency includes: the difference between the frequency of the first ultrasonic wave and the frequency of the third ultrasonic wave is greater than a first frequency difference. This avoids mistaking the third ultrasonic wave for the first ultrasonic wave and improves the accuracy of the detection result.
In a possible implementation, the head-mounted device further includes a second speaker configured to transmit an audible sound signal, the frequency of the audible sound signal differing from that of the first ultrasonic wave. In this way, the head-mounted device includes multiple speakers and can meet user needs (for example, listening to music or making calls) while detecting the wearing state without the user noticing.
In some embodiments, the frequency of the first ultrasonic wave is greater than 20000 Hz, and the frequency of the audible sound signal is greater than 0 and less than or equal to 20000 Hz.
In a possible implementation, the first speaker is specifically configured to transmit the first ultrasonic wave within a first time period, and the second speaker is specifically configured to transmit the audible sound signal within the first time period. In this way, the head-mounted device can transmit the first ultrasonic wave and the audible sound signal simultaneously.
In a possible implementation, the first speaker is specifically configured to transmit the first ultrasonic wave within a first time period and the audible sound signal within a second time period, the frequency of the audible sound signal differing from that of the first ultrasonic wave.
In a possible implementation, the first speaker is further configured to transmit the first ultrasonic wave within a third time period and the audible sound signal within a fourth time period, the second time period following the first, the third following the second, and the fourth following the third. In this way, the head-mounted device can alternate between transmitting the first ultrasonic wave and the audible sound signal during ultrasonic transmission. Owing to the persistence of hearing, the user perceives the intermittently played audible signal as continuous, so the audio playback function of the head-mounted device is not affected.
In a possible implementation, the first time period and the second time period alternate periodically; the first time period ranges from 5 ms to 15 ms and the second time period from 20 ms to 40 ms. For example, the first time period comprises the first 10 ms of every 33 ms during which the first speaker transmits the first ultrasonic wave, and the second time period comprises the remaining 23 ms; that is, the first speaker transmits the first ultrasonic wave for 10 ms, then the audible sound signal for 23 ms, then the first ultrasonic wave for another 10 ms, then the audible sound signal for another 23 ms, and so on. This ensures transmission of the first ultrasonic wave without affecting the continuity of the audible sound signal as perceived by the user.
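The 10 ms / 23 ms interleaving example above can be sketched as a simple frame schedule. The durations follow the example in the text; the function itself is illustrative and merely enumerates segment boundaries in milliseconds.

```python
# Within each 33 ms frame the speaker emits the ultrasonic probe for the
# first 10 ms and the audible signal for the remaining 23 ms.
ULTRASOUND_MS = 10
AUDIBLE_MS = 23
FRAME_MS = ULTRASOUND_MS + AUDIBLE_MS  # 33 ms per frame


def schedule(n_frames):
    """Return (kind, start_ms, end_ms) segments for n_frames frames."""
    segments = []
    for i in range(n_frames):
        t0 = i * FRAME_MS
        segments.append(("ultrasound", t0, t0 + ULTRASOUND_MS))
        segments.append(("audible", t0 + ULTRASOUND_MS, t0 + FRAME_MS))
    return segments


print(schedule(2))
```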
In a possible implementation, the first speaker is further configured to transmit, before transmitting the first ultrasonic wave, a prefix signal used to identify the first ultrasonic wave. This makes it easier for the head-mounted device to recognize the first ultrasonic wave based on the prefix signal.
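One plausible way to recognize such a prefix signal in the microphone stream is cross-correlation: slide the known prefix over the received samples and take the lag with the highest correlation. The patent does not specify this mechanism, so the sketch below is an assumption for illustration, in pure Python on toy sample values.

```python
def find_prefix(received, prefix):
    """Return the sample offset where the known prefix best matches
    the received signal (maximum dot-product correlation)."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(received) - len(prefix) + 1):
        window = received[lag:lag + len(prefix)]
        score = sum(r * p for r, p in zip(window, prefix))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag


prefix = [1.0, -1.0, 1.0, 1.0]
received = [0.0, 0.1, 1.0, -1.0, 1.0, 1.0, 0.2]
print(find_prefix(received, prefix))  # 2
```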
In a possible implementation, the device type of the head-mounted device includes any one of the following: smart glasses, over-ear headphones, augmented-reality (AR) glasses, virtual-reality (VR) glasses, mixed-reality (MR) glasses, or a smart helmet.
In a possible implementation, the head-mounted device is a pair of glasses, and the first component is the left temple and the second component is the right temple; or the first component is the right temple and the second component is the left temple; or the first component is the nose pad and the second component is the left temple; or the first component is the nose pad and the second component is the right temple.
In a possible implementation, the head-mounted device is a pair of AR glasses, and the processor is further configured to: play a first video before the head-mounted device is configured to be in the first state or the second state; continue playing the first video after the head-mounted device is configured to be in the first state; and pause the first video after the head-mounted device is configured to be in the second state.
In this way, the head-mounted device avoids continuing to play the first video while in the unworn state, reducing power consumption, and can continue playing the part the user has not yet watched when the device switches back to the worn state, improving user experience.
In a possible implementation, the processor is further configured to: play a first audio before the head-mounted device is configured to be in the first state or the second state; continue playing the first audio after the head-mounted device is configured to be in the first state; and pause the first audio after the head-mounted device is configured to be in the second state.
In this way, the head-mounted device avoids continuing to play the audio file while in the unworn state, reducing power consumption, and can continue playing the part the user has not yet heard when the device switches back to the worn state, improving user experience.
In a possible implementation, the head-mounted device further includes a proximity sensor, the proximity sensor including a capacitive sensor and an inertial measurement unit; the proximity sensor is configured to notify the first speaker to transmit the first ultrasonic wave after detecting an operation of the user approaching the head-mounted device.
In this way, the accuracy of the wearing detection result can be further improved.
In a second aspect, an embodiment of this application provides a wearing detection method applied to a head-mounted device that includes a microphone and a first speaker. The method includes: the head-mounted device transmits a first ultrasonic wave through the first speaker; the head-mounted device receives a second ultrasonic wave through the microphone, the second ultrasonic wave being at least a part of the first ultrasonic wave received by the microphone; when the difference between the amplitude of the first ultrasonic wave and the amplitude of the second ultrasonic wave is a first value, the head-mounted device is configured to be in a first state; when the difference between the amplitude of the first ultrasonic wave and the amplitude of the second ultrasonic wave is a second value, the head-mounted device is configured to be in a second state, wherein the first value differs from the second value.
In a possible implementation, the first state is a worn state, and the second state is an unworn state.
In a possible implementation, the power consumption of the head-mounted device in the first state is greater than the power consumption of the head-mounted device in the second state.
In a possible implementation, the microphone is located on a first component and the first speaker on a second component, the first component and the second component being different, and the first value is greater than the second value.
In a possible implementation, the first value ranges from 40 dB to 100 dB, and/or the second value ranges from 0 dB to 40 dB.
In a possible implementation, the microphone and the first speaker are both located on the first component, and the first value is smaller than the second value.
In a possible implementation, the first value ranges from 0 dB to 40 dB, and/or the second value ranges from 40 dB to 100 dB.
In a possible implementation, before the head-mounted device transmits the first ultrasonic wave through the first speaker, the method further includes: the head-mounted device receives, through the microphone, a third ultrasonic wave transmitted by a source other than the first speaker, the first ultrasonic wave being configured to be different from the third ultrasonic wave.
In a possible implementation, the first ultrasonic wave being configured to be different from the third ultrasonic wave includes: the first ultrasonic wave is configured to differ from the third ultrasonic wave in frequency and/or duty cycle.
In a possible implementation, the first ultrasonic wave differing from the third ultrasonic wave in frequency includes: the difference between the frequency of the first ultrasonic wave and the frequency of the third ultrasonic wave is greater than a first frequency difference.
In a possible implementation, the head-mounted device further includes a second speaker, and the method further includes: the head-mounted device transmits an audible sound signal through the second speaker, the frequency of the audible sound signal differing from that of the first ultrasonic wave.
In a possible implementation, the method further includes: the head-mounted device transmits the first ultrasonic wave through the first speaker within a first time period, and transmits the audible sound signal through the second speaker within the first time period.
In a possible implementation, transmitting the first ultrasonic wave through the first speaker specifically includes: the head-mounted device transmits the first ultrasonic wave through the first speaker within a first time period, and transmits the audible sound signal through the first speaker within a second time period.
In a possible implementation, transmitting the first ultrasonic wave through the first speaker specifically includes: the head-mounted device transmits the first ultrasonic wave through the first speaker within a first time period; transmits the audible sound signal through the first speaker within a second time period; transmits the first ultrasonic wave through the first speaker within a third time period; and transmits the audible sound signal through the first speaker within a fourth time period. The second time period follows the first, the third follows the second, and the fourth follows the third.
In a possible implementation, the first time period and the second time period alternate periodically; the first time period ranges from 5 ms to 15 ms and the second time period from 20 ms to 40 ms. For example, the first time period comprises the first 10 ms of every 33 ms during which the first speaker transmits the first ultrasonic wave, and the second time period comprises the remaining 23 ms.
In a possible implementation, before the head-mounted device transmits the first ultrasonic wave through the first speaker, the method further includes: the head-mounted device transmits a prefix signal through the first speaker, the prefix signal being used to identify the first ultrasonic wave.
In a possible implementation, the device type of the head-mounted device includes any one of the following: smart glasses, over-ear headphones, augmented-reality (AR) glasses, virtual-reality (VR) glasses, mixed-reality (MR) glasses, or a smart helmet.
In a possible implementation, the head-mounted device is a pair of glasses, and the first component is the left temple and the second component is the right temple; or the first component is the right temple and the second component is the left temple; or the first component is the nose pad and the second component is the left temple; or the first component is the nose pad and the second component is the right temple.
In a possible implementation, the head-mounted device is a pair of AR glasses, and the method further includes: before the head-mounted device is configured to be in the first state or the second state, the head-mounted device plays a first video; after the head-mounted device is configured to be in the first state, it continues playing the first video; after the head-mounted device is configured to be in the second state, it pauses the first video.
In a possible implementation, the method further includes: before the head-mounted device is configured to be in the first state or the second state, the head-mounted device plays a first audio; after the head-mounted device is configured to be in the first state, it continues playing the first audio; after the head-mounted device is configured to be in the second state, it pauses the first audio.
In a possible implementation, the head-mounted device further includes a proximity sensor, the proximity sensor including a capacitive sensor and an inertial measurement unit; before the head-mounted device transmits the first ultrasonic wave through the first speaker, the method further includes: after detecting, through the proximity sensor, an operation of the user approaching the head-mounted device, notifying the first speaker to transmit the first ultrasonic wave.
In a third aspect, an embodiment of this application provides a computer storage medium storing a computer program, the computer program including executable instructions which, when executed by a processor, cause the processor to perform the operations corresponding to the wearing detection method provided in the second aspect.
In a fourth aspect, an embodiment of this application provides a computer program product which, when run on a head-mounted device, causes the head-mounted device to perform the implementations of the second aspect.
Brief Description of Drawings
FIG. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of this application;
FIG. 2 is a schematic diagram of the hardware structure of the electronic device 100 according to an embodiment of this application;
FIG. 3 is a schematic diagram of a wearing state of the electronic device 100 according to an embodiment of this application;
FIG. 4 is a schematic flowchart of a wearing detection method according to an embodiment of this application;
FIG. 5 is a schematic waveform diagram of an ultrasonic signal according to an embodiment of this application;
FIG. 6 is a schematic flowchart of the electronic device 100 determining a detection signal according to an embodiment of this application;
FIG. 7 is a schematic diagram of a prefix signal according to an embodiment of this application;
FIG. 8A is a time-domain diagram of a detection signal according to an embodiment of this application;
FIG. 8B is a frequency-domain diagram of a detection signal according to an embodiment of this application;
FIG. 8C is a frequency-amplitude diagram of a sound wave signal according to an embodiment of this application;
FIG. 9 is a schematic diagram of the distribution of microphones and speakers according to an embodiment of this application;
FIG. 10 is a schematic diagram of another wearing state of the electronic device 100 according to an embodiment of this application;
FIG. 11 is a schematic flowchart of a wearing detection method according to an embodiment of this application;
FIG. 12 is a schematic diagram of the distribution of microphones and speakers according to an embodiment of this application;
FIG. 13 is a schematic diagram of the distribution of microphones and speakers according to an embodiment of this application;
FIG. 14 is a schematic diagram of an application scenario according to an embodiment of this application.
Detailed Description
Although the description of this application is presented in conjunction with some embodiments, this does not mean that the features of this application are limited to those embodiments. On the contrary, the purpose of introducing the application with embodiments is to cover other alternatives or modifications that may be derived from the claims of this application. To provide a thorough understanding of this application, the following description contains many specific details; the application may also be practiced without these details. In addition, some specific details are omitted from the description to avoid obscuring the focus of this application. It should be noted that, where no conflict arises, the embodiments of this application and the features in the embodiments may be combined with one another.
The technical solutions in the embodiments of this application are described clearly and thoroughly below with reference to the accompanying drawings. In the description of the embodiments of this application, unless otherwise stated, "/" means "or"; for example, A/B may mean A or B. "And/or" in the text merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone.
Hereinafter, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more of that feature. In addition, in the description of the embodiments of this application, "multiple" means two or more than two.
Reference in this specification to "one embodiment" or "some embodiments" and the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of this application. Thus, the phrases "in one embodiment", "in some embodiments", "in some other embodiments", "in yet other embodiments", etc., appearing in different places in this specification do not necessarily all refer to the same embodiment, but mean "one or more but not all embodiments", unless specifically emphasized otherwise. The terms "include", "comprise", "have" and their variants all mean "including but not limited to", unless specifically emphasized otherwise.
This application provides a wearing detection method applied to a head-mounted device that includes a microphone, a processor, and a speaker. The head-mounted device can transmit a first ultrasonic wave through the speaker and receive a second ultrasonic wave through the microphone, the second ultrasonic wave being at least a part of the first ultrasonic wave received by the microphone. When the difference between the amplitude of the first ultrasonic wave and the amplitude of the second ultrasonic wave is a first value, the head-mounted device is configured to be in a first state. When the difference is a second value, the head-mounted device is configured to be in a second state, wherein the first value differs from the second value.
In the embodiments of this application, the first state may be the worn state, and the second state may be the unworn state. The power consumption of the head-mounted device in the first state is greater than that in the second state.
In some embodiments, the working modes of the head-mounted device may include a first mode and a second mode, the two modes being different. The power consumption of the head-mounted device in the first mode is greater than in the second mode. For example, in the second mode, compared with the first mode, the head-mounted device may stop running background programs (including background refreshing and downloading), pause audio playback, reduce the playback volume, disconnect communication connections with other electronic devices, and so on. When the head-mounted device is configured to be in the first state, it works in the first mode; when configured to be in the second state, it works in the second mode.
In this way, detecting whether the user is wearing the head-mounted device based on ultrasonic waves allows wearing detection to be performed with the device's existing components, saving cost; and since no additional wearing-detection components are added, the weight of the device does not increase, reducing the pressure on the user's cervical spine. Because ultrasonic waves cannot be heard by the human ear and cause no harm to the human body, the wearing state can be detected imperceptibly, without the user noticing. At the same time, a head-mounted device applying the wearing detection method provided by this application can control its working mode more intelligently and automatically according to the application scenario. That is, based on the detected wearing state, the head-mounted device can switch automatically between the first mode and the second mode to reduce its energy consumption. For example, while in the worn state the head-mounted device plays an audio file; when it detects a switch from the worn state to the unworn state, it can automatically pause the audio file, and when it detects a switch back to the worn state, it can automatically resume playback. This avoids continuing to play the audio file while the device is not worn, reducing power consumption, and lets playback continue from the part the user has not yet heard when the device is worn again, improving user experience. As another example, the head-mounted device displays an application interface while worn; when it detects a switch to the unworn state, it can turn off the screen, and when it detects a switch back to the worn state, it can light up the screen and continue displaying the content shown before the screen was turned off.
The head-mounted device in the embodiments of this application may be a pair of glasses including a speaker, a microphone, and a processor, for example smart glasses worn on the user's head, which, in addition to the optical correction, adjustable light filtering, or decorative functions of ordinary glasses, may also have communication functions. For example, the head-mounted device may establish a communication connection with other electronic devices (for example, mobile phones or computers); the connection may be wired or wireless. A wireless connection may use short-range transmission technologies such as wireless fidelity (Wi-Fi) or Bluetooth; a wired connection may be a universal serial bus (USB) connection, a high-definition multimedia interface (HDMI) connection, or the like. This embodiment does not limit the type of communication connection. The head-mounted device can transmit data with other electronic devices through these connections. For example, when a communication connection exists between the head-mounted device and a communication device, a call between that communication device and another communication device can be answered through the head-mounted device. As another example, the head-mounted device may hold a chip provided by a mobile operator (for example, a subscriber identity module (SIM) card) and answer or place calls through that chip.
Not limited to smart glasses, the head-mounted device in the embodiments of this application may also be another head-mounted device, for example a head-mounted display device with augmented reality (AR), virtual reality (VR), or mixed reality (MR) technology, a smart helmet, over-ear (head-mounted) headphones, and so on; this is not limited in the embodiments of this application.
In the embodiments of this application, based on the wearing detection method, the head-mounted device can pause/stop a task it is executing when it detects that the user is not wearing it; for example, when the user takes off the head-mounted device during a call, the call is paused.
Referring to FIG. 1, FIG. 1 shows a schematic structural diagram of an electronic device 100. In this embodiment, the electronic device 100 is exemplified as a pair of glasses including a microphone and a speaker.
As shown in FIG. 1, the electronic device 100 may include a glasses body 101 and, disposed on the glasses body, a microphone 106, a speaker 107, a processor (not shown), and so on.
The glasses body 101 may include temples 102, rims 103, a display apparatus 104, and a nose pad 105. The display apparatus 104 is embedded in the rims 103.
The temples 102 support the electronic device 100 when the user wears it on the head. Typically, the rims 103 comprise two rims and the temples 102 comprise two temples, one disposed behind each rim, with the nose pad 105 between the two rims. When the user wears the electronic device 100, the two temples rest on the user's two ears and the nose pad 105 rests on the user's nose.
The display apparatus 104 is used by the user to view real-world objects and/or virtual images. It may be a transparent lens or a tinted lens, a spectacle lens with optical correction, a lens with adjustable filtering, sunglasses, or another lens with a decorative effect. The display apparatus 104 may also be a display screen or a projection apparatus that generates optical signals and maps them into the user's eyes. This embodiment does not limit the type of the display apparatus 104. In some embodiments, the display apparatus 104 may be absent, that is, the glasses body 101 includes only the temples 102, the rims 103, and the nose pad 105.
In some embodiments, when the head-mounted device is AR glasses, the display apparatus 104 includes both spectacle lenses and a display screen or projection apparatus. In other embodiments, when the head-mounted device is VR glasses, the display apparatus 104 is a display screen.
The microphone 106 is disposed on the glasses body 101, for example on a temple 102 or on the nose pad 105. The microphone 106 collects sound signals, such as the user's voice information. The electronic device 100 can collect the user's voice information through the microphone 106 and parse it to generate a corresponding control instruction, or send it to another electronic device for voice communication.
The speaker 107 is disposed on the glasses body 101, for example on a temple 102, and can be used to play audio.
The processor (not shown) can parse signals, generate instructions, process data, and coordinate and schedule processes.
In the embodiments of this application, the speaker 107 can play ultrasonic waves, and the microphone 106 can collect those ultrasonic waves and send the collection result to the processor. The processor can determine, based on the ultrasonic signal collected by the microphone 106, whether the user is wearing the electronic device 100.
It can be understood that the structure of the electronic device 100 described above is merely an example and does not limit other embodiments of this application.
FIG. 2 is a schematic diagram of the hardware structure of the electronic device 100 according to an embodiment of this application.
FIG. 2 takes the electronic device 100 being smart glasses as an example. The embodiments of this application do not limit the specific type of the electronic device 100. When the electronic device 100 is another device, such as VR/AR/MR glasses, over-ear headphones, or other wearable head-mounted devices, parts of the hardware structure may be added or removed.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, an interface 130, a charging management module 140, a power management module 141, a battery 142, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a sensor module 180, a motor 191, an indicator 192, a camera 193, a display apparatus 194, a SIM card interface 196, and so on. The sensor module 180 may include a pressure sensor 180A, a touch sensor 180B, an inertial measurement unit 180C, and so on.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 is generally used to control the overall operation of the electronic device 100 and may include one or more processing units. For example, the processor 110 may include a central processing unit (CPU), an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a video processing unit (VPU), a controller, memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices or may be integrated in one or more processors. The controller can generate operation control signals according to instruction opcodes and timing signals, completing the control of instruction fetching and execution.
The digital signal processor is used to process digital signals; besides digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency-point energy, etc.
The video codec is used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that it can play or record videos in multiple encoding formats, for example moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer pattern between human brain neurons, it processes input information quickly and can also learn continuously. Applications such as intelligent cognition of the electronic device 100 can be implemented through the NPU, for example image recognition, face recognition, speech recognition, and text understanding.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can store instructions or data the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, avoiding repeated access, reducing the waiting time of the processor 110, and thus improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces, which may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, a universal serial bus (USB) interface, a serial peripheral interface (SPI), and/or the like.
The I2C interface is a bidirectional synchronous serial bus including a serial data line (SDA) and a serial clock line (SCL). In some embodiments, the processor 110 may include multiple sets of I2C buses and may be separately coupled to the touch sensor 180B, a charger, a flash, the camera 193, etc., through different I2C bus interfaces. For example, the processor 110 may be coupled to the touch sensor 180B through an I2C interface so that they communicate over the I2C bus interface, implementing the touch function of the electronic device 100.
The I2S interface may be used for audio communication. In some embodiments, the processor 110 may include multiple sets of I2S buses and may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170. In some embodiments, the audio module 170 may transfer audio signals to the wireless communication module 160 through the I2S interface.
The PCM interface may also be used for audio communication, sampling, quantizing, and encoding analog signals. In some embodiments, the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface. In some embodiments, the audio module 170 may also transfer audio signals to the wireless communication module 160 through the PCM interface. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be a bidirectional communication bus that converts the data to be transmitted between serial and parallel form.
In some embodiments, the UART interface is typically used to connect the processor 110 and the wireless communication module 160. For example, the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 170 may transfer audio signals to the wireless communication module 160 through the UART interface, implementing the audio playback function.
The MIPI interface may be used to connect the processor 110 with peripheral devices such as the display apparatus 194 and the camera 193. The MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), etc. In some embodiments, the processor 110 communicates with the camera 193 through the CSI interface to implement the shooting function of the electronic device 100, and communicates with the display apparatus 194 through the DSI interface to implement the display function of the electronic device 100.
The GPIO interface can be configured by software as a control signal or a data signal. In some embodiments, the GPIO interface may be used to connect the processor 110 with the camera 193, the display apparatus 194, the wireless communication module 160, the audio module 170, the sensor module 180, etc. The GPIO interface may also be configured as an I2C interface, an I2S interface, a UART interface, a MIPI interface, etc.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, etc. The USB interface 130 may be used to connect a charger to charge the electronic device 100, or to transfer data between the electronic device 100 and peripheral devices. The interface may also be used to connect other electronic devices, such as mobile phones, PCs, and smart TVs. The USB interface may be USB 3.0, compatible with high-speed DisplayPort (DP) signal transmission, and can transfer high-speed video and audio data.
It can be understood that the interface connection relationships between the modules illustrated in the embodiments of this application are merely illustrative and do not limit the structure of the electronic device 100. In other embodiments of this application, the electronic device 100 may adopt interface connection modes different from those in the above embodiments, or a combination of multiple interface connection modes.
The charging management module 140 is used to receive charging input from a charger, which may be a wireless or a wired charger. In some wired-charging embodiments, the charging management module 140 may receive charging input from a wired charger through the USB interface 130. In some wireless-charging embodiments, it may receive wireless charging input through a wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 may also supply power to the electronic device through the power management module 141.
The power management module 141 is used to connect the battery 142, the charging management module 140, and the processor 110. It receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the display apparatus 194, the camera 193, the wireless communication module 160, etc. The power management module 141 may also be used to monitor parameters such as battery capacity, battery cycle count, and battery health status (leakage, impedance). In some other embodiments, the power management module 141 may also be disposed in the processor 110. In still other embodiments, the power management module 141 and the charging management module 140 may be disposed in the same device.
The electronic device 100 may include a wireless communication function; for example, it may receive voice information from other electronic devices (such as a mobile phone or a cloud server) and play it. The wireless communication function may be implemented through an antenna (not shown), the mobile communication module 150 or the wireless communication module 160, a modem processor (not shown), a baseband processor (not shown), etc.
The antenna is used to transmit and receive electromagnetic wave signals. The electronic device 100 may include multiple antennas, each of which can cover one or more communication frequency bands. Different antennas may also be reused to improve antenna utilization; for example, an antenna may be reused as a diversity antenna of a wireless local area network. In some other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communication module 150 can provide wireless communication solutions applied to the electronic device 100, including second-generation (2G) / third-generation (3G) / fourth-generation (4G) / fifth-generation (5G) networks. The mobile communication module 150 may include at least one filter, switch, power amplifier, low-noise amplifier (LNA), etc. It can receive electromagnetic waves through the antenna, filter and amplify the received waves, and transfer them to the modem processor for demodulation. It can also amplify signals modulated by the modem processor and convert them into electromagnetic waves for radiation through the antenna. In some embodiments, at least some functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 150 may be disposed in the same device as at least some modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator modulates the low-frequency baseband signal to be transmitted into a medium/high-frequency signal; the demodulator demodulates the received electromagnetic wave signal into a low-frequency baseband signal and then transfers it to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor, which outputs sound signals through audio devices (not limited to the speaker 170A and the receiver 170B) or displays images or videos through the display apparatus 194. In some embodiments, the modem processor may be an independent device. In other embodiments, the modem processor may be independent of the processor 110 and disposed in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as Wi-Fi networks), Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, etc. The wireless communication module 160 may be one or more devices integrating at least one communication processing module. It receives electromagnetic waves via the antenna, frequency-modulates and filters the signals, and sends the processed signals to the processor 110. It can also receive signals to be transmitted from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves for radiation through the antenna. In some embodiments, the wireless communication module 160 may be disposed inside the glasses body shown in FIG. 1 to transmit communication signals, including receiving and sending signals such as voice information and control signaling. The electronic device 100 can establish communication connections with other electronic devices, such as mobile phones and computers, through the wireless communication module 160.
In some embodiments, the antenna of the electronic device 100 is coupled with the mobile communication module 150 and the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
The electronic device 100 can implement the display function through the GPU, the display apparatus 194, the application processor, etc. The GPU is a microprocessor for image processing, connecting the display apparatus 194 and the application processor, and is used to perform mathematical and geometric calculations for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
In some embodiments, the display apparatus 194 is used by the user to view real-world objects or virtual images.
The display apparatus 194 may be a transparent lens or a tinted lens, a spectacle lens with optical correction, a lens with adjustable filtering, sunglasses, or another lens with a decorative effect.
The display apparatus 194 may also be a display screen or a projection apparatus that can generate optical signals and map them into the user's eyes to display images, videos, etc. The display apparatus 194 may be used to present one or more virtual objects, allowing the electronic device 100 to provide the user with a virtual reality scene.
The ways in which the display apparatus 194 presents virtual objects may include one or more of the following:
1. In some embodiments, the display apparatus 194 may include a display screen, and the display screen may include a display panel. The display panel may be used to display physical objects and/or virtual objects, presenting a stereoscopic virtual environment to the user. The user can see the virtual objects on the display panel and experience a virtual reality scene. The display panel may adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), Mini-LED, Micro-LED, Micro-OLED, quantum dot light-emitting diodes (QLED), etc.
2. In some embodiments, the display apparatus 194 may include an optical projection apparatus for projecting optical signals (for example, light beams) directly onto the user's retina. Through one or several optical devices such as mirrors, transmissive lenses, or optical waveguides, the display apparatus 194 converts a real-pixel image display into a near-eye projected virtual image display. The user can directly see the virtual objects through the optical signals projected by the optical apparatus, perceive a stereoscopic virtual environment, and achieve a virtual interactive experience or an experience combining virtuality and reality. In one example, the optical apparatus may be a micro projector or the like.
The electronic device 100 may include 1 or N display apparatuses 194, N being a positive integer greater than 1. In some embodiments, the number of display apparatuses 194 in the electronic device is two, corresponding to the user's two eyes. The content displayed on the two display apparatuses can be displayed independently, and images with parallax can be displayed on them to enhance the stereoscopic effect. In some possible embodiments, the number of display apparatuses 194 may also be one, with both of the user's eyes viewing the same image.
This embodiment does not limit the type of the display apparatus 194. In some embodiments, there may also be no display apparatus 194, and the user uses other functions provided by the electronic device 100, excluding the display function. For example, some users wear smart glasses without lenses for decorative purposes, but the glasses still have other functions such as receiving/playing audio signals.
The camera 193 is used to capture still images or videos. An object passes through the lens to generate an optical image projected onto the photosensitive element, which may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and then passes the electrical signal to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
In some embodiments, the camera 193 can work with an infrared device (such as an infrared emitter) to detect the user's eye movements, such as gaze direction, blinking operations, and gazing operations, thereby implementing eye tracking.
In some embodiments, the electronic device 100 may not include the camera 193.
In some embodiments, the electronic device 100 may also include an eye-tracking module that can track the movement of the human eye and determine its gaze point. For example, image-processing technology can be used to locate the pupil position, obtain the pupil-center coordinates, and then compute the person's gaze point.
The internal memory 121 may be used to store computer-executable program code, the executable program code including instructions. By running the instructions stored in the internal memory 121, the processor 110 executes the various functional applications and data processing of the electronic device 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area can store the operating system and the applications required by at least one function (for example, a sound playback function or an image playback function), and the data storage area can store data created during the use of the electronic device 100 (for example, audio data).
In some embodiments of this application, the internal memory 121 may store application programs of one or more applications, the application programs including instructions. When an application program is executed by the processor 110, the electronic device 100 generates content to be presented to the user. For example, the applications may include applications for managing the electronic device 100, such as game applications, conference applications, video applications, desktop applications, or other applications.
In addition, the internal memory 121 may include high-speed random access memory and non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, universal flash storage (UFS), etc.
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,以及应用处理器等实现音频功能。例如播放音频,采集声音信号等。
音频模块170用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块170还可以用于对音频信号编码和解码。在一些实施例中,音频模块170可以设置于处理器110中,或将音频模块170的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备100可以通过扬声器170A收听音乐,或收听免提通话。扬声器170A可以用于播放频率处于20Hz-20000Hz范围内的人耳可以听见的声波信号,又称为可听声波信号。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备100接听电话或语音信息时,可以通过将受话器170B靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风170C发声,将声音信号输入到麦克风170C。电子设备100可以设置至少一个麦克风170C。在另一些实施例中,电子设备100可以设置两个麦克风170C,除了采集声音信号,还可以实现降噪功能。在另一些实施例中,电子设备100还可以设置三个,四个或更多麦克风170C,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
在本申请的一些实施例中,扬声器170A可以用于发送超声波,其中,超声波为频率超过20000Hz的声波,超声波无法被人耳听到,且不会对人体造成伤害。需要说明的是,扬声器170A既可以播放可听声波信号,也可以播放超声波信号。这样,由于超声波信号和可听声波信号的频率范围不同,不会相互影响,可以通过复用扬声器170A,播放可听声波信号和 超声波信号,减少佩戴检测所需的元器件。麦克风170C可以用于接收扬声器170A发送的超声波。音频模块170和/或处理器110可以将麦克风170C接收到的超声波信号进行计算(例如,傅里叶变换),得到接收到的超声波的振幅,并基于该振幅确定出用户是否佩戴电子设备100。例如,在扬声器170A和麦克风170C分别处于眼镜腿102的两个镜腿上的情况下,当用户佩戴电子设备100时,由于用户头部对扬声器170A发送的超声波的遮挡,麦克风170C接收到的超声波的振幅减小。当用户未佩戴电子设备100时,扬声器170A和麦克风170C之间没有障碍物的遮挡,麦克风170C接收到的超声波的振幅的衰减极小,可以忽略不计。这样,电子设备100可以基于扬声器170A发送的超声波信号的振幅和麦克风170C采集的超声波信号的振幅,确定出用户是否佩戴电子设备100。
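上述"麦克风与扬声器位于不同部件、基于接收超声波振幅判断佩戴状态"的逻辑,可以用如下Python片段示意(函数名、阈值系数k均为说明用的假设,并非本申请的具体实现):

```python
def is_worn_different_parts(rx_amp_db: float, tx_amp_db: float, k: float = 0.6) -> bool:
    """麦克风与扬声器位于不同部件时的佩戴判断示意:
    佩戴后头部遮挡超声波,接收振幅明显下降;
    接收振幅不超过阈值(发送振幅 * 系数k)即判定为已佩戴。"""
    threshold = tx_amp_db * k
    return rx_amp_db <= threshold
```

例如,发送振幅为100dB、接收振幅为50dB时判定为已佩戴;接收振幅仍接近100dB时判定为未佩戴。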
马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于电子设备100不同区域的触摸操作,马达191也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,通知等。
电子设备100还可以包括其他输入输出接口,可以通过合适的输入输出接口将其他装置连接到电子设备100。这些接口例如可以包括音频/视频插孔,数据连接器等。
在一些实施例中,电子设备100还可以包括一个或多个按键,这些按键可以控制电子设备,为用户提供访问电子设备100上的功能的途径。按键的形式可以是按钮、开关、刻度盘等机械式按键,也可以是触摸或近触摸式传感设备(如触摸传感器)。电子设备100可以接收按键输入,产生与电子设备100的用户设置以及功能控制有关的键信号输入。按键可以包括开机键,音量键等。
电子设备100上装备有一个或多个传感器,包括但不限于压力传感器180A,触摸传感器180B,惯性测量单元(inertial measurement unit,IMU)180C,骨传导传感器等。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。压力传感器180A的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。当有触摸操作作用于电子设备100,电子设备100根据压力传感器180A检测所述触摸操作强度。电子设备100也可以根据压力传感器180A的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于压力传感器180A时,执行暂停音频的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于压力传感器180A时,执行关闭音频的指令。在一些实施例中,作用于相同触摸位置,但不同触摸操作时间长度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作时间长度小于第一时间阈值的触摸操作作用于压力传感器180A时,执行确认的指令。当有触摸操作时间长度大于或等于第一时间阈值的触摸操作作用于压力传感器180A时,执行开机/关机的指令。
触摸传感器180B,也称“触控器件”。触摸传感器180B用于检测作用于其上或附近的触摸操作。触摸传感器180B可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。电子设备100可以通过显示装置194提供与触摸操作相关的视觉输出。电子设备100也可以将触摸操作对应的指令发送给建立通信连接的其他电子设备。
惯性测量单元180C,是用来检测和测量加速度与旋转运动的传感器,可以包括加速度计、角速度计(或称陀螺仪)等。加速度计可检测电子设备100在各个方向上(一般为三轴)加速度的大小。当电子设备100静止时可检测出重力的大小及方向。还可以用于识别电子设备100的姿态,应用于体感游戏场景,横竖屏切换,计步器等应用。陀螺仪可以用于确定电子设备100的运动姿态。在一些实施例中,可以通过陀螺仪确定电子设备100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪还可以用于导航,体感游戏场景,相机防抖等。例如,电子设备100可以根据IMU等来跟踪用户头部的移动。
在本申请的一些实施例中,惯性测量单元180C可以用于检测电子设备100是否移动。例如,电子设备100可以根据IMU等检测用户佩戴电子设备100的操作。电子设备100可以在通过IMU等传感器检测到该佩戴操作时,通过扬声器170A发送超声波,通过麦克风170C接收超声波,并基于接收到的超声波进行佩戴检测,即判断用户是否佩戴电子设备100。
在本申请的一些实施例中,传感器模块180还可以包括电容传感器。电容传感器可以用于将被检测的非电学量转换为电学量。例如,电容传感器可以设置于眼镜腿102的内侧,当电容传感器检测到用户靠近或接触该电容传感器时,电容值发生变化。电子设备100可以基于电容传感器检测用户佩戴电子设备100的操作。电子设备100可以在通过电容传感器检测到该佩戴操作时,通过扬声器170A发送超声波,通过麦克风170C接收超声波,并基于接收到的超声波进行佩戴检测。
在一些实施例中,传感器模块180还可以包括骨传导传感器,骨传导传感器可以获取振动信号。例如,骨传导传感器可以获取人体声部振动骨块的振动信号。骨传导传感器可以设置于电子设备100中,音频模块170可以基于所述骨传导传感器获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。骨传导传感器也可以作为音频播放器件,用于向用户输出声音。当音频播放器件为骨传导传感器时,眼镜腿102的两个镜腿可以设有抵持部,骨传导传感器可以设置于该抵持部位置处。当用户佩戴电子设备100时,抵持部抵持耳朵前侧颅骨,进而产生振动使得声波经由颅骨和骨迷路传导至内耳。抵持部的位置直接贴近颅骨,可以减少振动损耗,使得用户更加清晰地听取音频。
SIM卡接口196用于连接SIM卡。SIM卡可以通过插入SIM卡接口196,或从SIM卡接口196拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口196可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口196可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口196也可以兼容不同类型的SIM卡。SIM卡接口196也可以兼容外部存储卡。电子设备100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备100中,不能和电子设备100分离。
接下来结合应用场景介绍本申请实施例提供的一种佩戴检测方法。
在一些应用场景中,电子设备100的麦克风和扬声器位于电子设备100的眼镜本体上的不同部件上。电子设备100开启后,电子设备100的扬声器可以发送超声波,电子设备100的麦克风可以接收该扬声器发送的超声波。由于已佩戴状态的电子设备100接收的超声波的振幅和未佩戴状态的电子设备100接收的超声波的振幅不同,电子设备100可以基于接收的超声波的振幅,判断用户是否佩戴电子设备100。
示例性的,电子设备100的麦克风和扬声器可以分别处于图1所示的眼镜腿102的两个眼镜腿上面向用户的一侧,例如,麦克风可以位于眼镜腿102的左侧眼镜腿,扬声器可以位于眼镜腿102的右侧眼镜腿。当电子设备100处于未佩戴状态时,如图3中的(a)所示,电子设备100的麦克风和扬声器之间没有障碍物,电子设备100的扬声器发送的超声波没有被阻挡,电子设备100的麦克风可以接收到大部分扬声器发送的超声波,此时,麦克风接收的超声波的振幅可以为A1。当电子设备100处于佩戴状态时,如图3中的(b)所示,电子设备100的麦克风和扬声器分别位于用户头部的两侧,电子设备100的扬声器发送的超声波被用户的头部阻挡,即,扬声器发送的部分超声波被用户的头部反射,无法被麦克风接收,电子设备100的麦克风只能接收到小部分扬声器发送的超声波,此时,麦克风接收的超声波的振幅可以为A2。其中,A1大于A2。
因此,电子设备100的麦克风和扬声器处于不同侧位置时,电子设备100处于佩戴状态时的麦克风接收到的电子设备100的扬声器发送的超声波的振幅比电子设备100处于未佩戴状态时麦克风接收到的超声波的振幅小。这样,电子设备100可以基于超声波信号的振幅确定出用户是否佩戴电子设备100。
接下来介绍本申请实施例中提供的一种佩戴检测方法的流程示意图。
在一种可能的实现方式中,电子设备100的麦克风和扬声器分别位于电子设备100的不同部件上。当电子设备100处于已佩戴状态时,电子设备100的麦克风和扬声器之间被遮挡,当电子设备100处于未佩戴状态时,电子设备100的麦克风和扬声器之间无遮挡。电子设备100可以通过扬声器发送超声波,通过麦克风接收超声波,并基于接收的超声波,确定出电子设备100为已佩戴状态或者未佩戴状态。电子设备100处于已佩戴状态时,可以在第一模式下工作。电子设备100处于未佩戴状态时,可以在第二模式下工作。这样,使用电子设备100现有的扬声器和麦克风进行佩戴检测,节约制造成本,并且,由于电子设备100没有增加其他用于佩戴检测的元器件,不会增加电子设备100的重量,减少电子设备100对用户颈椎的压力。由于超声波不能被人耳听到且不会对人体造成伤害,可以在用户没有察觉的情况下无感检测电子设备100的佩戴状态。同时,电子设备100可以更好地基于应用场景,更加智能地自动化控制电子设备100的工作模式。在一些实施例中,在已佩戴状态时,电子设备100可以在第一模式下工作,当电子设备100工作在第一模式时,电子设备100可以正常播放音频,和其他电子设备建立用于交换数据的通信连接等,便于用户使用。在未佩戴状态时,电子设备100可以在第二模式下工作,当电子设备100工作在第二模式时,电子设备100可以暂停播放音频,断开和其他电子设备之间的通信连接等,降低电子设备100的耗电量。
示例性的,如图4所示,该显示方法包括如下步骤:
S401.电子设备100启动。
在一些实施例中,当电子设备100接收到用户开启电子设备100的输入(例如,针对开机按键的输入)后,响应于该输入,电子设备100启动。电子设备100启动后,可以执行步骤S402。
需要说明的是,不限于电子设备100接收用户开启电子设备100的输入时,电子设备100启动。当电子设备100从待机模式或休眠模式切换至工作模式时,也可以理解为电子设备100启动。例如,电子设备100检测电子设备100的眼镜腿从折叠形态(即,电子设备100的两个眼镜腿互相贴合)切换至展开形态(即,电子设备100的两个眼镜腿相互分离,如图1所示),电子设备100启动。其中,电子设备100可以基于IMU等传感器检测眼镜腿的运动,确定出眼镜腿切换至展开形态。再例如,电子设备100可以在通过IMU等传感器检测到电子设备100的位置发生改变时,电子设备100启动。
在一些实施例中,电子设备100的眼镜本体靠近用户的一侧设置有接近传感器(例如,接近光传感器,电容传感器,红外传感器,IMU传感器等)。当电子设备100通过接近传感器确定出用户靠近电子设备100时,电子设备100启动。
在一些实施例中,电子设备100和充电仓连接,电子设备100置于充电仓的内部,当电子设备100脱离充电仓时,电子设备100启动。
S402.电子设备100的扬声器发送超声波。
电子设备100的扬声器可以发送超声波信号。其中,电子设备100发送的超声波信号可以称为检测信号,目标超声波信号等等。电子设备100可以通过以下几种方式,得到检测信号。
1.电子设备100中存储有M种检测信号,其中,M为正整数。电子设备100可以发送该M种检测信号中的一种检测信号。
其中,该M种检测信号的波形不同。其中,波形不同可以理解为频率、占空比等中的一种或多种不同。具体的,当检测信号只包括一种频率的波形时,两种检测信号的频率不同即为两种检测信号的频率的值不同。例如,一种检测信号的频率为f1,另一种检测信号的频率为f2,f1和f2不同,两种检测信号不同。当检测信号包括多种频率的波形时,两种检测信号的频率不同可以理解为两种检测信号的频率组成和/或排列不同,例如,第一种检测信号由频率为f1的波形和频率为f2的波形组成。第二种检测信号由频率为f2的波形和频率为f1的波形组成。第三种检测信号由频率为f1的波形和频率为f3的波形组成。第一种检测信号和第二种检测信号都包括频率为f1的波形和频率为f2的波形,但这两种检测信号的排列顺序不同,该两种检测信号不同。第一种检测信号和第三种检测信号包括的波形的组成不同,其中,第一种检测信号包括频率为f2的波形,不包括频率为f3的波形,第三种检测信号包括频率为f3的波形,不包括频率为f2的波形,这两种检测信号的组成不同,该两种检测信号不同。
其中,占空比为一个周期的波形中振幅为零的波形和一个周期的波形的比例。其中,占空比可以由百分数或分数表示,取值范围在0到1之间。
具体的,当检测信号只包括一种占空比的波形时,两种检测信号的占空比不同即为两种检测信号的占空比的值不同。例如,一种检测信号的占空比为w1,另一种检测信号的占空比为w2,w1和w2不同,两种检测信号不同。当检测信号包括多种占空比的波形时,两种检测信号的占空比不同可以理解为两种检测信号的占空比组成和/或排列不同。
在一些实施例中,两种波形不同还包括振幅为0的波形在一个周期内的位置不同。具体的,当两种检测信号的占空比都为25%时,第一种检测信号的振幅为0的波形处在一个周期的第一个T/4时间内,第二种检测信号的振幅为0的波形处在一个周期的最后一个T/4时间内,两种检测信号不同。
在另一些实施例中,为了便于区别两种检测信号,振幅为0的波形始终出现在一个周期的波形的开始或末尾。
可选的,两种检测信号不同为两种检测信号的振幅不同。
示例性的,电子设备100可以以Cm,m∈{1,2,…,M}标识该M种检测信号。例如,该M种检测信号的波形图可以参见如图5所示的波形图示例。其中,检测信号c1的占空比为0%,周期为T1。检测信号c2的占空比为0%,周期为T2。检测信号cm-1的占空比为25%,周期为T1。检测信号cm的占空比为25%,周期为T2。其中,检测信号c1和检测信号c2的周期(即,频率)不同,检测信号c1和检测信号cm-1的占空比不同,检测信号c1和检测信号cm的占空比和周期都不同。也就是说,超声波集合Cm中的任意两段超声波的占空比和/或频率不同。
在一些实施例中,为了便于区分检测信号,M种检测信号中所有检测信号的振幅、占空比相同,频率不同。
在一些实施例中,电子设备100存储有M种检测信号的频率和振幅,电子设备100可以基于频率和振幅得到扬声器播放的超声波信号。例如,电子设备100可以以列表的形式存储超声波信号,例如,检测信号c1可以以{c1,21000,100}表示,其中,c1为该超声波信号的标识。21000为超声波信号的频率,该频率值的单位可以为Hz。100为超声波信号的振幅,该振幅值的单位可以为分贝(decibel,dB)。这样,电子设备100可以基于该列表确定出超声波信号的频率和振幅,即,确定出超声波信号的波形。同理,电子设备100存储有M种检测信号的频率、占空比和振幅,电子设备100可以基于频率、占空比和振幅得到扬声器播放的超声波信号。
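以列表形式存储{标识, 频率, 振幅}并据此合成离散检测信号的过程,可以用如下代码示意(采样率、采样点数为假设值,仅作说明):

```python
import math

def synthesize(freq_hz: float, amp: float, fs: int = 160_000, n: int = 32):
    """根据存储的频率和振幅参数,生成一段离散的超声波检测信号采样序列。"""
    return [amp * math.cos(2 * math.pi * freq_hz * i / fs) for i in range(n)]

# 例如,按列表项 {c1, 21000, 100} 合成检测信号c1的波形
wave_c1 = synthesize(21_000, 100)
```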
在另一些实施例中,电子设备100存储有M种检测信号的波形对应的数字音频信息。电子设备100可以基于存储的数字音频信息,得到数字音频信号对应的模拟音频信号,即,超声波信号。
其中,电子设备100在发送M种检测信号中的一种检测信号时,可以随机选取M种检测信号中的某一种检测信号进行发送,例如,电子设备100可以发送Cm中的Ci,其中,i大于等于1,且小于等于M。或者,电子设备100可以按照存储M种检测信号的顺序,依次发送M种检测信号中的某一种检测信号。例如,电子设备100可以在第一次执行步骤S402时,发送检测信号c1,在第二次执行步骤S402时,发送检测信号c2,以此类推。
进一步的,为了避免其他电子设备的干扰,电子设备100可以在发送检测信号之前,通过麦克风接收附近的声波信号,并且,从存储的检测信号中筛选出和附近的声波信号波形不同的检测信号,并发送该检测信号。
示例性的,接下来以频率为示例,介绍电子设备100确定出检测信号的具体步骤。具体的,如图6所示:
S601.电子设备100接收附近的声波信号。
电子设备100可以通过麦克风接收附近的声波信号。
S602.电子设备100判断附近的声波信号中是否包括存储的M种检测信号中的N种信号,0<N<M。
电子设备100可以基于接收的声波信号的频率,判断附近的声波信号是否包括M种检测信号中的N种检测信号。具体的,电子设备100可以将附近的声波信号和M种检测信号的频率一一进行对比,确定出M种检测信号中和附近的声波信号频率相同的N种信号。其中,0<N<M。M种检测信号中和附近的声波信号频率相同的N种信号为附近的声波信号包括的N种信号。
需要说明的是,当附近的声波信号的频率和检测信号的频率的差值小于第一频率差值(例如,30Hz)时,就可以认为附近的声波信号和检测信号相同。
当电子设备100判定出附近的超声波信号中包括存储的M种检测信号中的N种信号时,可以执行步骤S603。当电子设备100判定出附近的超声波信号中不包括存储的M种检测信号中的N种信号时,可以执行步骤S604。
在一些实施例中,附近的声波信号和检测信号的频率和振幅都相同时,认为附近的声波信号和检测信号相同。其中,当附近的声波信号的频率和检测信号的频率的差值小于第一频率差值时,附近的声波信号的频率和检测信号的频率相同。当附近的声波信号的振幅和检测信号的振幅的差值小于预设振幅差值(例如,10dB)时,附近的声波信号的振幅和检测信号的振幅相同。
S603.电子设备100发送M种检测信号中除了检测到的N种检测信号以外的任意一种检测信号。
电子设备100可以排除M种检测信号中和附近的声波信号相同的N种检测信号,并选取剩余的检测信号中任一种检测信号进行发送。
S604.电子设备100发送M种检测信号中任意一种检测信号。
电子设备100可以随机发送检测信号中任一种检测信号。
可选的,电子设备100可以获取附近的声波信号的频率,发送M种检测信号中频率和附近的声波信号不同的检测信号。
在一些实施例中,M种检测信号的振幅相同,频率不同。
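图6所示的筛选流程可以用如下Python片段示意(数据结构、字段名为假设,第一频率差值沿用正文中30Hz的示例):

```python
def pick_detection_signal(stored, ambient_freqs, min_gap_hz=30):
    """从存储的M种检测信号中,筛除与附近声波频率差值小于第一频率差值的信号,
    返回第一个可用的检测信号;全部被占用时返回None。"""
    for sig in stored:
        if all(abs(sig["freq"] - f) >= min_gap_hz for f in ambient_freqs):
            return sig
    return None

stored = [{"id": "c1", "freq": 21_000}, {"id": "c2", "freq": 22_000}]
# c1与附近21010Hz的信号仅差10Hz(小于30Hz),视为相同而被排除,选中c2
chosen = pick_detection_signal(stored, ambient_freqs=[21_010])
```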
2.电子设备100可以随机生成检测信号。
在一些实施例中,电子设备100可以随机生成检测信号的频率和振幅的值,再基于频率值和振幅值生成对应的检测信号。其中,检测信号的频率的值处于指定频率范围(例如,20000Hz-24000Hz)内,检测信号的振幅的值处于指定振幅范围(例如,80dB-120dB)内。
进一步的,为了避免其他电子设备的干扰,电子设备100可以在生成检测信号之前,通过麦克风获取附近的声波信号,并确定出附近的声波信号的频率。再生成和附近的声波信号的频率不同的检测信号。
在一种可能的实现方式中,为了更好地区分电子设备100发送的检测信号,电子设备100可以在发送检测信号前,发送前缀信号。电子设备100可以基于前缀信号确定出该前缀信号属于电子设备100发送的超声波信号,即,确定出发送的检测信号,电子设备100可以基于前缀信号和检测信号确定出用户是否佩戴电子设备100。其中,前缀信号和检测信号可以相同,也可以不同。当前缀信号和超声波信号相同时,该前缀信号和超声波信号之间可以存在一段空白时间间隔。其中,空白时间间隔的长度为固定值,例如,可以为2ms。
示例性的,图7示出了前缀信号的示例图像。如图7中的(a)所示,前缀信号和检测信号相同,前缀信号和检测信号之间存在有空白时间间隔。如图7中的(b)所示,前缀信号和检测信号不相同,前缀信号和检测信号相接。如图7中的(c)所示,前缀信号和检测信号不相同,前缀信号和检测信号之间存在有空白时间间隔。这样,电子设备100可以基于前缀信号的波形确定出该前缀信号和检测信号为用于检测用户是否佩戴电子设备100的信号,并基于确定出的前缀信号和检测信号判断用户是否佩戴电子设备100,避免附近的声波信号对检测结果的干扰。
可选的,电子设备100可以基于前缀信号确定出检测信号,再基于检测信号进行佩戴检测。
可选的,电子设备100发送的检测信号可以由多段频率不同的超声波信号拼接得到。这样,电子设备100可以通过多段频率不同的超声波信号,共同检测用户是否佩戴电子设备100,进一步确保检测结果的正确性。
在一种可能的实现方式中,电子设备100可以基于多种不同的波形组成检测信号。示例性的,当电子设备100包括两种波形,波形A和波形B时,电子设备100可以通过波形A和波形B的组合排序确定出检测信号。例如,当一段检测信号由4段波形组成时,当检测信号由波形A,波形A,波形B和波形A组成时,该检测信号可以表示为AABA。可以理解的是,当电子设备100只包括两种波形时,可以由二进制数标识该两种波形,便于理解。例如,可 以通过二进制数字0标识波形A,通过二进制数字1标识波形B,那么检测信号AABA可以表示为0010。这样,电子设备100存储的M种检测信号,即可以表示为存储有M种字符串,以及字符串中每一个字符对应的波形。
可选的,电子设备100可以只存储有N个字符和该N个字符对应的N种波形。电子设备100可以随机生成由该N个字符中任意字符组成的字符串,得到检测信号。
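以字符标识波形、用字符串表示检测信号的编码方式,可以示意如下(波形本身以字符占位,映射关系为正文示例):

```python
WAVEFORM_BY_BIT = {"0": "A", "1": "B"}  # 假设:二进制0标识波形A,1标识波形B

def decode_signal(bits: str) -> str:
    """把二进制字符串还原为波形序列,例如'0010'对应波形A,波形A,波形B,波形A。"""
    return "".join(WAVEFORM_BY_BIT[b] for b in bits)
```

例如 decode_signal("0010") 得到 "AABA",与正文中的示例一致。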
S403.电子设备100的麦克风接收超声波。
当电子设备100通过扬声器发送检测信号时,可以通过麦克风接收附近的声波信号,附近的声波信号包括扬声器发送的检测信号。
S404.电子设备100判断接收的超声波的振幅是否超过第一阈值。
电子设备100可以针对接收的检测信号,进行傅里叶变换处理,得到接收的检测信号的振幅值。当检测信号的振幅大于第一阈值时,电子设备100可以执行步骤S406。当检测信号的振幅小于或等于第一阈值时,电子设备100可以执行步骤S405。其中,第一阈值的值可以为发送的检测信号的振幅值和第一系数的乘积。其中,第一系数可以为大于0,且小于等于1的分数值。在一些实施例中,第一系数可以为50%至80%之间的百分数。在一些实施例中,第一阈值的值为固定值,该固定值可以由电子设备100的制造厂家进行设置。
在一些实施例中,电子设备100可以在接收到附近的模拟音频信号后,将模拟音频信号转换成数字音频信号,并针对数字音频信号进行傅里叶变换处理,得到附近的音频信号的频率和振幅的对应关系。电子设备100可以基于发送的检测信号的频率,确定出接收的检测信号。再基于附近的音频信号的频率和振幅的对应关系,确定出接收的检测信号的振幅。电子设备100可以基于该振幅的值是否大于第一阈值,确定出用户是否佩戴电子设备100。在此,电子设备100接收的附近的声波信号中和发送的检测信号的频率相同的声波信号为检测信号。在一些实施例中,两个信号的频率的差值小于第一频率差值时,两个信号的频率相同。
示例性的,电子设备100发送的检测信号为x(t)时,接收的检测信号的离散采样序列可以表示为x[n],对x[n]进行离散傅里叶变换的公式为:

X[k] = \sum_{n=0}^{N-1} x[n] e^{-j2\pi kn/N}    (1)

其中,X[k]为频率索引为k的频域分量的振幅值,x[n]为检测信号中第n个采样点的振幅值,N为采样点的总数量,j为虚数单位。
在一些实施例中,检测信号的振幅在20dB-150dB之间,例如,可以为100dB。在一些实施例中,检测信号的频率在20000Hz-40000Hz之间,例如,可以为20000Hz。
例如,当电子设备100的扬声器发送的检测信号的频率为20000Hz,振幅为100dB时,当电子设备100的麦克风和扬声器之间没有障碍物阻挡时,电子设备100的麦克风可以基于160kHz采样频率采集检测信号。其中,在0.2ms中扬声器发送的检测信号和麦克风采集的检测信号的时域图像可以如图8A所示。电子设备100在0.2ms时间内发送的检测信号的波形可以如图8A中的(a)所示。电子设备100在0.2ms的时间内针对图8A中的(a)示出的检测信号进行采样,得到32个采样点,该32个采样点组成了如图8A中(b)所示的离散波形。基于上述离散傅里叶变换的公式(1),对电子设备100的麦克风得到的离散波形进行傅里叶变换的公式如下:

X[k] = \sum_{n=0}^{31} x[n] e^{-j2\pi kn/32}    (2)

其中,采样点的总数量N为32。由此公式(2),可以得到检测信号的频域图像。电子设备100接收的检测信号的频域图像可以如图8B所示。根据图8B示出的检测信号的频域图像,可以确定出电子设备100的麦克风采集的检测信号的频率为20000Hz,振幅为100dB。
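上述数值例子可以用几行Python验证:20000Hz的检测信号以160kHz采样、取32个采样点时,其能量集中在频率索引 k = 20000×32/160000 = 4 处,按上文的离散傅里叶变换定义计算该处的幅值即可(代码仅为验证示意;幅值未归一化,除以N/2即可还原原振幅):

```python
import cmath
import math

fs, n_samples, f0, amp = 160_000, 32, 20_000, 100
# 32个采样点的余弦检测信号
x = [amp * math.cos(2 * math.pi * f0 * i / fs) for i in range(n_samples)]

def dft_mag(samples, k):
    """按离散傅里叶变换定义计算频率索引k处的幅值 |X[k]|。"""
    n = len(samples)
    return abs(sum(samples[i] * cmath.exp(-2j * math.pi * k * i / n)
                   for i in range(n)))

peak_bin = round(f0 * n_samples / fs)  # = 4
```

对实余弦信号,|X[4]| = amp·N/2,其余索引处的幅值接近0,与图8B中仅在20000Hz处出现峰值的结果一致。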
接下来介绍在一些实际的应用场景中,电子设备100的麦克风采集的声波信号的图像。例如,若电子设备100发送的检测信号的频率为20000Hz,振幅为100dB,第一阈值的值为60dB。如图8C中的(a)所示,电子设备100采集的声波信号中频率为20000Hz的声波信号为检测信号。该检测信号的频率为20000Hz,振幅为100dB。电子设备100接收的检测信号的振幅大于第一阈值。电子设备100可以判定出用户未佩戴电子设备100。
需要说明的是,图8C中的(a)示出的电子设备100接收到的检测信号的振幅和电子设备100发送的检测信号的振幅的值仅为示例,由于麦克风和扬声器之间存在介质,接收的检测信号的振幅值可以小于或等于发送的检测信号的振幅值,本申请实施例对此不作限定。
如图8C中的(b)所示,电子设备100采集的声波信号中频率为20000Hz的声波信号为检测信号。该检测信号的频率为20000Hz,振幅为50dB。电子设备100接收的检测信号的振幅的值小于第一阈值。电子设备100可以判定出用户已佩戴电子设备100。
在一种可能的实现方式中,电子设备100可以基于电子设备100的电量设置发送的检测信号的振幅值以及第一阈值。例如,当电子设备100的电量较少(例如,小于20%)时,电子设备100可以降低发送的检测信号的振幅和第一阈值的值,从而减少电子设备100进行佩戴检测的耗电量,进一步节约电子设备100的功耗。
可选的,当接收的检测信号的振幅大于或等于第一阈值时,电子设备100可以判定出用户未佩戴电子设备100,可以执行步骤S406。当超声波的振幅小于第一阈值时,电子设备100可以判定出用户已佩戴电子设备100,执行步骤S405。
在一种可能的实现方式中,电子设备100可以基于发送的检测信号的振幅和接收的检测信号的振幅的差值确定出电子设备100的佩戴状态。具体的,当该发送的检测信号的振幅和接收的检测信号的振幅的差值处于第一范围时,确定出电子设备100处于未佩戴状态。当该发送的检测信号的振幅和接收的检测信号的振幅的差值不处于第一范围时,确定出电子设备100处于已佩戴状态。其中,第一范围的值可以为预设值,或者,基于发送的检测信号的振幅得到的值。示例性的,当发送的检测信号的振幅为A时,第一范围可以为[0,A*x],其中,x为大于0小于1的系数。例如,x可以为0.4。这样,当发送的检测信号的振幅为100dB时,第一范围为[0,40]。若接收的检测信号的振幅值处于60dB-100dB之间,发送的检测信号和接收的检测信号的振幅差值处于第一范围,电子设备100处于未佩戴状态。若接收的检测信号的振幅值处于0dB-59dB之间,发送的检测信号和接收的检测信号的振幅差值未处于第一范围,电子设备100处于已佩戴状态。
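基于振幅差值与第一范围的判断可以示意如下(系数x=0.4沿用正文示例,函数名为假设):

```python
def state_by_amp_difference(tx_amp: float, rx_amp: float, x: float = 0.4) -> str:
    """麦克风与扬声器位于不同部件时:
    发送振幅与接收振幅之差落在第一范围 [0, tx_amp*x] 内 -> 未佩戴;
    否则 -> 已佩戴。"""
    diff = tx_amp - rx_amp
    return "未佩戴" if 0 <= diff <= tx_amp * x else "已佩戴"
```

发送100dB、接收70dB时差值为30,处于第一范围[0,40],判定为未佩戴;接收50dB时差值为50,判定为已佩戴,与正文示例一致。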
可选的,第一范围可以表示为[0,A*x)或(0,A*x),其中,x为大于0小于1的系数。
可选的,第一范围可以表示为[k,A*x)或(k,A*x),其中,k大于等于0且小于A*x。
在一种可能的实现方式中,电子设备100可以基于发送的检测信号的振幅和接收的检测信号的振幅的差值确定出电子设备100的佩戴状态。具体的,当该发送的检测信号的振幅和接收的检测信号的振幅的差值为第一值时,确定出电子设备100处于已佩戴状态。当该发送的检测信号的振幅和接收的检测信号的振幅的差值为第二值时,确定出电子设备100处于非佩戴状态。
在一些实施例中,第一值的取值范围为[0,A*x]或(0,A*x],第二值的取值范围为(A*x,100]或(A*x,100)。其中,A为发送的检测信号的振幅值,x为大于0且小于1的系数。例如,当发送的检测信号的振幅值为100dB,且,x为0.4时,第一值的取值范围可以为0dB-40dB,第二值的取值范围可以为40dB-100dB。
可选的,第一值的取值范围为[k,A*x]或(k,A*x],第二值的取值范围为(A*x,p]或(A*x,p)。其中,k大于等于0且小于A*x,p大于A*x且小于等于100。
在一种可能的实现方式中,电子设备100可以基于接收的检测信号的振幅和发送的检测信号的振幅的百分比确定出电子设备100的佩戴状态。具体的,当该接收的检测信号的振幅和发送的检测信号的振幅的百分比为第三值时,确定出电子设备100处于已佩戴状态。当该接收的检测信号的振幅和发送的检测信号的振幅的百分比为第四值时,确定出电子设备100处于非佩戴状态。
在一些实施例中,第三值的取值范围为[0%,y%]或(0%,y%],第四值的取值范围为(y%,100%]或(y%,100%)。其中,y大于0且小于100。例如,当y为60时,第三值的取值范围可以为0%-60%,第四值的取值范围可以为60%-100%。
可选的,第三值的取值范围为[a%,y%]或(a%,y%],第四值的取值范围为(y%,b%]或(y%,b%)。其中,a大于等于0且小于y,b大于y且小于等于100。
S405.电子设备100处于已佩戴状态,电子设备100在第一模式下工作。
电子设备100在步骤S402-步骤S404中基于接收的检测信号的振幅确定出电子设备100处于已佩戴状态。电子设备100在第一模式下工作。相比于第二模式,处于第一模式的电子设备100虽然耗电量更大,但可以更加快速高效地执行用户的指令,便于用户使用。
电子设备100执行步骤S405后,可以继续执行步骤S402。
可选的,电子设备100执行步骤S405后,可以相隔预设空闲时间(例如,20ms),再执行步骤S402。这样,可以减少电子设备100发送/接收超声波的耗能。
需要说明的是,当电子设备100在第一模式下工作时,由于电子设备100的扬声器可以播放频率范围在20Hz-20000Hz之间的可听声波信号,用户可以听见该频率范围的可听声波信号。
在一些实施例中,为了用户可以收听扬声器发送的可听声波信号,电子设备100可以周期性轮流发送检测信号和可听声波信号。具体的,电子设备100可以在第一个播放周期发送检测信号,在第二个播放周期发送可听声波信号,电子设备100可以在第三个播放周期发送检测信号,在第四个播放周期发送可听声波信号,……,等等。其中,该多个播放周期的时长可以相同,也可以不同。
例如,电子设备100可以在发送超声波信号的过程中,将每1s大致划分为30个33ms的时间段。其中,每一个33ms内,电子设备100可以播放23ms可听声波信号,再播放10ms检测信号。这样,由于听觉暂留现象,用户可以将间隔播放的可听声波信号误认为是连续播放的。需要说明的是,当前划分方法仅为本申请实施例提供的示例,不应对检测信号和可听声波信号的发送时间构成限定。
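这种可听声波与检测信号分时播放的安排可以示意如下(时长划分沿用正文示例,并非固定要求):

```python
def build_schedule(slot_ms: int = 33, audio_ms: int = 23, total_ms: int = 99):
    """把时间轴按slot_ms划分,每个时间段内先播audio_ms的可听声波信号,
    再播剩余时间的检测信号,返回(类型, 起始ms, 结束ms)的列表。"""
    timeline = []
    for start in range(0, total_ms, slot_ms):
        timeline.append(("audible", start, start + audio_ms))
        timeline.append(("probe", start + audio_ms, start + slot_ms))
    return timeline

schedule = build_schedule()
```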
在一些实施例中,电子设备100可以设置X个扬声器,X为大于等于2的整数。电子设备100可以每隔预设空闲时间(例如,20ms),通过该X个扬声器中的Y个扬声器发送检测信号,用于佩戴检测,Y大于0且小于X。同时,电子设备100的X个扬声器中除了该发送检测信号的扬声器以外的所有扬声器可以继续发送可听声波信号。需要说明的是,在预设空闲时间内,电子设备100可以通过所有的扬声器发送可听声波信号。
可选的,电子设备100通过该Y个扬声器发送检测信号时,该Y个扬声器可以在第一时间段发送检测信号,在第二时间段发送可听声波信号,以此类推。
S406.电子设备100处于未佩戴状态,电子设备100在第二模式下工作。
电子设备100在步骤S402-步骤S404中基于接收的检测信号的振幅确定出电子设备100处于未佩戴状态,电子设备100在第二模式下工作。相比于第一模式,处于第二模式的电子设备100可以停止后台程序的运行(包括后台程序的刷新、下载等),暂停播放音频或减弱播放音频的音量,调低显示器显示亮度等等,降低电子设备100的耗电量。
电子设备100执行步骤S406后,可以继续执行步骤S402。
可选的,电子设备100执行步骤S406后,可以相隔预设空闲时间(例如,20ms),再执行步骤S402。这样,可以减少电子设备100发送/接收超声波的耗能。
可选的,电子设备100可以包括超声波发送传感器和超声波接收传感器,电子设备100可以通过超声波发送传感器发送超声波信号,通过超声波接收传感器接收超声波信号。电子设备100再基于超声波信号确定出佩戴检测的结果。
需要说明的是,麦克风和扬声器的位置不限于图3所示,只要电子设备100的麦克风和扬声器处于不同部件上,并且电子设备100处于已佩戴状态时接收的检测信号的振幅小于电子设备100处于未佩戴状态时接收的检测信号的振幅,电子设备100就可以通过上述图4所示的佩戴检测方法确定出电子设备100的佩戴状态。
示例性的,电子设备100的麦克风或扬声器的位置可以位于左侧眼镜腿、右侧眼镜腿、左侧眼镜架、右侧眼镜架、鼻托等不同部件上。如图9中的(a)所示,电子设备100的麦克风位于鼻托,扬声器位于眼镜腿。可以理解的是,不限于图9中的(a)所示的位置,电子设备的麦克风和扬声器可以位于其他位置,例如,电子设备100的扬声器位于鼻托,麦克风位于眼镜腿,等等,本申请实施例对此不作限定。
示例性的,电子设备100的扬声器的数量可以不止1个。如图9中的(b)所示,电子设备100的麦克风位于鼻托,扬声器A位于左侧眼镜腿,扬声器B位于右侧眼镜腿。可以理解的是,不限于图9中的(b)所示的部件,电子设备的麦克风和扬声器可以位于其他部件上,例如,电子设备100的扬声器A可以位于鼻托,麦克风可以位于左侧眼镜腿,扬声器B可以位于右侧眼镜腿。
需要说明的是,当电子设备100包括多个扬声器时,电子设备100可以基于多个扬声器发送的检测信号,判断用户是否佩戴电子设备100。若电子设备100接收到的该多个扬声器发送的检测信号中存在至少一段检测信号的振幅大于第一阈值,则,可以判定出用户未佩戴电子设备100。也就是说,只有电子设备100接收的所有扬声器发送的检测信号的振幅小于或等于第一阈值,电子设备100才能判定出用户佩戴该电子设备100。这样,当电子设备100的部分扬声器被误触遮挡时,电子设备100也可以确定出用户是否佩戴电子设备100。可选的,若电子设备100接收到的该多个扬声器发送的检测信号中存在至少一段检测信号的振幅大于或等于第一阈值,可以判定出用户未佩戴电子设备100。若电子设备100接收的所有扬声器发送的检测信号的振幅小于第一阈值,可以判定出用户佩戴该电子设备100。
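多扬声器场景下的判定规则(任一接收振幅超过第一阈值即判定未佩戴,全部不超过才判定已佩戴)可以示意为:

```python
def worn_with_multiple_speakers(rx_amps, threshold):
    """麦克风与各扬声器位于不同部件时的多扬声器佩戴判定(示意):
    rx_amps为麦克风接收到的各扬声器检测信号的振幅列表。"""
    return all(amp <= threshold for amp in rx_amps)
```

例如阈值为60dB时,接收振幅[30, 70]因存在超过阈值的一段而判定未佩戴,[30, 50]则判定已佩戴。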
可选的,当电子设备100包括多个(2个或2个以上)扬声器时,为了避免扬声器之间的声波影响(例如,叠加或抵消),每个扬声器发送的超声波的频率不同。或者,多个扬声器发送的超声波之间存在相位差,且相位差为超声波的一个周期。
可选的,电子设备100可以控制多个扬声器发送的超声波的相位,形成超声波波束,使得得到的超声波波束的方向朝向麦克风。这样,波束的能量可以集中在麦克风所处的方向上,可以通过麦克风接收的超声波的振幅得到更准确的佩戴检测结果。
示例性的,电子设备100的麦克风的数量可以不止1个。如图9中的(c)所示,电子设备100的麦克风A位于鼻托,麦克风B位于右侧眼镜腿,扬声器位于左侧眼镜腿。可以理解的是,不限于图9中的(c)所示的位置,电子设备的麦克风和扬声器可以位于其他位置,例如,电子设备100的扬声器位于鼻托,麦克风A位于左侧眼镜腿,麦克风B位于右侧眼镜腿。
需要说明的是,当电子设备100包括多个麦克风时,电子设备100可以基于多个麦克风接收的检测信号,判断用户是否佩戴电子设备100。若电子设备100的多个麦克风接收的检测信号中存在至少一段检测信号的振幅大于第一阈值,则,可以判定出用户未佩戴电子设备100。也就是说,只有电子设备100所有麦克风接收的检测信号的振幅小于或等于第一阈值,电子设备100才能判定出用户佩戴该电子设备100。这样,当电子设备100的部分麦克风被误触遮挡时,电子设备100也可以确定出用户是否佩戴电子设备100。可选的,若电子设备100的多个麦克风接收到的检测信号中存在至少一段检测信号的振幅大于或等于第一阈值,可以判定出用户未佩戴电子设备100。若电子设备100的所有麦克风接收的检测信号的振幅小于第一阈值,可以判定出用户佩戴该电子设备100。
示例性的,电子设备100的麦克风以及扬声器的数量可以不止1个。如图9中的(d)所示,电子设备100的麦克风A位于鼻托,麦克风B位于右侧眼镜腿,扬声器A和扬声器B位于左侧眼镜腿。可以理解的是,不限于图9中的(d)所示的位置,电子设备的麦克风和扬声器可以位于其他位置,例如,电子设备100的麦克风A和麦克风B位于鼻托,扬声器A位于左侧眼镜腿,扬声器B位于右侧眼镜腿,本申请实施例对此不作限定。
在一些应用场景中,电子设备100的麦克风和扬声器位于电子设备100的眼镜本体上的相同部件上。电子设备100开启后,电子设备100的扬声器可以发送超声波,电子设备100的麦克风可以接收该扬声器发送的超声波。电子设备100可以基于发送的超声波的振幅和接收的超声波的振幅,判断电子设备100的佩戴状态。
示例性的,电子设备100的麦克风和扬声器可以处于图1所示的眼镜腿102的同一侧镜腿上(例如,右侧眼镜腿处)。当电子设备100处于未佩戴状态时,如图10中的(a)所示,电子设备100的麦克风和扬声器处于相同部件上,电子设备100的扬声器发送的超声波没有被阻挡,大部分的超声波向四周发送,电子设备100的麦克风只能接收到少部分扬声器发送的超声波,此时,麦克风接收的超声波的振幅可以为A3。当电子设备100处于佩戴状态时,如图10中的(b)所示,电子设备100的麦克风和扬声器处于相同部件,电子设备100的扬声器发送的超声波被用户的头部阻挡,大部分的超声波反射到麦克风处,电子设备100的麦克风能接收到大部分扬声器发送的超声波,此时,麦克风接收的超声波的振幅可以为A4。其中,A3小于A4。需要说明的是,不限于右侧眼镜腿处,麦克风和扬声器还可以一同位于左侧眼镜腿处,或,鼻托处,或眼镜架处等等,本申请实施例对此不做限定。
因此,电子设备100的麦克风和扬声器处于相同部件时,电子设备100处于佩戴状态时的麦克风接收到的电子设备100的扬声器发送的超声波的振幅比电子设备100处于未佩戴状态时麦克风接收到的超声波的振幅大。这样,电子设备100可以基于佩戴状态和未佩戴状态接收到的超声波的振幅,判定出用户是否佩戴电子设备100。
例如,电子设备100可以在判定出麦克风接收的超声波的振幅大于第一阈值时,确定出电子设备100处于已佩戴状态。在判定出麦克风接收的超声波的振幅小于或等于第一阈值时,确定出电子设备100处于未佩戴状态。或者,电子设备100可以在判定出麦克风接收的超声波的振幅大于或等于第一阈值时,确定出电子设备100处于已佩戴状态。在判定出麦克风接收的超声波的振幅小于第一阈值时,确定出电子设备100处于未佩戴状态。
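与不同部件的场景相反,麦克风和扬声器位于相同部件时,接收振幅越大越可能处于佩戴状态,判据方向相反,可以示意为:

```python
def is_worn_same_part(rx_amp_db: float, threshold_db: float) -> bool:
    """麦克风与扬声器位于相同部件时的佩戴判断示意:
    佩戴后头部把超声波反射回麦克风,接收振幅增大;
    接收振幅大于第一阈值即判定为已佩戴。"""
    return rx_amp_db > threshold_db
```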
示例性的,如图11所示,该显示方法包括如下步骤:
S1101.电子设备100启动。
电子设备100启动后,可以执行步骤S1102。
S1102.电子设备100的扬声器发送超声波。
电子设备100的扬声器可以发送超声波信号。其中,电子设备100发送的超声波信号可以称为检测信号,目标超声波信号等等。
S1103.电子设备100的麦克风接收超声波。
当电子设备100通过扬声器发送检测信号时,可以通过麦克风接收附近的声波信号,附近的声波信号包括扬声器发送的检测信号。
具体的,步骤S1101-步骤S1103的详细描述可以参见图4所示实施例,在此不再赘述。
S1104.电子设备100判断接收的超声波的振幅是否超过第一阈值。
电子设备100可以针对接收的检测信号,进行傅里叶变换处理,得到接收的检测信号的振幅值。当检测信号的振幅大于第一阈值时,电子设备100可以执行步骤S1105。当检测信号的振幅小于或等于第一阈值时,电子设备100可以执行步骤S1106。其中,第一阈值的值可以为发送的检测信号的振幅值和第一系数的乘积。其中,第一系数可以为大于0,且小于等于1的分数值。在一些实施例中,第一系数可以为50%至80%之间的任一百分数。
可选的,当接收的检测信号的振幅大于或等于第一阈值时,电子设备100可以判定出用户已佩戴电子设备100,可以执行步骤S1105。当超声波的振幅小于第一阈值时,电子设备100可以判定出用户未佩戴电子设备100,执行步骤S1106。
具体的,电子设备100执行佩戴检测操作的具体描述可以参见图4所示实施例,在此不再赘述。
在一种可能的实现方式中,电子设备100可以基于发送的检测信号的振幅和接收的检测信号的振幅的差值确定出电子设备100的佩戴状态。具体的,当电子设备100确定出该发送的检测信号的振幅和接收的检测信号的振幅的差值处于第一范围时,确定出电子设备100处于已佩戴状态。当电子设备100确定出该发送的检测信号的振幅和接收的检测信号的振幅的差值不处于第一范围时,确定出电子设备100处于未佩戴状态。其中,第一范围的值可以为预设值,或者,基于发送的检测信号的振幅得到。示例性的,当发送的检测信号的振幅为A时,第一范围可以为[0,A*x],其中,x为大于0小于1的系数,例如,x可以为0.4。例如,当发送的检测信号的振幅为100dB时,第一范围为[0,40]。若接收的检测信号的振幅值处于60dB-100dB之间,发送的检测信号和接收的检测信号的振幅差值处于第一范围,电子设备100处于已佩戴状态。若接收的检测信号的振幅值处于0dB-59dB之间,发送的检测信号和接收的检测信号的振幅差值未处于第一范围,电子设备100处于未佩戴状态。
可选的,第一范围可以表示为[0,A*x)或(0,A*x),其中,x为大于0小于1的系数。
可选的,第一范围可以表示为[k,A*x)或(k,A*x),其中,k大于等于0且小于A*x。
在一种可能的实现方式中,电子设备100可以基于发送的检测信号的振幅和接收的检测信号的振幅的差值确定出电子设备100的佩戴状态。具体的,当该发送的检测信号的振幅和接收的检测信号的振幅的差值为第一值时,确定出电子设备100处于非佩戴状态。当该发送的检测信号的振幅和接收的检测信号的振幅的差值为第二值时,确定出电子设备100处于已佩戴状态。
在一些实施例中,第一值的取值范围为[0,A*x]或(0,A*x],第二值的取值范围为(A*x,100]或(A*x,100)。其中,A为发送的检测信号的振幅值,x为大于0且小于1的系数。例如,当发送的检测信号的振幅值为100dB,且,x为0.4时,第一值的取值范围可以为0dB-40dB, 第二值的取值范围可以为40dB-100dB。
可选的,第一值的取值范围为[k,A*x]或(k,A*x],第二值的取值范围为(A*x,p]或(A*x,p)。其中,k大于等于0且小于A*x,p大于A*x且小于等于100。
在一种可能的实现方式中,电子设备100可以基于接收的检测信号的振幅和发送的检测信号的振幅的百分比确定出电子设备100的佩戴状态。具体的,当该接收的检测信号的振幅和发送的检测信号的振幅的百分比为第三值时,确定出电子设备100处于非佩戴状态。当该接收的检测信号的振幅和发送的检测信号的振幅的百分比为第四值时,确定出电子设备100处于已佩戴状态。
在一些实施例中,第三值的取值范围为[0%,y%]或(0%,y%],第四值的取值范围为(y%,100%]或(y%,100%)。其中,y大于0且小于100。例如,当y为60时,第三值的取值范围可以为0%-60%,第四值的取值范围可以为60%-100%。
可选的,第三值的取值范围为[a%,y%]或(a%,y%],第四值的取值范围为(y%,b%]或(y%,b%)。其中,a大于等于0且小于y,b大于y且小于等于100。
S1105.电子设备100处于已佩戴状态,电子设备100在第一模式下工作。
电子设备100在步骤S1102-步骤S1104中基于接收的检测信号的振幅确定出电子设备100处于已佩戴状态。电子设备100在第一模式下工作。由于相比于第二模式,处于第一模式的电子设备100可以更加快速高效地执行用户的指令,便于用户使用。
电子设备100执行步骤S1105后,可以继续执行步骤S1102。
可选的,电子设备100执行步骤S1105后,可以相隔预设空闲时间(例如,20ms),再执行步骤S1102。这样,可以减少电子设备100发送/接收超声波的耗能。
其中,电子设备100在第一模式下时,发送检测信号的描述可以参见图4所示实施例,在此不再赘述。
S1106.电子设备100处于未佩戴状态,电子设备100在第二模式下工作。
电子设备100在步骤S1102-步骤S1104中基于接收的检测信号的振幅确定出电子设备100处于未佩戴状态,电子设备100在第二模式下工作。相比于第一模式,处于第二模式的电子设备100可以停止后台程序的运行(包括后台程序的刷新、下载等),暂停播放音频或减弱播放音频的音量,调低显示器显示亮度等等,降低电子设备100的耗电量。
电子设备100执行步骤S1106后,可以继续执行步骤S1102。
可选的,电子设备100执行步骤S1106后,可以相隔预设空闲时间(例如,20ms),再执行步骤S1102。这样,可以减少电子设备100发送/接收超声波的耗能。
在一些实施例中,电子设备100可以在基于检测信号判断电子设备100的佩戴状态后,再基于其他传感器(例如,接近传感器,IMU等等)进一步确认电子设备100的佩戴状态。电子设备100可以在基于检测信号和其他传感器确定出用户佩戴电子设备100时,在第一模式下工作。电子设备100可以在基于检测信号或其他传感器确定出用户未佩戴电子设备100时,在第二模式下工作。
可选的,电子设备100可以包括超声波收发传感器,电子设备100可以通过超声波收发传感器发送或接收超声波信号。电子设备100再基于超声波信号确定出佩戴检测的结果。
在一些实施例中,当电子设备100的工作模式为第二模式时,电子设备100为了减少功耗,可以断开和电子设备200之间的通信连接。当电子设备100的工作模式为第一模式时, 为了电子设备100可以和电子设备200进行数据交换,电子设备100可以和电子设备200建立通信连接。因此,当电子设备100的工作模式从第二模式切换为第一模式时,可以和电子设备200建立通信连接。当电子设备100的工作模式从第一模式切换为第二模式时,若电子设备100和电子设备200之间建立有通信连接,电子设备100可以断开和电子设备200建立的通信连接。其中,电子设备200可以为平板电脑、手机、桌面型计算机、膝上型计算机、手持计算机、笔记本电脑、车载设备、智能家居设备和/或智慧城市设备等等。
在一些实施例中,电子设备100处于未佩戴状态时,可以开启第二模式,电子设备100可以通过检测信号检测到电子设备100处于已佩戴状态时,将第二模式切换为第一模式。其中,由于电子设备100处于第二模式的耗电量小于电子设备100处于第一模式的耗电量。这样,电子设备100可以在未佩戴状态时,在第二模式下工作,减少电子设备100的耗电量,电子设备100可以在已佩戴状态时,在第一模式下工作,便于用户使用。
需要说明的是,不限于超声波,电子设备100也可以通过其他信号实现上述佩戴检测方法。其中,其他信号可以包括但不限于次声波,红外线,可见光等等。
需要说明的是,麦克风和扬声器的位置不限于图10所示,只要电子设备100的麦克风和扬声器处于相同部件上,并且电子设备100处于已佩戴状态时接收的检测信号的振幅大于电子设备100处于未佩戴状态时接收的检测信号的振幅,电子设备100就可以通过上述图11所示的佩戴检测方法确定出电子设备100的佩戴状态。
示例性的,电子设备100的麦克风或扬声器的位置可以位于左侧眼镜腿、右侧眼镜腿、左侧眼镜架、右侧眼镜架、鼻托等相同部件上。如图12中的(a)所示,电子设备100的麦克风和扬声器位于鼻托。可以理解的是,不限于图12中的(a)所示的位置,电子设备的麦克风和扬声器可以位于其他位置,例如,右侧眼镜腿,等等,本申请实施例对此不作限定。
示例性的,电子设备100的扬声器的数量可以不止1个。如图12中的(b)所示,电子设备100的麦克风,扬声器A,扬声器B都位于右侧眼镜腿。可以理解的是,不限于图12中的(b)所示的部件,电子设备的麦克风和扬声器可以同时位于其他部件上,例如,左侧眼镜腿,鼻托等。
需要说明的是,当电子设备100包括多个扬声器时,电子设备100可以基于多个扬声器发送的检测信号,判断用户是否佩戴电子设备100。若电子设备100接收到的该多个扬声器发送的检测信号中存在至少一段检测信号的振幅小于第一阈值,则,可以判定出用户未佩戴电子设备100。也就是说,只有电子设备100接收的所有扬声器发送的检测信号的振幅大于或等于第一阈值,电子设备100才能判定出用户佩戴该电子设备100。这样,当电子设备100的部分扬声器被误触遮挡时,电子设备100也可以确定出用户是否佩戴电子设备100。可选的,若电子设备100接收到的该多个扬声器发送的检测信号中存在至少一段检测信号的振幅小于或等于第一阈值,可以判定出用户未佩戴电子设备100。若电子设备100接收的所有扬声器发送的检测信号的振幅大于第一阈值,可以判定出用户佩戴该电子设备100。
可选的,当电子设备100包括多个(2个或2个以上)扬声器时,为了避免扬声器之间的声波影响(例如,叠加或抵消),每个扬声器发送的超声波的频率不同。或者,多个扬声器发送的超声波之间存在相位差,且相位差为超声波的一个周期。
示例性的,电子设备100的麦克风的数量可以不止1个。如图12中的(c)所示,电子设备100的麦克风A,麦克风B和扬声器都位于右侧眼镜腿。可以理解的是,不限于图12中的(c)所示的位置,电子设备的麦克风和扬声器可以同时位于其他部件上,例如,左侧眼镜腿。
需要说明的是,当电子设备100包括多个麦克风时,电子设备100可以基于多个麦克风接收的检测信号,判断用户是否佩戴电子设备100。若电子设备100的多个麦克风接收的检测信号中存在至少一段检测信号的振幅小于第一阈值,则,可以判定出用户未佩戴电子设备100。也就是说,只有电子设备100所有麦克风接收的检测信号的振幅大于或等于第一阈值,电子设备100才能判定出用户佩戴该电子设备100。这样,当电子设备100的部分麦克风被误触遮挡时,电子设备100也可以确定出用户是否佩戴电子设备100。可选的,若电子设备100的多个麦克风接收到的检测信号中存在至少一段检测信号的振幅小于或等于第一阈值,可以判定出用户未佩戴电子设备100。若电子设备100的所有麦克风接收的检测信号的振幅大于第一阈值,可以判定出用户佩戴该电子设备100。
示例性的,电子设备100的麦克风以及扬声器的数量可以不止1个。如图12中的(d)所示,电子设备100的麦克风A,麦克风B,扬声器A和扬声器B都位于右侧眼镜腿。可以理解的是,不限于图12中的(d)所示的位置,电子设备的麦克风和扬声器可以位于其他位置,例如,左侧眼镜腿等等,本申请实施例对此不作限定。这样,电子设备100的麦克风B可以基于接收的扬声器A发送的检测信号和扬声器B发送的检测信号分别进行佩戴检测,电子设备100的麦克风A也可以基于接收的扬声器A发送的检测信号和扬声器B发送的检测信号分别进行佩戴检测,让佩戴检测的结果更加准确。
在一种可能的实现方式中,电子设备100既包括位于相同部件的麦克风和扬声器,也包括位于不同部件的麦克风和扬声器。电子设备100可以结合上述图4和图9提供的佩戴检测方法,共同判断电子设备100的佩戴状态。这样,电子设备100可以更加准确地判断电子设备100的佩戴状态。
示例性的,电子设备100的扬声器的数量可以不止1个。如图13中的(a)所示,电子设备100的麦克风,扬声器B位于右侧眼镜腿,扬声器A位于左侧眼镜腿。
需要说明的是,当电子设备100包括多个扬声器时,电子设备100可以基于多个扬声器发送的检测信号,判断用户是否佩戴电子设备100。其中,电子设备100可以基于麦克风接收的位于同一部件上的扬声器(例如扬声器B)发送的检测信号,通过图11所示的检测方法判断电子设备100的佩戴状态。电子设备100可以基于麦克风接收的位于不同部件上的扬声器(例如扬声器A)发送的检测信号,通过图4所示的检测方法判断电子设备100的佩戴状态。需要说明的是,只有基于所有扬声器发送的检测信号,都判定出电子设备100处于已佩戴状态时,才可以确定电子设备100处于已佩戴状态。
可选的,当电子设备100包括多个(2个或2个以上)扬声器时,为了避免扬声器之间的声波影响(例如,叠加或抵消),每个扬声器发送的超声波的频率不同。或者,多个扬声器发送的超声波之间存在相位差,且相位差为超声波的一个周期。
示例性的,电子设备100的麦克风的数量可以不止1个。如图13中的(b)所示,电子设备100的麦克风B和扬声器都位于右侧眼镜腿,麦克风A位于左侧眼镜腿。
需要说明的是,当电子设备100包括多个麦克风时,电子设备100可以基于多个麦克风接收的检测信号,判断用户是否佩戴电子设备100。其中,电子设备100可以基于麦克风(例如麦克风B)接收的和该麦克风位于同一部件上的扬声器发送的检测信号,通过图11所示的检测方法判断电子设备100的佩戴状态。电子设备100可以基于麦克风(例如麦克风A)接收的和该麦克风位于不同部件上的扬声器发送的检测信号,通过图4所示的检测方法判断电子设备100的佩戴状态。需要说明的是,只有基于所有麦克风接收的检测信号,都判定出电 子设备100处于已佩戴状态时,才可以确定电子设备100处于已佩戴状态。
示例性的,电子设备100的麦克风以及扬声器的数量可以不止1个。如图13中的(c)所示,电子设备100的麦克风A,扬声器A都位于左侧眼镜腿,麦克风B和扬声器B都位于右侧眼镜腿。这样,电子设备100的麦克风B可以基于接收的扬声器A发送的检测信号和扬声器B发送的检测信号分别进行佩戴检测,电子设备100的麦克风A也可以基于接收的扬声器A发送的检测信号和扬声器B发送的检测信号分别进行佩戴检测,让佩戴检测的结果更加准确。
需要说明的是,当电子设备100包括多个麦克风时,电子设备100可以基于多个麦克风接收的检测信号,判断用户是否佩戴电子设备100。其中,电子设备100可以基于麦克风(例如麦克风B)接收的和该麦克风位于同一部件上的扬声器(例如,扬声器B)发送的检测信号,通过图11所示的检测方法判断电子设备100的佩戴状态,同时基于该麦克风接收的位于不同部件上的扬声器(例如,扬声器A)发送的检测信号,通过图4所示的检测方法判断电子设备100的佩戴状态。同时,电子设备100可以基于麦克风(例如麦克风A)接收的和该麦克风位于同一部件上的扬声器(例如,扬声器A)发送的检测信号,通过图11所示的检测方法判断电子设备100的佩戴状态,同时基于该麦克风接收的位于不同部件上的扬声器(例如,扬声器B)发送的检测信号,通过图4所示的检测方法判断电子设备100的佩戴状态。需要说明的是,只有基于所有麦克风接收的检测信号,都判定出电子设备100处于已佩戴状态时,才可以确定电子设备100处于已佩戴状态。
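混合布局(既有同部件、又有不同部件的麦克风-扬声器对)下的综合判定可以示意如下(数据结构为假设):

```python
def combined_worn(pairs, threshold):
    """pairs为(same_part, rx_amp)列表:
    same_part表示该麦克风-扬声器对是否位于同一部件,rx_amp为接收振幅。
    同一部件的对按"接收振幅大于阈值为已佩戴"判定,
    不同部件的对按"接收振幅不超过阈值为已佩戴"判定;
    所有对都判定为已佩戴,才认为设备处于已佩戴状态。"""
    for same_part, rx_amp in pairs:
        worn = rx_amp > threshold if same_part else rx_amp <= threshold
        if not worn:
            return False
    return True
```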
可以理解的是,不限于图13中所示的位置,电子设备的麦克风和扬声器可以位于其他部件,本申请实施例对此不作限定。
在一些实施例中,当电子设备100为包括麦克风、扬声器和处理器的头盔时,若电子设备100的麦克风和扬声器位于头盔的不同部分,电子设备100可以通过图4示出的佩戴检测方法,检测头盔的佩戴状况。若电子设备100的麦克风和扬声器位于头盔的相同部分,电子设备100可以通过图11示出的佩戴检测方法,检测头盔的佩戴状况。例如,头盔可以分为五个部分,分别为,前部、后部、顶部、左部、右部。其中,前部可以为和用户的额头接触的部分区域,后部可以为和用户的后脑勺接触的部分区域,顶部可以为和用户的头顶接触的部分区域,左部可以为和用户的左耳接触的部分区域,右部可以为和用户的右耳接触的部分区域。头盔的部分划分仅为示例,本申请对此不作限定。
在一些应用场景中,电子设备100的显示装置为显示屏或投影装置。例如,电子设备100可以为AR设备。电子设备100可以通过扬声器发送检测信号,通过麦克风接收扬声器发送的检测信号。电子设备100可以基于接收的检测信号确定出电子设备100处于已佩戴状态时,执行已佩戴状态对应的操作,例如,设置工作模式为第一模式。电子设备100可以基于接收的检测信号确定出电子设备100处于未佩戴状态时,执行未佩戴状态对应的操作,例如,设置工作模式为第二模式。这样,基于该佩戴检测方法,可以减少电子设备100的功耗。
具体的,当电子设备100的麦克风和扬声器位于不同部件时,电子设备100执行佩戴检测的步骤可以参见如图4所示实施例。当电子设备100的麦克风和扬声器位于相同部件时,电子设备100执行佩戴检测的步骤可以参见如图11所示实施例。
在一些实施例中,当电子设备100的工作模式为第二模式时,电子设备100为了减少功耗,可以暂停播放视频图像。当电子设备100的工作模式为第一模式时,为了用户可以观看视频,电子设备100可以播放视频图像。因此,当电子设备100的工作模式从第二模式切换为第一模式时,可以继续播放暂停的视频文件。当电子设备100的工作模式从第一模式切换为第二模式时,电子设备100可以暂停播放视频文件。
示例性的,如图14中的(a)所示,电子设备100处于已佩戴状态,电子设备100播放视频文件。其中,电子设备100可以在左侧的显示装置上显示视频页面1201,在右侧的显示装置上显示视频页面1202。视频页面1201和视频页面1202中包括有视频图像。
当电子设备100基于本申请实施例提供的佩戴检测方法确定出电子设备100从已佩戴状态切换至未佩戴状态时,电子设备100可以暂停播放视频文件。如图14中的(b)所示,电子设备100处于未佩戴状态,电子设备100暂停播放视频文件。其中,电子设备100可以在左侧显示装置上的视频页面1201上显示视频暂停图标1211,在右侧显示装置上的视频页面1202上显示视频暂停图标1212。其中,视频页面1201和视频页面1202中包括有视频图像。视频暂停图标1211和视频暂停图标1212可以用于提示用户电子设备100已经暂停播放该视频文件。这样,电子设备100可以在从已佩戴状态切换至未佩戴状态时,暂停播放视频文件,不需要用户手动暂停播放视频,减少用户操作,便于用户使用。
同理,电子设备100处于未佩戴状态时,可以显示如图14中(b)所示的视频暂停图标1211和视频暂停图标1212。当电子设备100基于本申请实施例提供的佩戴检测方法确定出电子设备100从未佩戴状态切换至已佩戴状态时,电子设备100可以继续播放视频文件。电子设备100可以取消显示视频暂停图标1211和视频暂停图标1212,继续播放视频文件,如图14中的(a)所示。这样,电子设备100可以在从未佩戴状态切换至已佩戴状态时,继续播放暂停的视频文件,不需要用户手动播放视频,减少用户操作,便于用户使用。
在一些实施例中,电子设备100包括显示屏或投影仪时,当电子设备100的工作模式为第二模式时,为了减少功耗,电子设备100可以在第二模式时息屏(又称为灭屏,熄屏)。当电子设备100的工作模式为第一模式时,为了用户可以观看电子设备100的显示内容,电子设备100可以亮屏。因此,当电子设备100的工作模式从第二模式切换为第一模式时,可以亮屏,显示应用的界面。当电子设备100的工作模式从第一模式切换为第二模式时,电子设备100可以息屏,节约功耗。
上述实施例所描述的实现方式仅为示例性说明,并不对本申请其他实施例构成任何限制。具体内部实现方式可能根据电子设备类型不同、所搭载的操作系统的不同、所使用的程序、所调用的接口的不同而不同,本申请实施例不作任何限制,可以实现本申请实施例所描述的特征功能即可。
以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的范围。

Claims (29)

  1. 一种头戴式设备,其特征在于,包括:第一扬声器,麦克风,处理器;
    所述第一扬声器,用于发送第一超声波;
    所述麦克风,用于接收第二超声波,所述第二超声波为所述麦克风接收的所述第一超声波的至少一部分;
    当所述第一超声波的振幅和所述第二超声波的振幅之间的差值为第一值时,所述头戴式设备被配置为处于第一状态;
    当所述第一超声波的振幅和所述第二超声波的振幅之间的差值为第二值时,所述头戴式设备被配置为处于第二状态,其中,所述第一值与所述第二值不同。
  2. 根据权利要求1所述的头戴式设备,其特征在于,所述第一状态为已佩戴状态,所述第二状态为非佩戴状态。
  3. 根据权利要求2所述的头戴式设备,其特征在于,所述头戴式设备处于所述第一状态的耗电量大于所述头戴式设备处于所述第二状态的耗电量。
  4. 根据权利要求1至3中任一所述的头戴式设备,其特征在于,所述麦克风位于第一部件,所述第一扬声器位于第二部件,所述第一部件和所述第二部件不同,所述第一值大于所述第二值。
  5. 根据权利要求4所述的头戴式设备,其特征在于,所述第一值的取值范围为40分贝-100分贝,和/或,所述第二值的取值范围为0分贝-40分贝。
  6. 根据权利要求1至3中任一所述的头戴式设备,其特征在于,所述麦克风和所述第一扬声器都位于第一部件,所述第一值小于所述第二值。
  7. 根据权利要求6所述的头戴式设备,其特征在于,所述第一值的取值范围为0分贝-40分贝,和/或,所述第二值的取值范围为40分贝-100分贝。
  8. 根据权利要求1至7中任一项所述的头戴式设备,其特征在于,所述麦克风,还用于在所述第一扬声器发送第一超声波之前,接收非所述第一扬声器发送的第三超声波;所述第一超声波被配置为与所述第三超声波不同。
  9. 根据权利要求8所述的头戴式设备,其特征在于,所述第一超声波被配置为与所述第三超声波不同,包括:
    所述第一超声波被配置为与所述第三超声波的频率不同和/或占空比不同。
  10. 根据权利要求9所述的头戴式设备,其特征在于,所述第一超声波与所述第三超声波的频率不同,包括:
    所述第一超声波的频率和所述第三超声波的频率的差值大于第一频率差值。
  11. 根据权利要求1至10中任一项所述的头戴式设备,其特征在于,所述头戴式设备还包括第二扬声器;
    所述第二扬声器,用于发送可听声波信号,所述可听声波信号的频率和所述第一超声波的频率不同。
  12. 根据权利要求11所述的头戴式设备,其特征在于,所述第一扬声器,具体用于在第一时间段内发送所述第一超声波;
    所述第二扬声器,具体用于在第一时间段内发送所述可听声波信号。
  13. 根据权利要求1至12中任一项所述的头戴式设备,其特征在于,所述第一扬声器,具体用于在第一时间段内发送所述第一超声波,在第二时间段内发送可听声波信号,所述可听声波信号的频率和所述第一超声波的频率不同。
  14. 根据权利要求13所述的头戴式设备,其特征在于,所述第一扬声器,还用于在第三时间段内发送所述第一超声波,在第四时间段内发送所述可听声波信号,所述第二时间段在所述第一时间段之后,所述第三时间段在所述第二时间段之后,所述第四时间段在所述第三时间段之后。
  15. 根据权利要求13或14所述的头戴式设备,其特征在于,所述第一时间段和所述第二时间段周期性间隔发送,所述第一时间段的范围包括5ms至15ms,所述第二时间段的范围包括20ms至40ms。
  16. 根据权利要求1至15中任一项所述的头戴式设备,其特征在于,所述第一扬声器,还用于在发送所述第一超声波之前,发送前缀信号,所述前缀信号用于标识所述第一超声波。
  17. 根据权利要求1至16中任一项所述的头戴式设备,其特征在于,所述头戴式设备的设备类型包括以下任意一种:智能眼镜,头罩式耳机,增强现实AR眼镜,虚拟现实VR眼镜,混合现实MR眼镜,智能头盔。
  18. 根据权利要求4所述的头戴式设备,其特征在于,所述头戴式设备为眼镜,所述第一部件为左侧眼镜腿,所述第二部件为右侧眼镜腿,或者,所述第一部件为右侧眼镜腿,所述第二部件为左侧眼镜腿,或者,所述第一部件为鼻托,所述第二部件为左侧眼镜腿,或者,所述第一部件为鼻托,所述第二部件为右侧眼镜腿。
  19. 根据权利要求4所述的头戴式设备,其特征在于,所述头戴式设备为AR眼镜,所述处理器,还用于在所述头戴式设备被配置为处于第一状态或第二状态之前,播放第一视频;
    所述处理器,还用于在所述头戴式设备被配置为处于第一状态之后,继续播放所述第一视频;
    所述处理器,还用于在所述头戴式设备被配置为处于第二状态之后,暂停播放所述第一 视频。
  20. 根据权利要求2所述的头戴式设备,其特征在于,所述处理器,还用于在所述头戴式设备被配置为处于第一状态或第二状态之前,播放第一音频;
    所述处理器,还用于在所述头戴式设备被配置为处于第一状态之后,继续播放所述第一音频;
    所述处理器,还用于在所述头戴式设备被配置为处于第二状态之后,暂停播放所述第一音频。
  21. 根据权利要求1至20中任一项所述的头戴式设备,其特征在于,所述头戴式设备还包括接近传感器,所述接近传感器包括电容传感器、惯性测量单元;
    所述接近传感器,用于在检测到用户靠近所述头戴式设备的操作后,通知所述第一扬声器发送所述第一超声波。
  22. 一种佩戴检测方法,应用于包括麦克风和第一扬声器的头戴式设备,其特征在于,所述方法包括:
    所述头戴式设备通过所述第一扬声器发送第一超声波;
    所述头戴式设备通过所述麦克风接收第二超声波,所述第二超声波为所述麦克风接收的所述第一超声波的至少一部分;
    当所述第一超声波的振幅和所述第二超声波的振幅之间的差值为第一值时,所述头戴式设备被配置为处于第一状态;
    当所述第一超声波的振幅和所述第二超声波的振幅之间的差值为第二值时,所述头戴式设备被配置为处于第二状态,其中,所述第一值与所述第二值不同。
  23. 根据权利要求22所述的佩戴检测方法,其特征在于,所述第一状态为已佩戴状态,所述第二状态为非佩戴状态。
  24. 根据权利要求23所述的佩戴检测方法,其特征在于,所述头戴式设备处于所述第一状态的耗电量大于所述头戴式设备处于第二状态的耗电量。
  25. 根据权利要求22至24中任一所述的佩戴检测方法,其特征在于,所述麦克风位于第一部件,所述第一扬声器位于第二部件,所述第一部件和所述第二部件不同,所述第一值大于所述第二值。
  26. 根据权利要求22至24中任一所述的佩戴检测方法,其特征在于,所述麦克风和所述第一扬声器都位于第一部件,所述第一值小于所述第二值。
  27. 根据权利要求22至26中任一项所述的佩戴检测方法,其特征在于,所述头戴式设备为AR眼镜,所述方法还包括:
    在所述头戴式设备被配置为处于所述第一状态或所述第二状态之前,所述头戴式设备播放第一视频;
    在所述头戴式设备被配置为处于所述第一状态之后,所述头戴式设备继续播放所述第一视频;
    在所述头戴式设备被配置为处于所述第二状态之后,所述头戴式设备暂停播放所述第一视频。
  28. 根据权利要求22至27中任一项所述的佩戴检测方法,其特征在于,所述方法还包括:
    在所述头戴式设备被配置为处于所述第一状态或所述第二状态之前,所述头戴式设备播放第一音频;
    在所述头戴式设备被配置为处于所述第一状态之后,所述头戴式设备继续播放所述第一音频;
    在所述头戴式设备被配置为处于所述第二状态之后,所述头戴式设备暂停播放所述第一音频。
  29. 一种计算机可读存储介质,包括指令,其特征在于,当所述指令在头戴式设备上运行时,使得所述头戴式设备执行如权利要求22至28中任一项所述的方法。
PCT/CN2023/083912 2022-03-28 2023-03-25 一种佩戴检测方法及相关装置 WO2023185698A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210310203.5 2022-03-28
CN202210310203.5A CN116860102A (zh) 2022-03-28 2022-03-28 一种佩戴检测方法及相关装置

Publications (1)

Publication Number Publication Date
WO2023185698A1 true WO2023185698A1 (zh) 2023-10-05

Family

ID=88199401

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/083912 WO2023185698A1 (zh) 2022-03-28 2023-03-25 一种佩戴检测方法及相关装置

Country Status (2)

Country Link
CN (1) CN116860102A (zh)
WO (1) WO2023185698A1 (zh)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104750220A (zh) * 2011-07-20 2015-07-01 谷歌公司 确定可穿戴设备是否在使用中
CN105094326A (zh) * 2015-07-20 2015-11-25 联想(北京)有限公司 信息处理方法及电子设备
CN109460082A (zh) * 2019-01-15 2019-03-12 努比亚技术有限公司 规避外界干扰的方法、移动终端及计算机可读存储介质
CN111158169A (zh) * 2020-01-21 2020-05-15 东莞市吉声科技有限公司 一种智能眼镜及智能眼镜的控制方法
WO2021020686A1 (ko) * 2019-07-30 2021-02-04 삼성전자 주식회사 헤드셋 전자 장치 및 그와 연결되는 전자 장치
US20210088810A1 (en) * 2019-09-24 2021-03-25 Dragon Summit Group Inc. Smart glasses
CN113342406A (zh) * 2021-06-25 2021-09-03 歌尔科技有限公司 一种穿戴设备的亮屏控制方法、穿戴设备
CN113613156A (zh) * 2021-04-26 2021-11-05 深圳市冠旭电子股份有限公司 佩戴状态的检测方法、装置、头戴式耳机及存储介质


Also Published As

Publication number Publication date
CN116860102A (zh) 2023-10-10


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23778065

Country of ref document: EP

Kind code of ref document: A1