WO2023207862A1 - Method and apparatus for determining head posture

Method and apparatus for determining head posture

Info

Publication number
WO2023207862A1
Authority
WO
WIPO (PCT)
Prior art keywords
head, electronic device, user, image, posture
Application number
PCT/CN2023/090134
Other languages
English (en), French (fr)
Inventors
姜永航 (Jiang Yonghang)
黄洁静 (Huang Jiejing)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2023207862A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012Head tracking input arrangements

Definitions

  • the present application relates to the field of terminal technology, and in particular to a method and device for determining head posture.
  • Head wearable devices such as smart glasses and headphones generally have built-in inertial sensors that can be used to detect head posture.
  • Due to differences between the heads of different people, ear heights and auricle shapes vary from person to person. Wearing habits for head-worn devices such as smart glasses and headphones also differ widely. As a result, the relative posture between the head-worn device and the head differs from user to user, and this deviation is difficult to correct, so the head posture cannot be detected accurately and the accuracy of subsequent applications is affected.
  • the head posture measured using the same headset under different wearing methods is often different.
  • the present application provides a method and device for determining the head posture to correct the user's head posture so that the user's head posture estimated by the electronic device is closer to the user's actual head posture.
  • a method for determining a head posture is provided, which is applied to a first electronic device.
  • the method includes: obtaining a first head posture parameter of a user; in the process of obtaining the first head posture parameter, obtaining a first device posture parameter of a target electronic device, where the target electronic device is the second electronic device or the first electronic device; and obtaining a target head posture parameter according to the first head posture parameter and the first device posture parameter, where the target head posture parameter is the corrected head posture parameter of the user.
  • the first electronic device in the embodiment of this application may be a head-mounted device, or a mobile phone, etc.
  • in some embodiments, the target electronic device is the second electronic device; in other embodiments, the target electronic device is the first electronic device.
  • the first electronic device corrects the user's first head posture parameter by acquiring the user's first head posture parameter and the first device posture parameter of the second electronic device, so that a target head posture parameter closer to the user's real head posture is obtained. Because the first head posture parameter is corrected, differences in users' heads and in their habits of wearing head-worn devices no longer cause large errors, and subsequent applications that run based on the head posture are more accurate.
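  • As an illustration only, the following Python sketch shows one way such a correction could work, assuming that every posture parameter is a 3D rotation expressed in a shared world frame and that the head-to-device mounting offset is fixed while the device is worn; the function names and the offset model are assumptions of this sketch, not taken from the claims.

        from scipy.spatial.transform import Rotation as R

        def calibrate_offset(head_pose_world: R, device_pose_world: R) -> R:
            # Estimate the head-to-device mounting offset once, at a moment when
            # the head pose is known independently (e.g. from a head image).
            return head_pose_world.inv() * device_pose_world

        def corrected_head_pose(device_pose_world: R, offset: R) -> R:
            # Recover the head pose from the device pose using the stored offset.
            return device_pose_world * offset.inv()

        # Usage: head pose from a photo, device pose from the headset's inertial sensor.
        head = R.from_euler("zyx", [5.0, -10.0, 2.0], degrees=True)
        device = R.from_euler("zyx", [8.0, -3.0, 1.0], degrees=True)
        offset = calibrate_offset(head, device)
        print(corrected_head_pose(device, offset).as_euler("zyx", degrees=True))

  • With a fixed offset of this kind, later headset readings alone would suffice to track the head, regardless of how the particular user happens to wear the device.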
  • the first electronic device acquiring the user's first head posture parameter includes: the first electronic device acquiring the user's head image.
  • the first electronic device obtains the user's first head posture parameter based on the user's head image.
  • in some embodiments, the target electronic device is the second electronic device, and the user's head image is collected by the first electronic device.
  • the first electronic device further includes a first sensor.
  • the method provided by the embodiment of the present application further includes: the first electronic device acquires, through the first sensor, a second device posture parameter of the first electronic device within a first time period, where the first time period is the time period during which the first electronic device collects the user's head image.
  • the first electronic device obtains the first head posture parameter of the user based on the user's head image, including: the first electronic device obtains the initial head posture parameter based on the user's head image.
  • the first electronic device obtains the first head posture parameter based on the initial head posture parameter and the second device posture parameter.
  • in some embodiments, the target electronic device is the second electronic device and the user's head image is collected by the first electronic device. The method provided by the embodiment of the present application may further include: when the first electronic device detects and determines the user's head posture, it collects an image of the user's head while wearing the second electronic device through an image acquisition component (such as a camera) of the first electronic device.
  • the first electronic device acquiring the user's head image includes: triggering a third electronic device to acquire the user's head image when a trigger condition for detecting the head posture parameter is met, and obtaining, from the third electronic device, the head image of the user collected by the third electronic device.
  • when the first electronic device is a mobile phone or a head-worn device, the first electronic device can trigger a device other than the first electronic device to collect the user's head image.
  • in some embodiments, the second electronic device is a head-worn device, and the first electronic device obtaining the first device posture parameter of the second electronic device includes: the first electronic device obtains a first image, where the first image is a head image of the user wearing the head-worn device; the first electronic device determines the first device posture parameter of the second electronic device based on the first image.
  • in this way, the first electronic device can obtain the first device posture parameter of the second electronic device by analyzing the first image.
  • in some embodiments, the second electronic device is a head-worn device provided with a second sensor, and the second sensor is used to collect the first device posture parameter of the second electronic device. In this case, the first electronic device obtaining the first device posture parameter of the second electronic device includes: the first electronic device receives the first device posture parameter from the second electronic device.
  • the method provided by the embodiment of the present application further includes: the first electronic device triggers the second electronic device to collect the first device posture parameter of the second electronic device.
  • the first electronic device can send a collection instruction to the second electronic device through a communication connection with the second electronic device.
  • the collection instruction is used to trigger the second electronic device to collect and report the posture parameters of the first device.
  • in some embodiments, the second electronic device includes a first component and a second component, and the first electronic device obtaining the first device posture parameter of the second electronic device includes: the first electronic device obtains the device posture parameter of the first component and the device posture parameter of the second component; the first electronic device then determines the first device posture parameter of the second electronic device based on the device posture parameter of the first component and the device posture parameter of the second component.
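  • For illustration, one simple way to fuse the two component poses into a single device pose is to take the midpoint rotation between them (slerp at t = 0.5); the patent does not prescribe a fusion rule, so this choice, like the names below, is an assumption of the sketch.

        from scipy.spatial.transform import Rotation as R, Slerp

        def fuse_component_poses(left: R, right: R) -> R:
            # Interpolate halfway between the two world-frame rotations.
            key_rotations = R.concatenate([left, right])
            return Slerp([0.0, 1.0], key_rotations)(0.5)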
  • in some embodiments, the first electronic device obtaining the device posture parameter of the first component and the device posture parameter of the second component includes: the first electronic device obtains a second image and a third image, where the second image is a head image of the user wearing the first component, and the third image is a head image of the user wearing the second component.
  • the first electronic device determines the device posture parameter of the first component based on the second image.
  • the first electronic device determines the device posture parameter of the second component based on the third image. When the first electronic device is a head-worn device, the second image and the third image may be captured by an image capture device such as a mobile phone and then sent to the head-worn device.
  • the first electronic device may capture the first image and the second image when the user wears the second electronic device (i.e., the head-worn device).
  • the method provided by the embodiment of the present application further includes: the first electronic device displays, on the display screen of the first electronic device, at least one of a first control and a second control, where the first control is used to prompt the collection of the second image, and the second control is used to prompt the collection of the third image.
  • in some embodiments, the first component and the second component each have a third sensor, and the first electronic device acquiring the device posture parameter of the first component and the device posture parameter of the second component includes: the first electronic device obtains, from the second electronic device, the device posture parameter of the first component collected by the third sensor of the first component.
  • the first electronic device obtains the device posture parameter of the second component collected by the third sensor of the second component from the second electronic device.
  • the method provided by the embodiment of the present application further includes: the first electronic device sends first prompt information, where the first prompt information is used to determine whether the user's head is in a standard position.
  • the first electronic device has a display screen, and the first prompt information is displayed on the display screen.
  • the method provided by the embodiment of the present application further includes: displaying, on the display screen, the distance between the user's current head position and the standard position.
  • in another aspect, an electronic device is provided, including a processor, where the processor is coupled to a memory, and the processor is configured to execute a computer program or instructions stored in the memory, so that the electronic device implements the above method for determining head posture.
  • in another aspect, a computer-readable storage medium is provided, which stores a computer program; when the computer program is run on an electronic device, it causes the electronic device to perform the above method for determining head posture.
  • Figure 1 shows a system for determining head posture provided by an embodiment of the present application.
  • Figure 2 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • Figure 3 is a software structure block diagram of the electronic device provided by the embodiment of the present application.
  • Figure 4 is a schematic diagram of a sports health software provided by an embodiment of the present application.
  • Figure 5 is a flow chart of a method for determining head posture provided by an embodiment of the present application.
  • Figure 6 is a reference schematic diagram of the coordinate system provided by the embodiment of the present application.
  • Figure 7 is a schematic diagram of the device attitude angle of the head-worn device provided by an embodiment of the present application.
  • Figure 8 is a schematic diagram of the mobile phone shooting interface and selection interface provided by the embodiment of the present application.
  • Figure 9 is a schematic diagram of a mobile phone connection display interface provided by an embodiment of the present application.
  • Figure 10 is a schematic diagram of the posture parameters of the second device provided by the embodiment of the present application.
  • Figure 11 is a schematic diagram of a Bluetooth device pairing interface provided by an embodiment of the present application.
  • Figure 12 is a schematic diagram of the visual guidance display interface provided by the embodiment of the present application.
  • Figure 13 is a schematic diagram of a display interface prompting head adjustment provided by an embodiment of the present application.
  • Figure 14 is a schematic diagram of the selection prompt interface for the attitude angles of the devices on both sides provided by the embodiment of the present application.
  • Figure 15 is a schematic diagram of a display interface for connecting left and right Bluetooth headsets according to an embodiment of the present application.
  • words such as “first” and “second” are used to distinguish the same or similar items with basically the same functions and effects.
  • the first component and the second component are only used to distinguish different components, and their sequence is not limited.
  • words such as “first” and “second” do not limit the number or the execution order.
  • “At least one” refers to one or more, and “plurality” refers to two or more.
  • “And/or” describes the association of associated objects, indicating that there can be three relationships, for example, A and/or B, which can mean: A exists alone, A and B exist simultaneously, and B exists alone, where A, B can be singular or plural.
  • the character “/” generally indicates that the related objects are in an “or” relationship.
  • “At least one of the following” or similar expressions thereof refers to any combination of these items, including any combination of a single item or a plurality of items.
  • “At least one of a, b, or c” can mean: a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c can be singular or plural.
  • the embodiment of the present application provides a method for determining head posture, which can be applied to any electronic device, such as mobile phones, tablets, wearable devices (for example, watches, bracelets, smart helmets, etc.), vehicle-mounted devices, smart home devices, augmented reality (AR)/virtual reality (VR) devices, laptops, ultra-mobile personal computers (UMPC), netbooks, personal digital assistants (PDA), etc.
  • the first electronic device can obtain the first head posture parameter of the user.
  • in the process of obtaining the first head posture parameter, the first electronic device obtains the first device posture parameter of the second electronic device, and then, according to the first device posture parameter of the second electronic device and the first head posture parameter of the user, obtains the target head posture parameter when the user wears the second electronic device, so as to correct the head posture parameter when the user wears the second electronic device.
  • the following description takes the case where the first electronic device is a mobile phone and the second electronic device is a head-worn device (such as a Bluetooth headset or smart glasses) as an example. This method enhances the intelligence of electronic devices to a certain extent, helps correct users' bad usage habits, and improves user experience.
  • Figure 1 is a system for determining head posture provided by an embodiment of the present application.
  • the system includes: a first electronic device 100 and a second electronic device 200.
  • the first electronic device 100 and the second electronic device 200 can establish and maintain a wireless connection through wireless communication technologies.
  • the first electronic device 100 may be a mobile phone, a tablet computer, a notebook computer, a wireless terminal device, etc. having a display screen or an image capture device (such as a camera).
  • the second electronic device 200 may be one or more of a head-worn device, such as smart glasses, and a headset (such as a Bluetooth headset).
  • the above-mentioned wireless communication technology may be Bluetooth (BT), such as traditional Bluetooth or Bluetooth low energy (BLE), or general 2.4 GHz/5 GHz band wireless communication technology, etc.
  • in some embodiments, the system may also include a third electronic device with an image acquisition function, such as an image acquisition device, used to acquire the user's head image to assist the first electronic device 100 in determining the user's first head posture parameter.
  • the image acquisition device is used to collect images when the user wears the head-mounted device to assist the first electronic device 100 in determining the device posture parameters of the head-mounted device.
  • the second electronic device 200 is a Bluetooth headset.
  • the Bluetooth headset may be of various types, such as an earbud type, an in-ear type, etc.
  • the Bluetooth headset may include a first part and a second part respectively worn on the left and right ears of the user.
  • the first part and the second part can be connected through a connecting cable, such as a neckband Bluetooth headset; or they can be two independent parts, such as a true wireless stereo (TWS) headset.
  • the Bluetooth headset is a headset that supports Bluetooth communication protocol.
  • the Bluetooth communication protocol can be a traditional Bluetooth protocol or a Bluetooth low energy (BLE) protocol; of course, it can also be another new type of Bluetooth protocol launched in the future.
  • FIG. 2 shows a schematic structural diagram of an electronic device 300.
  • the electronic device 300 may include a processor 310, an external memory interface 320, an internal memory 321, a universal serial bus (USB) interface 330, a charging management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a receiver 370A, a microphone 370B, a headphone interface 370C, a sensor module 380, a button 390, a motor 391, an indicator 392, 1 to N cameras 393, 1 to N display screens 394, a subscriber identification module (SIM) card interface 395, etc.
  • the sensor module 380 may include a pressure sensor 380A, a fingerprint sensor 380B, a touch sensor 380C, a magnetic sensor 380D, a distance sensor 380E, a proximity light sensor 380F, an ambient light sensor 380G, an infrared sensor 380H, an ultrasonic sensor 380I, an electric field sensor 380J, an inertial sensor 380K, etc.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 300 .
  • the electronic device 300 may include more or fewer components than shown in the figures, or some components may be combined, some components may be separated, or some components may be arranged differently.
  • the components illustrated may be implemented in hardware, software, or a combination of software and hardware.
  • the first electronic device 100 and the second electronic device 200 both belong to one type of electronic devices 300 .
  • the processor 310 may include one or more processing units.
  • the processor 310 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • different processing units can be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 300 .
  • the controller can generate operation control signals based on the instruction operation code and timing signals to complete the control of fetching and executing instructions.
  • the processor 310 may also be provided with a memory for storing instructions and data.
  • in some embodiments, the memory in the processor 310 is a cache memory. This memory may hold instructions or data that the processor 310 has just used or uses cyclically. If the processor 310 needs to use the instructions or data again, it can call them directly from this memory, which avoids repeated access and reduces the waiting time of the processor 310, thus improving system efficiency.
  • processor 310 may include one or more interfaces.
  • interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • processor 310 may include multiple sets of I2C buses.
  • the processor 310 can be coupled to the inertial sensor 380K, charger, flashlight, camera 393, etc. through different I2C bus interfaces.
  • the processor 310 can be coupled to the touch sensor 380C through an I2C interface, so that the processor 310 and the touch sensor 380C communicate through the I2C bus interface to implement the touch function of the electronic device 300.
  • the I2S interface can be used for audio communication.
  • processor 310 may include multiple sets of I2S buses.
  • the processor 310 can be coupled with the audio module 370 through the I2S bus to implement communication between the processor 310 and the audio module 370.
  • the audio module 370 can transmit audio signals to the wireless communication module 360 through the I2S interface to implement the function of answering calls through a Bluetooth headset.
  • the audio module 370 can also transmit audio signals to the wireless communication module 360 through the PCM interface to implement the function of answering calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus can be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • a UART interface is typically used to connect the processor 310 and the wireless communication module 360 .
  • the processor 310 communicates with the Bluetooth module in the wireless communication module 360 through the UART interface to implement the Bluetooth function.
  • the audio module 370 can transmit audio signals to the wireless communication module 360 through the UART interface to implement the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 310 with peripheral devices such as the display screen 394 and the camera 393 .
  • MIPI interfaces include camera serial interface (CSI), display serial interface (DSI), etc.
  • the processor 310 and the camera 393 communicate through the CSI interface to implement the shooting function of the electronic device 300.
  • the processor 310 and the display screen 394 communicate through the DSI interface to implement the display function of the electronic device 300.
  • the GPIO interface can be configured through software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface can be used to connect the processor 310 with the camera 393, the display screen 394, the wireless communication module 360, the audio module 370, the sensor module 380, etc.
  • the GPIO interface can also be configured as an I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 330 is an interface that complies with the USB standard specifications. Specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 330 can be used to connect a charger to charge the electronic device 300, and can also be used to transmit data between the electronic device 300 and peripheral devices. It can also be used to connect headphones to play audio through them. This interface can also be used to connect other electronic devices, such as AR devices, etc.
  • the interface connection relationship between the modules illustrated in the embodiment of the present application is only a schematic illustration and does not constitute a structural limitation of the electronic device 300 .
  • the electronic device 300 may also adopt an interface connection method different from those in the above embodiments, or a combination of multiple interface connection methods.
  • the charge management module 340 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the charging management module 340 may receive charging input from the wired charger through the USB interface 330 .
  • the charging management module 340 may receive wireless charging input through the wireless charging coil of the electronic device 300 . While the charging management module 340 charges the battery 342, it can also provide power to the electronic device through the power management module 341.
  • the power management module 341 is used to connect the battery 342, the charging management module 340 and the processor 310.
  • the power management module 341 receives input from the battery 342 and/or the charging management module 340, and supplies power to the processor 310, internal memory 321, external memory, display screen 394, camera 393, wireless communication module 360, etc.
  • the power management module 341 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 341 may also be provided in the processor 310 . In other embodiments, the power management module 341 and the charging management module 340 can also be provided in the same device.
  • the wireless communication function of the electronic device 300 can be implemented through the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 300 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization. For example: Antenna 1 can be reused as a diversity antenna for a wireless LAN. In other embodiments, antennas may be used in conjunction with tuning switches.
  • the mobile communication module 350 can provide wireless communication solutions including 2G/3G/4G/5G applied to the electronic device 300 .
  • the mobile communication module 350 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), etc.
  • the mobile communication module 350 can receive electromagnetic waves through the antenna 1, perform filtering, amplification and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 350 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves through the antenna 1 for radiation.
  • At least part of the functional modules of the mobile communication module 350 may be disposed in the processor 310 . In some embodiments, at least part of the functional modules of the mobile communication module 350 and at least part of the modules of the processor 310 may be provided in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • after being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor, which outputs a sound signal through an audio device (not limited to the receiver 370A, etc.), or displays an image or video through the display screen 394.
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent of the processor 310 and may be provided in the same device as the mobile communication module 350 or other functional modules.
  • the wireless communication module 360 can provide wireless communication solutions applied to the electronic device 300, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, etc.
  • the wireless communication module 360 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 360 receives electromagnetic waves via the antenna 2 , frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 310 .
  • the wireless communication module 360 can also receive the signal to be sent from the processor 310, frequency modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • a communication connection can be established between the source electronic device and the target electronic device through each other's wireless communication modules 360 .
  • the antenna 1 of the electronic device 300 is coupled to the mobile communication module 350, and the antenna 2 is coupled to the wireless communication module 360, so that the electronic device 300 can communicate with the network and other devices through wireless communication technology.
  • wireless communication technologies can include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • GNSS can include the global positioning system (GPS), the global navigation satellite system (GLONASS), the Beidou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
  • the electronic device 300 implements display functions through a GPU, a display screen 394, an application processor, and the like.
  • the GPU is an image processing microprocessor and is connected to the display screen 394 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 310 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 394 is used to display images, videos, etc.
  • Display 394 includes a display panel.
  • the display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 300 may include 1 or N display screens 394, where N is a positive integer greater than 1.
  • the electronic device 300 can implement the shooting function through an ISP, a camera 393, a video codec, a GPU, a display screen 394, and an application processor.
  • the ISP is used to process the data fed back by the camera 393. For example, when taking a photo, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element passes the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye. The ISP can also perform algorithm optimization on image noise, brightness, and skin color, and can optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 393.
  • Camera 393 is used to capture still images or video.
  • the object passes through the lens to produce an optical image that is projected onto the photosensitive element.
  • the photosensitive element can be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • ISP outputs digital image signals to DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other format image signals.
  • the electronic device 300 may include 1 or N cameras 393, where N is a positive integer greater than 1.
  • in the embodiment of the present application, when the user wears the head-worn device, the user can use the camera 393 of the mobile phone to capture one or more head images of the user wearing the head-worn device.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 300 selects a frequency point, the digital signal processor is used to perform Fourier transform on the frequency point energy.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 300 may support one or more video codecs. In this way, the electronic device 300 can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
  • NPU is a neural network (NN) computing processor.
  • Intelligent cognitive applications of the electronic device 300 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, etc.
  • for example, the NPU or another processor can be used to perform operations such as face detection, face tracking, face feature extraction, and image clustering on the face images in videos stored in the electronic device 300; it can also perform operations such as face detection and face feature extraction on the face images in pictures stored in the electronic device 300, and cluster the pictures stored in the electronic device 300 according to the facial features of the pictures and the clustering results of the face images in the videos.
  • the external memory interface 320 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 300.
  • the external memory card communicates with the processor 310 through the external memory interface 320 to implement the data storage function, for example, saving files such as music and videos in the external memory card.
  • Internal memory 321 may be used to store computer executable program code, which includes instructions.
  • the processor 310 executes instructions stored in the internal memory 321 to execute various functional applications and data processing of the electronic device 300 .
  • the internal memory 321 may include a program storage area and a data storage area.
  • the stored program area can store an operating system, at least one application program required for a function (such as a sound playback function, an image playback function, etc.).
  • the storage data area may store data created during use of the electronic device 300 (such as audio data, phone book, etc.).
  • for example, the internal memory 321 can store a 3D posture algorithm, so that when the electronic device 300 obtains a head image of the user wearing the head-worn device, the processor 310 of the electronic device 300 can process the head image with the help of the 3D posture algorithm to obtain the user's head posture, such as a posture angle.
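  • The embodiment does not detail the 3D posture algorithm itself, so the sketch below shows one common approach purely as an assumption: solving a perspective-n-point (PnP) problem between a generic 3D face model and 2D facial landmarks detected in the head image. The model points, the focal-length guess, and the landmark detector are all illustrative.

        import cv2
        import numpy as np

        # Six generic 3D face-model points (in mm): nose tip, chin, left eye left
        # corner, right eye right corner, left mouth corner, right mouth corner.
        MODEL_POINTS = np.array([
            [0.0, 0.0, 0.0],
            [0.0, -330.0, -65.0],
            [-225.0, 170.0, -135.0],
            [225.0, 170.0, -135.0],
            [-150.0, -150.0, -125.0],
            [150.0, -150.0, -125.0],
        ], dtype=np.float64)

        def head_pose_from_image(landmarks_2d, width, height):
            # landmarks_2d: the six detected 2D landmarks matching MODEL_POINTS.
            focal = width  # crude guess; use calibrated intrinsics in practice
            camera_matrix = np.array([[focal, 0, width / 2],
                                      [0, focal, height / 2],
                                      [0, 0, 1]], dtype=np.float64)
            ok, rvec, tvec = cv2.solvePnP(
                MODEL_POINTS, np.asarray(landmarks_2d, dtype=np.float64),
                camera_matrix, None, flags=cv2.SOLVEPNP_ITERATIVE)
            rotation, _ = cv2.Rodrigues(rvec)  # head rotation relative to the camera
            return rotation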
  • the internal memory 321 may include a high-speed random access memory, and may also include a non-volatile storage device, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
  • the electronic device 300 can implement audio functions through the audio module 370, the receiver 370A, the microphone 370B, the headphone interface 370C, and the application processor. Such as music playback, recording, etc.
  • the audio module 370 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signals. Audio module 370 may also be used to encode and decode audio signals. In some embodiments, the audio module 370 may be provided in the processor 310 , or some functional modules of the audio module 370 may be provided in the processor 310 .
  • the receiver 370A, also called the "earpiece", is used to convert audio electrical signals into sound signals.
  • the voice can be heard by bringing the receiver 370A close to the human ear.
  • the microphone 370B, also called the "mic" or "sound transmitter", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 370B to input the sound signal into the microphone 370B.
  • the electronic device 300 may be provided with at least one microphone 370B. In other embodiments, the electronic device 300 may be provided with two microphones 370B, which in addition to collecting sound signals, may also implement a noise reduction function. In other embodiments, the electronic device 300 can also be provided with three, four or more microphones 370B to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions, etc.
  • the headphone interface 370C is used to connect wired headphones.
  • the headphone interface 370C can be a USB interface 330, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 380A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • pressure sensor 380A may be disposed on display screen 394.
  • pressure sensors 380A such as resistive pressure sensors, inductive pressure sensors, capacitive pressure sensors, etc.
  • a capacitive pressure sensor may include at least two parallel plates of conductive material.
  • the electronic device detects the strength of the touch operation according to the pressure sensor 380A.
  • the electronic device can also calculate the touch position based on the detection signal of the pressure sensor 380A.
  • touch operations acting on the same touch position but with different touch operation intensities may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity less than the first pressure threshold is applied to an image or file, it means that the image or file is selected, and the electronic device 300 executes the instruction that the image or file is selected. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold is applied to the application window, and the touch operation moves on the display screen, an instruction to drag up the application window is executed. For example: when a touch operation with a touch operation intensity less than the first pressure threshold is applied to the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold is applied to the short message application icon, an instruction to create a new short message is executed.
  • the fingerprint sensor 380B is used to collect fingerprints. Electronic devices can use the collected fingerprint characteristics to unlock fingerprints, access application locks, take photos with fingerprints, answer incoming calls with fingerprints, etc.
  • the touch sensor 380C is also known as a "touch device".
  • the touch sensor 380C can be disposed on the display screen 394.
  • the touch sensor 380C and the display screen 394 form a touch screen, which is also called a "touch screen”.
  • Touch sensor 380C is used to detect touch operations on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the touch event type.
  • Visual output related to the touch operation may be provided through display screen 394.
  • the touch sensor 380C may also be disposed on the surface of the electronic device at a different location from the display screen 394 .
  • Magnetic sensor 380D includes a Hall sensor.
  • Distance sensor 380E used to measure distance.
  • Electronic device 300 can measure distance via infrared or laser.
  • the electronic device 300 can use the distance sensor 380E to measure distance to achieve fast focusing.
  • the electronic device 300 can use the distance sensor 380E to measure distance to determine the distance between the user's head or a head-mounted device worn by the user and the neutral position displayed on the interface of the electronic device 300 .
  • the proximity light sensor 380F may include, for example, a light-emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 300 emits infrared light through the light emitting diode.
  • Electronic devices use photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the electronic device. When insufficient reflected light is detected, the electronic device can determine that there is no object near the electronic device.
  • Electronic devices can use the proximity light sensor 380F to detect when the user holds the terminal device close to the ear to talk, so that the screen can be automatically turned off to save power.
  • the proximity light sensor 380F can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 380G is used to sense ambient light brightness.
  • the electronic device 300 can adaptively adjust the brightness of the display screen 394 according to the perceived ambient light brightness.
  • the ambient light sensor 380G can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 380G can also cooperate with the proximity light sensor 380F to detect whether the electronic device 300 is in the pocket to prevent accidental touching.
  • the infrared sensor 380H, ultrasonic sensor 380I, electric field sensor 380J, etc. are used to assist the electronic device 300 in recognizing air gestures.
  • Inertial sensors 380K may include gyroscopes and accelerometers.
  • a gyroscope sensor is used to determine the movement posture and position posture of electronic equipment.
  • the buttons 390 include a power button, a volume button, etc.
  • Key 390 may be a mechanical key. It can also be a touch button.
  • the electronic device 300 may receive key input and generate key signal input related to user settings and function control of the electronic device 300 .
  • Motor 391 can produce vibration prompts.
  • Motor 391 can be used for vibration prompts for incoming calls and can also be used for touch vibration feedback.
  • touch operations for different applications can correspond to different vibration feedback effects.
  • for touch operations acting on different areas of the display screen 394, the motor 391 can also produce different vibration feedback effects.
  • Different application scenarios (such as time reminders, receiving information, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also be customized.
  • the indicator 392 may be an indicator light, which may be used to indicate charging status, power changes, or may be used to indicate messages, missed calls, notifications, etc.
  • the SIM card interface 395 is used to connect the SIM card.
  • the SIM card can be connected to or separated from the electronic device 300 by inserting it into the SIM card interface 395 or pulling it out from the SIM card interface 395 .
  • the electronic device 300 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 395 can support Nano SIM card, Micro SIM card, SIM card, etc.
  • multiple cards can be inserted into the same SIM card interface 395 at the same time. The cards can be of the same type or different types.
  • the SIM card interface 395 is also compatible with different types of SIM cards.
  • the SIM card interface 395 is also compatible with external memory cards.
  • the electronic device 300 interacts with the network through the SIM card to implement functions such as calls and data communications.
  • the electronic device 300 uses an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 300 and cannot be separated from the electronic device 300 .
  • the structures of the first electronic device 100 and the second electronic device 200 shown in FIG. 1 may refer to the structure of the electronic device 300 shown in FIG. 2.
  • the first electronic device 100 and the second electronic device 200 may include all of the hardware structures of the electronic device 300, or include only part of the above hardware structures, or have other hardware structures not listed above, which is not limited in the embodiments of the present application.
  • FIG. 3 shows a software structure block diagram of the electronic device 300 provided by the embodiment of the present application.
  • the software structure of the electronic device 300 may be a layered architecture.
  • the software may be divided into several layers, and each layer has a clear role and division of labor.
  • the layers communicate through software interfaces.
  • the Android system is divided into four layers, from top to bottom: the application layer, the application framework layer (FWK), the Android runtime and system libraries, and the kernel layer.
  • the application layer can include a series of application packages. As shown in Figure 3, the application layer can include cameras, settings, skin modules, user interface (UI), third-party applications, etc. Among them, third-party applications can include WeChat, QQ, gallery, calendar, calls, maps, navigation, WLAN, Bluetooth, music, video, short messages, etc.
  • the application framework layer provides an application programming interface (API) and programming framework for applications in the application layer.
  • the application framework layer can include some predefined functions. As shown in Figure 3, the application framework layer can include window manager, content provider, view system, phone manager, resource manager, notification manager, etc.
  • a window manager is used to manage window programs.
  • the window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • content providers are used to store and retrieve data and make the data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, etc.
  • a view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide communication functions of the electronic device 300 .
  • for example, the phone manager manages call status (including connected, hung up, etc.).
  • the resource manager provides various resources to applications, such as localized strings, icons, pictures, layout files, video files, etc.
  • the notification manager allows applications to display notification information in the status bar, which can be used to convey notification-type messages and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, etc.
  • notifications can also appear in the status bar at the top of the system in the form of charts or scroll-bar text, such as notifications for applications running in the background, or appear on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a beep sounds, the electronic device vibrates, or the indicator light flashes.
  • Android runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library contains two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and application framework layer run in virtual machines.
  • the virtual machine converts the Java files of the application layer and the application framework layer into binary files and executes them.
  • the virtual machine is used to perform object life cycle management, stack management, thread management, security and exception management, and garbage collection and other functions.
  • system libraries can include multiple functional modules, for example: a surface manager, media libraries, 3D graphics processing libraries (for example, OpenGL ES), 2D graphics engines (for example, SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as static image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, composition, and layer processing.
  • 2D Graphics Engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • the hardware layer can include various types of sensors.
  • the hardware layer of the mobile phone involved in the embodiment of this application includes an inertial measurement unit (IMU), a touch sensor, a camera driver, a display driver, etc.
  • in the embodiment of the present application, the hardware layer of the head-worn device includes an IMU and the like.
  • in some embodiments, the hardware layer of the head-worn device may also involve a display driver.
  • the sensor data collected by the IMU can be sent to the system library through the kernel layer.
  • the system library determines the current device posture of the mobile phone based on the sensor data.
  • the system library layer can determine the attitude angle of the mobile phone in the geodetic coordinate system.
  • the image sensor (such as the front camera) collects image data, and the image data can be sent to the system library through the kernel layer.
  • the system library determines the attitude angle of the user's face relative to the mobile phone based on the image data.
  • the mobile phone determines the attitude angle of the user's head in the geodetic coordinate system based on the attitude angle of the user's face relative to the mobile phone and the device attitude angle of the mobile phone.
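  • A minimal sketch of this composition, assuming both attitudes are available as rotations: the phone's attitude in the geodetic frame (from its inertial sensor) composed with the face's attitude relative to the phone (from the front-camera image) yields the head's attitude in the geodetic frame. The 'zyx' (yaw, pitch, roll) convention is an assumption.

        from scipy.spatial.transform import Rotation as R

        def head_attitude_in_geodetic(face_rel_phone: R, phone_in_geodetic: R):
            # geodetic <- phone composed with phone <- face gives geodetic <- head.
            head = phone_in_geodetic * face_rel_phone
            return head.as_euler("zyx", degrees=True)  # yaw, pitch, roll in degrees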
  • the following takes the first electronic device 100 as a mobile phone and the second electronic device 200 as a Bluetooth headset as an example.
  • the method for determining the head posture provided by the embodiment of the present application is first described in detail with reference to the drawings and application scenarios.
  • FIG. 4 shows a schematic diagram of sports and health software displayed on a mobile phone.
  • the interface shown in (b) in Figure 4 can be displayed to detect and correct The user's head posture.
  • the interface shown in (b) in Figure 4 displays the user's head measured at different times. posture. For example, the user's head posture from 9:01 to 9:02 is tilted to the left, the head posture from 9:30 to 9:35 is bowed, and the head posture from 11:00 to 11:01 is tilted to the right.
  • A head posture detection control 401 is also displayed in the interface shown in (b) of Figure 4; head posture detection is used to detect whether the current user's head posture is in the standard position.
  • When the head posture detection control 401 is triggered, the mobile phone can enter the shooting interface shown in (c) of Figure 4 to prompt the user to collect a head image and obtain the user's head image.
  • The mobile phone can also display a prompt message "Please keep the user's head still" or issue a voice prompt message.
  • (d) in Figure 4 is the side head image of the user collected by the mobile phone.
  • The interface shown in (d) in Figure 4 also displays a head posture correction control 402. The user can choose to trigger the head posture correction control 402 to correct the head posture, or return to the display interface of the sports and health software through the return control.
  • When the mobile phone detects that the head posture correction control 402 is triggered, the mobile phone sends a request to obtain device posture parameters to the Bluetooth headset connected to the mobile phone, which can trigger the Bluetooth headset to detect its own device posture parameters with its built-in inertial sensor. After the Bluetooth headset receives the request, it detects its device posture parameters and reports them to the mobile phone. At the same time, the mobile phone obtains the head posture parameters of the user wearing the Bluetooth headset: when the mobile phone obtains the user's head image, it processes the head image to obtain the user's head posture parameters. After that, the mobile phone can use the head posture parameters together with the device posture parameters reported by the Bluetooth headset during the same period to obtain the corrected head posture parameters of the user. Optionally, when the corrected head posture parameters are obtained, the mobile phone can also display them.
  • The interface shown in (b) of Figure 4 can also display the number of times the user lowered his head detected in the recent period and the duration of each head-lowering; for example, the interface can display the longest duration of a single head-lowering, or the head-lowering duration before the current moment.
  • As shown in FIG. 5, the method includes:
  • Step 501: The first electronic device obtains the user's first head posture parameter.
  • the first head posture parameter may be the posture angle of the head, or other parameters that may be used to reflect the head posture.
  • the posture angle of the head is used to reflect the angle at which the user's head deviates from the reference coordinate system.
  • the angle at which the user's head deviates from the reference coordinate system can be regarded as the user's head posture.
  • the reference coordinate system may be a world coordinate system, or a coordinate system based on the image acquisition device (such as a camera) of the first electronic device.
  • The head posture can indicate that the user's head is tilted to the left or right, that the user raises or lowers the head, that the user's head turns left or right, and so on.
  • the head posture can also reflect the angle of the user's head tilting left and right or the angle of raising and lowering the head.
  • the world coordinate system is the absolute coordinate system of the system, and the posture of the user's head is the position and posture angle relative to the coordinate axis of the absolute coordinate system.
  • the coordinate system based on the image acquisition device of the first electronic device is also called the camera coordinate system.
  • the position and attitude angle of the user's head in the captured image can be obtained through the camera of the first electronic device.
  • The first electronic device acquires, through an image, the first head posture when the user wears the head-worn device.
  • For example, an image of the user wearing the head-worn device is obtained through a first electronic device with an image acquisition device (such as a mobile phone), and then the head posture, or parameters reflecting the head posture, are obtained based on the image.
  • In (a) of Figure 6, the reference coordinate system is shown as the Y axis 602.
  • The mobile phone can track the head and neck based on face recognition technology, calibrate the actual central axis of the side of the head, and use the angle between the central axis of the side of the head (such as line 603) and the Y axis 602 to determine the attitude angle 604 of the head.
  • In (b) of Figure 6, the reference coordinate system, that is, the X axis 605 and the Y axis 606, is displayed.
  • (b) in Figure 6 shows the angle between the central axis of the user's head (such as line 607) and the vertical coordinate axis (Y axis 606), which is the posture angle 608 of the head. Taking the direction indicated by the arrow in the figure as the right side, it can be seen that the user's head is tilted to the right.
  • In one case, the acquired image is a side image of the user's head: the user uses a mobile phone to photograph the side of the head. It is worth mentioning that side shooting with a mobile phone can be assisted by another user, or completed by fixing the mobile phone in place. Based on the side image, the attitude angle 604 of the head is determined.
  • In another case, the acquired image is a frontal image of the user's head: the user uses the mobile phone to capture the front of the head. Since it is the front of the head, the image collection can be completed through the front camera of the mobile phone. (b) in Figure 6 shows the posture angle 608 of the head determined based on this image.
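  • A minimal sketch of this angle computation, assuming the central axis is given by two image landmarks (the landmark choice, e.g. a mid-forehead point and a chin point, is an assumption, not the patent's specification):

```python
import math

def head_tilt_deg(axis_top, axis_bottom):
    """Signed angle between the head's central axis (two image points)
    and the vertical image axis; image y grows downward."""
    dx = axis_top[0] - axis_bottom[0]
    dy = axis_top[1] - axis_bottom[1]
    return math.degrees(math.atan2(dx, -dy))

# A head whose axis leans toward the image's right side:
print(head_tilt_deg((112.0, 80.0), (100.0, 200.0)))  # ~5.7 degrees
```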
  • Step 502. In the process of obtaining the first head posture parameter, obtain the first device posture parameter of the target electronic device, and the target electronic device is the second electronic device or the first electronic device.
  • In the embodiment of the present application, the first electronic device is a mobile phone, and the second electronic device is a head-worn device (such as a Bluetooth headset or smart glasses).
  • Since the mobile phone performs the first head posture parameter correction, the target electronic device may be the first electronic device, that is, the mobile phone.
  • Alternatively, the first device posture parameter of the head-worn device is obtained, and the corrected head posture parameter of the user is obtained by combining the first device posture parameter of the head-worn device with the first head posture parameter obtained by the mobile phone; therefore, the target electronic device may be the second electronic device, that is, the head-worn device.
  • the first device attitude parameter may be an attitude angle of the device, or other parameters that may be used to reflect the device attitude.
  • the device attitude parameter of a certain electronic device may be the angle at which the device deviates from the standard attitude.
  • For example, the first electronic device may store standard postures corresponding to different head-worn devices, or the head-worn device may store its own standard posture, so that when the first device posture of the head-worn device is measured, the device posture of the head-worn device can be obtained according to the standard posture corresponding to the head-worn device.
  • the device posture parameter of a certain electronic device can also be an angle deviating from a specified coordinate system (such as the world coordinate system).
  • The head-worn device has a standard attitude angle relative to the user's head; for example, the Bluetooth headset has a standard posture relative to the user's head.
  • The mobile phone reads the standard attitude image of the Bluetooth headset and stores it; each subsequent time the headset connects to the mobile phone, the mobile phone directly calls the stored standard attitude image of the Bluetooth headset.
  • the dotted line part in (a) of Figure 7 is the standard posture image 701 of the headset.
  • the actual posture 702 of the headset will deviate from the standard posture image 701.
  • the angle at which the actually worn Bluetooth headset deviates from the standard posture 701, that is, the posture angle 703, can be regarded as the device posture angle of the Bluetooth headset.
  • the mobile phone obtains the device posture angle of the Bluetooth headset based on the side image of the user's head taken.
  • the device attitude angle of the smart glasses is obtained from an image of the user's head wearing the smart glasses taken from the side.
  • the temples of the smart glasses have a standard posture 704 (the dotted line part in the figure) relative to the user's head.
  • the actual posture 705 of the temples will deviate from the standard posture 704.
  • The angle 706 at which the actual posture 705 deviates from the standard posture 704 can be used as the device posture of the smart glasses; therefore, the mobile phone obtains the device attitude angle of the smart glasses through the captured side image of the user's head.
  • For smart glasses, two views can be chosen to obtain the device posture: the device posture of the smart glasses can also be obtained through the user's frontal image, that is, through the frame of the smart glasses. As shown in (c) in Figure 7, an image of the user's head wearing smart glasses is taken from the front.
  • The frame of the smart glasses has a standard posture 707 (the dotted line part in the figure) relative to the user's head, and the actual posture 708 of the frame (the solid line part in the figure) will deviate from the standard posture 707.
  • The angle 709 at which the actual posture 708 deviates from the standard posture 707 can be used as the device posture of the smart glasses; therefore, the mobile phone obtains the device attitude angle of the smart glasses through the captured frontal image of the user's head.
  • The standard posture corresponding to the head-worn device may be obtained by the first electronic device from the head-worn device, or may be obtained by the first electronic device from a server; the embodiment of the present application does not limit this.
  • That is, while a head image is captured through the first electronic device with an image acquisition device and the first head posture parameter is obtained, an image of the head-worn device is also obtained, and the first device posture parameter of the head-worn device is calculated from it.
  • The first head posture parameter and the first device posture parameter of the head-worn device are parameters within the same time period, that is, the time attributes corresponding to the first head posture parameter and the first device posture parameter are the same. This ensures that data collected in the same time period is used when correcting the user's first head posture parameter.
  • For example, the first device posture parameter of the head-worn device and the first head posture parameter can be data collected at the same time: the first head posture parameter is the user's head posture parameter collected at 10:10:52, and the first device posture parameter was also collected at 10:10:52.
  • Alternatively, the collection times of the first device posture parameter of the head-worn device and the first head posture parameter can be within a preset error range: for example, the first head posture parameter is collected at 10:10:52, and the first device posture parameter is collected at 10:10:53.
  • It can be understood that when the first electronic device obtains the user's first head posture parameter and the first device posture parameter, it can also obtain the time information corresponding to the first head posture parameter and the time information corresponding to the first device posture parameter.
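  • The time alignment described above can be sketched as follows (a hypothetical pairing helper, not part of the patent; the 1-second tolerance mirrors the 10:10:52/10:10:53 example):

```python
def pair_samples(head_samples, device_samples, max_skew_s=1.0):
    """Pair each head-posture sample with the closest-in-time device-posture
    sample, keeping only pairs within the preset error range.
    Samples are (timestamp_in_seconds, value) tuples."""
    pairs = []
    for t_head, head in head_samples:
        t_dev, dev = min(device_samples, key=lambda s: abs(s[0] - t_head))
        if abs(t_dev - t_head) <= max_skew_s:
            pairs.append((head, dev))
    return pairs
```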
  • Step 503: The first electronic device obtains a target head posture parameter based on the first head posture parameter and the first device posture parameter; the target head posture parameter is the corrected head posture parameter of the user.
  • the above step 503 can be implemented in the following manner: the first electronic device obtains the posture parameter difference according to the first head posture parameter and the first device posture parameter.
  • the first electronic device updates the first head posture parameter according to the posture parameter difference to obtain the target head posture parameter when the user wears the head wearable device.
  • The first electronic device updates the first head posture parameter according to the posture parameter difference to obtain the target head posture parameter; specifically, the first electronic device adds the posture parameter difference to the first head posture parameter to obtain the target head posture parameter.
  • After obtaining the target head posture parameter, the first electronic device can determine the user's actual head posture according to the target head posture parameter, for example, tilted 20° to the left, tilted 10° to the right, or head lowered.
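  • One plausible reading of step 503, sketched in Python (the sign convention and the use of the deviation from a standard wearing posture as the difference are assumptions; the patent does not give an explicit formula):

```python
def correct_head_pose(first_head_deg, first_device_deg, standard_device_deg=0.0):
    """Take the device's deviation from its standard wearing posture as the
    posture parameter difference and add it to the measured head posture."""
    difference = first_device_deg - standard_device_deg
    return first_head_deg + difference

# A head measured at 8 degrees while the headset sits 12 degrees off its
# standard posture would be corrected to 20 degrees under this convention.
print(correct_head_pose(8.0, 12.0))
```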
  • The solution of this application corrects the user's first head posture parameter by obtaining the first device posture parameter of the head-worn device and the user's first head posture parameter when the user wears the head-worn device, so that the head posture parameter of a user wearing the head-worn device can be corrected in real time, yielding a target head posture parameter that is closer to the user's true head posture.
  • In this way, the head posture parameter will not suffer large errors caused by differences in users' heads or differences in their habits of wearing the head-worn device, and subsequent applications that operate based on the head posture are also more accurate.
  • It can be understood that the first electronic device can also determine whether the user is in a head-lowered state based on the target head posture parameter. When the user is in a head-lowered state and the head-lowering time exceeds a preset duration, the first electronic device can prompt the user to adjust the head posture, for example, to raise the head. Alternatively, when a head offset of the user is determined based on the user's target head posture parameter, the first electronic device may also prompt the user to adjust the head posture, for example, reminding the user to shift the head to the left so that the head is in the neutral position. The embodiments of the present application do not limit this.
  • Before step 501, the method provided by the embodiment of the present application may further include: when the first electronic device determines that the user's head posture is to be detected, the first electronic device displays prompt information, where the prompt information is used to ask whether to correct the user's head posture parameter.
  • If the first electronic device detects indication information triggered by the user indicating that the head posture is to be corrected, the first electronic device can perform steps 501 to 503.
  • If the first electronic device detects indication information triggered by the user indicating that there is no need to correct the head posture, the first electronic device may use the first head posture obtained in step 501 as the user's target head posture.
  • the first electronic device has a head posture detection control. When it is detected that the head posture detection control is triggered, the first electronic device can determine to detect the user's head posture.
  • The method provided by the embodiment of the present application may further include: the first electronic device feeds back the target head posture to a target device, or to a target application running in the first electronic device that needs to use the head posture.
  • The method provided by the embodiment of the present application may also include: the first electronic device determines, based on the target head posture parameter, the number of times and the time the user lowers the head within a target time period (for example, one day, 5 minutes, or 2 minutes).
  • The head posture can be used in many scenarios, such as cervical spine health applications.
  • For example, head-worn devices such as smart glasses, or smart wearable devices such as smart bracelets, can obtain the target head posture parameters and record the number of times and the time the user lowers the head every day.
  • The head posture can also be used in somatosensory applications, such as somatosensory games: users can control operations in the game by adjusting head movements and performing human-computer interaction with the head-worn device, and correct head posture parameters can improve the sensitivity of somatosensory games.
  • the above step 501 can be implemented in the following manner: the first electronic device acquires the user's head image when the user wears the head-mounted device. The first electronic device obtains the first head posture parameter when the user wears the head wearable device based on the user's head image.
  • the mobile phone can capture an image of the user's head when the user wears the head-mounted device.
  • the mobile phone usually has an image acquisition device (such as a camera), and user A wears the head-mounted device.
  • user B can use his mobile phone to photograph user A wearing the head-mounted device.
  • In this case, the above step 501 can be implemented in the following manner: the mobile phone controls the image acquisition device to acquire an image while the user wears the head-worn device, and the image at least includes the user's head image.
  • the mobile phone processes the head image to obtain the first head posture parameter when the user wears the head wearable device.
  • the mobile phone has a 3D posture algorithm, and the mobile phone can use the 3D posture algorithm to process the head image to obtain the first head posture parameters when the user wears the head wearable device.
  • The mobile phone can obtain the user's head images collected from multiple angles. For example, the mobile phone collects a frontal head image of the user wearing the head-worn device, and one or more side head images from different angles.
  • The mobile phone uses the 3D posture algorithm to process each of the above head images to obtain the user's head posture parameters reflected in each head image, and then obtains the first head posture parameter based on the head posture parameters reflected in the individual head images. For example, the mobile phone can average the head posture parameters reflected in the head images to obtain the first head posture parameter. For example, when the mobile phone prompts to collect a frontal head image, the user points the mobile phone at the front of the user; when the mobile phone prompts to collect the user's left head image, the user points the mobile phone at the user's left side to collect the side head image. It is understandable that during the process of collecting the frontal head image and the side head images, the mobile phone can also prompt the user to keep the current head posture unchanged.
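  • The averaging option mentioned above might look like this (illustrative only; the per-image angles are placeholders):

```python
import statistics

def fuse_head_pose(per_image_angles_deg):
    """Fuse the head-posture angles estimated from the frontal image and
    one or more side images by averaging, one of the options named above."""
    return statistics.fmean(per_image_angles_deg)

print(fuse_head_pose([14.2, 15.1, 13.8]))  # ~14.37 degrees
```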
  • In a case in which the captured image is a full-body image, the method provided by the embodiment of the present application may also include: the mobile phone extracts the user's head image from the full-body image.
  • For example, the mobile phone can capture an image of the user's head wearing a Bluetooth headset on the left ear.
  • For another example, the head image may be the head image of the user wearing the smart glasses.
  • In addition, the user can directly obtain the user's head image through the camera software that comes with the mobile phone, and then upload the captured image to the application software to obtain the first head posture parameter and the first device posture parameter.
  • Taking the head-worn device being smart glasses and user A taking a head image of user A as an example, user A clicks the head posture correction control 402 shown in (d) in Figure 4 to trigger the mobile phone to enter the shooting interface shown in (a) of Figure 8.
  • In the shooting interface shown in (a) of Figure 8, user A points the mobile phone at user A, who is wearing the smart glasses.
  • User A can trigger control 801 to input a shooting instruction to the mobile phone; when the mobile phone detects the shooting instruction, the head image shown in (b) in Figure 8 is captured through the camera of the mobile phone.
  • The mobile phone can also display a "retake" control 802 and a "confirm" control 803 when displaying the head image.
  • When the "confirm" control 803 is triggered, the mobile phone determines the first head posture parameter based on the captured image.
  • When the "retake" control 802 is triggered, the mobile phone re-enters the interface shown in (a) of Figure 8 and prompts the user to complete the head image collection within a preset time period (such as 10 seconds).
  • In some embodiments, after the mobile phone captures the above image, the mobile phone can also feed the image back to a server, so that the server processes the image to obtain the first head posture parameter of the user when wearing the head-worn device. Afterwards, the server can feed the first head posture parameter back to the mobile phone.
  • Of course, the head image can also be taken by user B triggering the mobile phone, and the embodiment of the present application does not limit this.
  • In some embodiments, the mobile phone can also obtain images of the user wearing the head-worn device from other mobile phones and other devices with image acquisition functions; the embodiment of the present application does not limit this.
  • In other embodiments, the head-worn device or another wearable device can obtain the image captured by the mobile phone from the mobile phone; that is, the mobile phone can feed the image back to the head-worn device or the other wearable device to calculate the first head posture parameter, or the mobile phone feeds the first head posture parameter calculated using the image back to the head-worn device or the other wearable device. The embodiments of the present application do not limit this.
  • In some embodiments, when the mobile phone is photographing the user, since the user is holding the mobile phone, it is inevitable that the posture of the mobile phone itself will change; for example, the mobile phone will tilt, which causes the head posture angle calculated by the mobile phone from the captured head image to be inaccurate. Therefore, in this embodiment, after the mobile phone acquires the user's head image and calculates initial head posture parameters, the mobile phone uses its own inertial sensor to detect the device posture parameters of the mobile phone and sends them to the processor of the mobile phone. The processor of the mobile phone compensates the initial head posture according to the device posture parameter of the mobile phone, and finally obtains a compensated head posture parameter, that is, the first head posture parameter.
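  • A minimal sketch of this compensation, assuming a simple additive model for tilt about the camera axis (a real implementation would compose 3D rotations, and the sign convention is an assumption):

```python
def compensate_for_phone_tilt(initial_head_deg, phone_roll_deg):
    """Subtract the phone's own tilt, detected by its inertial sensor, from
    the head angle measured in the tilted camera image; the result is the
    first head posture parameter."""
    return initial_head_deg - phone_roll_deg

# A 5-degree phone tilt would otherwise be misread as extra head tilt.
print(compensate_for_phone_tilt(18.0, 5.0))  # 13.0 degrees
```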
  • In the case that the first electronic device is a head-worn device, the above step 501 can be implemented in the following manner: the first electronic device obtains the first head posture parameter of the user wearing the head-worn device from another device (such as a mobile phone); or the image taken by the other device while the user wears the head-worn device is fed back to the head-worn device, and the head-worn device processes the image to obtain the first head posture parameter of the user when wearing the head-worn device.
  • It can be understood that when the head-worn device performs the above method, the head-worn device can obtain first information from the mobile phone, and the first information is used to determine the first head posture parameter when the user wears the head-worn device.
  • The first information may be the first head posture parameter of the user wearing the head-worn device, determined by the mobile phone based on the captured image and provided by the mobile phone to the head-worn device; or it may be the image, captured by the mobile phone, of the user wearing the head-worn device. The embodiment of the present application does not limit this.
  • Whether the head-worn device obtains the first head posture parameter from other devices or obtains the above image from other devices, the head-worn device needs to establish a wireless communication connection with the other devices, such as a Bluetooth connection; this is not limited in the embodiments of the present application.
  • In addition, the head-worn device may have a first control; when the first control is triggered, the head-worn device determines that the user's first head posture parameter needs to be corrected.
  • Alternatively, the mobile phone runs an application paired with the head-worn device; the interface shown in Figure 9 is the interface of this application, and the user can click the "correction control" on the interface to trigger the head-worn device to determine that the user's first head posture parameter needs to be corrected.
  • Of course, the first head posture parameter may also be collected by the head-worn device using its own sensor.
  • the above describes the process of how the first electronic device obtains the first head posture parameter.
  • the following will describe the process of how the first electronic device obtains the first device posture parameter of the second electronic device.
  • In some embodiments, the above step 502 can be implemented in the following manner: the mobile phone acquires a head image of the user wearing the head-worn device, and the mobile phone processes the head image to determine the first device posture parameter of the head-worn device.
  • The first electronic device obtaining the first device posture parameter of the head-worn device from the device that captured the head image can be achieved in the following manner one, taking the first electronic device being a mobile phone as an example:
  • When the mobile phone communicates with the head-worn device, the basic attributes of the head-worn device are stored in the mobile phone, and an outline image of the head-worn device is also preset among the various parameters of the head-worn device, so that the mobile phone can obtain the outline image of the connected head-worn device; from this, the first device posture parameter of the head-worn device can be obtained.
  • Taking the head-worn device as a Bluetooth headset, as shown in (a) in Figure 10, when an image of the user wearing a Bluetooth headset on the head is captured, the preset outline of the Bluetooth headset will appear on the display of the mobile phone.
  • The Bluetooth headset actually worn by the user may not coincide with the preset shape outline, which means that there is a device attitude angle.
  • The processor of the mobile phone will calculate the angle by which the Bluetooth headset deviates from the preset shape through an algorithm; the algorithm used for the calculation can be image tracking technology, which is not limited here.
  • That is, the outline image can be rotated to coincide with the Bluetooth headset actually worn, and the angle of rotation is the device attitude angle of the Bluetooth headset, that is, the first device posture parameter of the head-worn device.
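  • The rotation-to-coincidence step can be sketched as follows (a simplification: each outline is reduced to one reference segment of two keypoints; real image-tracking pipelines fit many points):

```python
import math

def outline_rotation_deg(preset_pts, actual_pts):
    """Angle that rotates the preset outline's reference segment onto the
    segment of the headset as actually worn; this rotation angle is taken
    as the device attitude angle."""
    def heading(p, q):
        return math.atan2(q[1] - p[1], q[0] - p[0])
    return math.degrees(heading(*actual_pts) - heading(*preset_pts))

print(outline_rotation_deg(((0, 0), (0, 10)), ((0, 0), (2, 10))))  # ~-11.3
```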
  • In some embodiments, the method provided by the embodiment of the present application may also include: the user sets in the mobile phone which component of the head-worn device is used to determine the first device posture parameter of the head-worn device.
  • For example, the user selects the attitude angle of the temples of the smart glasses as the first device posture parameter of the smart glasses, as shown in (b) in Figure 10.
  • A preset outline image of the temples of the smart glasses will appear on the mobile phone display; the outline image of the temples can be rotated to coincide with the temples 1002 of the smart glasses as actually worn, and the rotation angle is the device attitude angle of the smart glasses, that is, the first device posture parameter of the head-worn device.
  • The first electronic device obtaining the first device posture parameter of the head-worn device from the device that captured the head image can also be implemented in the following manner two, taking the first electronic device being a mobile phone as an example:
  • The mobile phone processor uses a preset standard line of the head-worn device as the reference; the preset standard line can be rotated to coincide with the corresponding line of the head-worn device as actually worn, and the angle of rotation is the device attitude angle 1001 of the head-worn device, which is the first device posture parameter of the head-worn device.
  • Taking the head-worn device as a Bluetooth headset, as shown in (c) in Figure 10, a preset standard line 1005 will appear on the display of the mobile phone; the preset standard line 1005 is based on the edge of the long handle of the Bluetooth headset (the solid line in the figure).
  • The preset standard line 1005 can be rotated to coincide with the actual line 1004 (the dotted line in the figure), and the angle of rotation is the device attitude angle 1003 of the Bluetooth headset, that is, the first device posture parameter of the head-worn device.
  • In some embodiments, the above step 502 can be implemented in the following manner: the mobile phone obtains, from the head-worn device, the first device posture parameter of the head-worn device collected by the head-worn device while the user wears it.
  • In this manner, taking the first electronic device being a mobile phone with an image acquisition device (such as a camera) as an example, the mobile phone obtains the first device posture parameter of the head-worn device from the head-worn device.
  • For example, the mobile phone triggers the head-worn device to report to the mobile phone the first device posture parameter of the head-worn device collected while the user wears the head-worn device.
  • For example, the mobile phone displays an interface as shown in Figure 11; the user can click the "Bluetooth device posture" control 1101 shown in Figure 11.
  • In response, the mobile phone sends an instruction to query the first device posture parameter to the head-worn device through the wireless communication connection with the head-worn device; the head-worn device uses its own sensor to collect the first device posture parameter of the head-worn device, and then reports the collected first device posture parameter to the mobile phone.
  • The above example takes the mobile phone triggering the head-worn device to report the first device posture parameter to the mobile phone as an example; in practice, the head-worn device can also actively report the first device posture parameter of the head-worn device to the mobile phone.
  • For example, the head-worn device can collect the first device posture parameter of the head-worn device regularly or from time to time, and then send the collected first device posture parameter to the mobile phone.
  • That is, the head-worn device can collect the first device posture parameter according to a preset cycle or on each trigger; the head-worn device can then feed the first device posture parameter back to the mobile phone regularly, under the trigger of the mobile phone, or every time the first device posture parameter is collected. This is not limited in the embodiments of the present application.
  • For example, the mobile phone establishes a communication connection with the head-worn device, and when the user wears the head-worn device, the mobile phone can periodically obtain from the head-worn device the first device posture parameter collected by the head-worn device; the first device posture parameter is a parameter that reflects the orientation of the head-worn device.
  • Taking the head-worn device having a sensor (such as an IMU) that measures the first device posture parameter as an example, the head-worn device controls the IMU to collect data, and can then determine the first device posture parameter of the head-worn device when worn by the user through calculation from the internal three-axis gravity distribution, or through calculation from the three-axis gravity distribution fused with the gyroscope.
  • Of course, the head-worn device can send the first device posture parameter to the mobile phone based on a trigger from the mobile phone; alternatively, when the head-worn device detects a change in the first device posture parameter, the re-collected first device posture parameter of the head-worn device is sent to the mobile phone.
  • the embodiments of the present application do not limit this.
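  • The gravity-distribution and gyroscope-fusion computation mentioned above can be sketched with a complementary filter (the filter and its 0.98 weight are illustrative assumptions; the patent only states that the three-axis gravity distribution may be used alone or fused with the gyroscope):

```python
import math

def tilt_from_gravity(ax, ay, az):
    """Pitch estimated from the IMU's three-axis gravity distribution alone."""
    return math.degrees(math.atan2(ax, math.hypot(ay, az)))

def fuse_step(prev_pitch_deg, gyro_rate_dps, dt_s, ax, ay, az, alpha=0.98):
    """One complementary-filter step: integrate the gyroscope rate and pull
    the estimate toward the accelerometer's gravity-based tilt."""
    gyro_pitch = prev_pitch_deg + gyro_rate_dps * dt_s
    return alpha * gyro_pitch + (1 - alpha) * tilt_from_gravity(ax, ay, az)
```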
  • In some cases, the mobile phone cannot know whether the user has placed the head in the neutral position as required, or the user cannot be sure whether he or she is in a standard neutral position; the self-perceived neutral position of many users with posture problems is actually skewed.
  • If the user's head image is not collected while the user's head is in the neutral position, then the first head posture parameter calculated based on the image may also be inaccurate.
  • Therefore, before the mobile phone takes an image of the user wearing the head-worn device, the method provided by the embodiment of the present application may also include: the mobile phone detects whether the user's head is in the neutral position. The mobile phone camera calibrates a neutral position according to the reference coordinates of the camera itself; when the user's head is photographed, the mobile phone processor tracks the user's head and compares it with the calibrated neutral position to detect whether the user's head is in the neutral position. When the user's head is not in the neutral position, the mobile phone outputs prompt information, which is used to prompt the user to adjust the head to the neutral position.
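  • The neutral-position check can be sketched as a comparison against the calibrated neutral angle (the 3-degree tolerance and the sign convention are assumed for illustration):

```python
def neutral_check(tracked_axis_deg, calibrated_neutral_deg, tol_deg=3.0):
    """Compare the tracked head axis with the neutral position calibrated in
    the camera's own reference frame and build a prompt if it deviates."""
    offset = tracked_axis_deg - calibrated_neutral_deg
    if abs(offset) <= tol_deg:
        return "Head is in the neutral position"
    direction = "left" if offset > 0 else "right"  # assumed sign convention
    return f"Please tilt your head {abs(offset):.1f} degrees to the {direction}"
```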
  • The prompt information may be text prompt information displayed on the interface, such as "Please adjust the head to the neutral position", or it may be voice prompt information, which is not limited in the embodiment of the present application.
  • the embodiments of this application do not limit the specific manner of outputting prompt information.
  • it can be voice output, vibration output, indicator light output, or specific sound output (such as buzzer, specific music, long beep, etc.).
  • When the output form is voice output, this implementation does not limit the specific content of the voice output, as long as it can remind the user to adjust the head position to the neutral position; for example, the voice content may include the head adjustment amplitude, the device adjustment amplitude, and so on.
  • In some embodiments, the method provided by the embodiment of the present application also includes: when the user's head-worn device is not in the neutral position, the mobile phone displays visual guidance on the interface of the mobile phone, and the visual guidance is used to guide the user to adjust the head-worn device to the neutral position.
  • The visual guidance may be the difference between the user's head-worn device and the neutral position, or prompt information used to instruct the user in which direction and by how much to move the head-worn device; this is not limited in the embodiments of the present application.
  • For example, the visual guidance displays the standard position on the display interface through the smart glasses data built into the application software, and dynamically tracks the smart glasses worn by the user through the camera to display the position of the smart glasses in real time.
  • The neutral position takes the coordinate system of the image acquisition device of the first electronic device (such as the mobile phone camera) as the standard, combined with the preset standard line of the head-worn device or the coordinates of the reference system of the preset outline image.
  • For example, before the user triggers the mobile phone to obtain the head image of the user wearing the head-worn device, the user triggers the mobile phone to display the shooting interface 1201 shown in (a) of Figure 12, and then points the phone's camera at the user wearing the smart glasses.
  • A user wearing smart glasses can use the front camera of the mobile phone to take a selfie, and another user can also use the rear camera of the mobile phone to take the picture.
  • Mobile phone cameras include front-facing cameras and rear-facing cameras, generally used for selfies and normal shooting respectively; in this embodiment, the purpose is to obtain head images, so the camera used is not limited.
  • A line 1202 as shown in (b) of Figure 12 can be displayed on the shooting interface; this line 1202 is used to determine whether the head-worn device worn by the user is in the specified position (i.e., the neutral position).
  • The figure may also display a line 1203, which is the actual position of the user's current head-worn device. In this way, the user can determine whether the user's current head-worn device is in the neutral position by comparing lines 1202 and 1203.
  • When the head-worn device is not in the neutral position, the mobile phone can output a voice prompt message, for example, "Please move the head-worn device to ensure that it is in the neutral position."
  • In response, the user can choose to adjust the head-worn device so that it is in the neutral position, as shown in (d) in Figure 12.
  • The interface shown in (b) of Figure 12 displays such guidance because, when the smart glasses worn on the user's head are not in the neutral position, whether by adjusting the shooting position of the mobile phone or by letting the photographed user adjust the position of the smart glasses, the goal is to keep the user as close to the neutral position as possible. However, when the user adjusts the smart glasses or moves the mobile phone, it is usually not possible to reach the neutral position in one go.
  • Therefore, the mobile phone can also obtain the difference between the position of the user's smart glasses and the neutral position in real time and mark it on the interface in real time; for example, the difference between the device attitude angle and the neutral position is shown in (c) in Figure 12, thus guiding the user to approach the neutral position as closely as possible.
  • For another example, before the user triggers the mobile phone to obtain the head image of the user wearing the head-worn device, the user triggers the mobile phone to display the shooting interface 1301 shown in (a) of Figure 13; as shown in (a) of Figure 13, the shooting interface 1301 displays a line 1303 indicating the neutral position.
  • The shooting interface 1301 may also display a prompt message 1302 prompting the user to keep the head in the neutral position during image collection.
  • In addition, the mobile phone has an IMU sensor. The IMU sensor in the mobile phone can collect the first device posture parameter of the mobile phone in real time and report it; alternatively, when the mobile phone detects that the user's head posture needs to be corrected, the IMU sensor in the mobile phone is triggered to detect the device posture parameters of the mobile phone. This is not limited in the embodiments of the present application.
  • When a mobile phone uses a camera to collect the user's head image, it can collect multiple images of the user in the same posture from different angles. For example, taking the user wearing smart glasses as an example, the photographer can use a mobile phone to capture a frontal image of the user wearing the smart glasses and an image of each side, so that the mobile phone can use the collected frontal image and each side image to separately calculate the user's head posture reflected in each image, and then obtain the final first head posture parameter of the user based on the head postures calculated from the individual images. Alternatively, the mobile phone uses each image to calculate the device posture of the smart glasses reflected in that image, to obtain the final first device posture parameter of the smart glasses.
  • For example, the frontal image of the user wearing the smart glasses is an image showing the frame of the smart glasses, and the device posture determined from it is one first device posture parameter; the device posture determined from a side image is another first device posture parameter. Either of the two first device posture parameters can be used alone as the device posture of the smart glasses.
  • In some embodiments, the second electronic device includes a first component and a second component, and obtaining the first device posture parameter of the second electronic device includes: obtaining the device posture parameter of the first component and the device posture parameter of the second component; the first device posture parameter of the second electronic device is determined based on the device posture parameter of the first component and the device posture parameter of the second component.
  • Obtaining the device posture parameters of the first component and the second component includes: obtaining a second image and a third image, where the second image is the head image of the user wearing the first component, and the third image is the head image of the user wearing the second component; the device posture parameter of the first component is determined from the second image, and the device posture parameter of the second component is determined from the third image.
  • Take the second electronic device being a head-worn device as an example. When the head-worn device is a Bluetooth headset, the first component is the left earphone and the second component is the right earphone; the second image is the left-side head image of the user wearing the left earphone, and the third image is the right-side head image of the user wearing the right earphone.
  • When the head-worn device is smart glasses, the first component is the left temple and the second component is the right temple; the second image is the left-side head image of the user wearing the smart glasses, and the third image is the right-side head image of the user wearing the smart glasses.
  • A head-worn device generally includes a first component and a second component, and the first component and the second component are worn at different positions on the head.
  • In this case, the device posture parameter of the first component and the device posture parameter of the second component are calculated separately, and the first device posture parameter of the entire head-worn device is then calculated from the device posture parameter of the first component and the device posture parameter of the second component using a preset algorithm.
  • Take the head-worn device as smart glasses, where the first component of the smart glasses is the left temple, the second component is the right temple, and the preset algorithm is averaging.
  • The photographer takes images of both sides of the user's head, uses the left-side image to calculate the device attitude angle of the left temple as 20°, and uses the right-side image to calculate the device attitude angle of the right temple as 10°.
  • The user can choose whether to accept the calculated device posture parameters of the two sides.
  • For example, a selection dialog box 1401 will appear in the interface, as shown in (a) in Figure 14. If the user selects "No", the mobile phone will not perform the next step of averaging, but will issue a prompt message 1402 to prompt the user to adjust the device; as shown in (b) in Figure 14, the user can adjust the temples before performing image capture again. If the user selects "Yes", the phone will perform the next step of averaging to obtain the final device posture.
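  • The worked numbers above, under the averaging algorithm named in this example:

```python
left_temple_deg, right_temple_deg = 20.0, 10.0  # from the left/right side images

# The preset algorithm in this example is averaging: the device posture of the
# whole pair of glasses is the mean of the two temple attitude angles.
device_pose_deg = (left_temple_deg + right_temple_deg) / 2
print(device_pose_deg)  # 15.0 degrees
```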
  • In some embodiments, the head-worn device generally includes a first component and a second component, and IMUs are respectively provided in the first component and the second component. When the first component and the second component are worn at different positions on the head, for example, the first component is worn on the user's left ear and the second component is worn on the user's right ear, the head-worn device can obtain the device posture parameters of the first component and the second component through the IMUs in the first component and the second component.
  • Take the head-worn device as smart glasses, where the first component of the smart glasses is the left temple and the second component is the right temple, and IMUs are respectively provided in the left and right temples.
  • The smart glasses connect and communicate with an electronic device; the user uses the electronic device, such as a mobile phone, to take a picture of the user's head.
  • The IMUs obtain the device posture parameters of the left temple and the right temple respectively and transmit them to the mobile phone; the mobile phone calculates the first head posture parameter from the captured head image, and the first head posture parameter correction is performed by combining the two device posture parameters.
  • Optionally, the first electronic device may select the IMU in the first component or the second component, and the head-worn device can, according to the instructions of the first electronic device, measure the device posture of the head-worn device with the specific IMU indicated by the first electronic device.
  • For example, when the head-worn device is a Bluetooth headset, the first component of the Bluetooth headset is the left earphone, the second component is the right earphone, and IMUs are respectively provided in the left earphone and the right earphone.
  • When the IMUs obtain the device postures of the first component and the second component and transmit them to the electronic device, the first electronic device will display instruction information for the user to select the first component, the second component, or both the first component and the second component, and the first head posture parameter correction is performed using the device posture parameters obtained by the selected component(s).
  • For example, when the IMUs in both the left and right earphones of the Bluetooth headset transmit the obtained device posture parameters to the mobile phone, the mobile phone will display the interface shown in Figure 15. The user can select the IMU data of the left earphone by triggering control 1501, or select the IMU data of the right earphone by triggering control 1502; the data of the left-earphone IMU and the right-earphone IMU can also be selected at the same time, that is, control 1501 and control 1502 can be turned on at the same time. If the data of both the left earphone and the right earphone are selected, the mobile phone will process the two sets of data through the preset algorithm to obtain the first device posture parameter.
  • In the embodiments provided in this application, it should be understood that the disclosed apparatus/computer device and methods can be implemented in other ways.
  • The apparatus/computer device embodiments described above are only illustrative; for example, the division into modules or units is only a logical function division, and in actual implementation there may be other division methods: multiple units or components can be combined or integrated into another system, or some features can be ignored or not implemented.
  • The coupling or direct coupling or communication connection between the parts shown or discussed may be implemented through some interfaces; the indirect coupling or communication connection between devices or units may be in electrical, mechanical, or other forms.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)

Abstract

This application discloses a method and apparatus for determining a head posture, relates to the field of terminal technology, and is applied to a first electronic device. The method includes: obtaining a first head posture parameter of a user; in the process of obtaining the first head posture parameter, obtaining a first device posture parameter of a target electronic device, where the target electronic device is a second electronic device or the first electronic device; and obtaining, according to the first head posture parameter and the first device posture parameter, the corrected head posture parameter of the user. By correcting the user's first head posture parameter with the obtained first head posture parameter of the user and the first device posture parameter of the second electronic device, the solution of this application can obtain a target head posture parameter closer to the user's true head posture, so that large errors are not caused by differences in users' heads or in their habits of wearing head-worn devices, and applications that subsequently run based on the head posture are more accurate.

Description

Method and apparatus for determining head posture

This application claims priority to Chinese Patent Application No. 202210476012.6, filed with the China National Intellectual Property Administration on April 29, 2022 and entitled "Method and apparatus for determining head posture", which is incorporated herein by reference in its entirety.
Technical Field

This application relates to the field of terminal technology, and in particular to a method and apparatus for determining a head posture.

Background

Head-worn devices such as smart glasses and earphones generally have built-in inertial sensors that can be used to detect the head posture. However, an important problem exists in practical applications: differences between people's heads lead to different ear heights and auricle shapes, and for head-worn devices such as smart glasses and earphones, wearing habits also differ greatly. This causes the relative posture between the head-worn device and the head to differ, and this gap is difficult to correct, so the head posture cannot be detected accurately, which affects the accuracy of subsequent applications.

For example, taking a user wearing earphones, under the same head posture, if the user wears the earphones in different ways, the head postures measured with the same earphones under the different wearing ways are often also different.
Summary

This application provides a method and apparatus for determining a head posture, to correct a user's head posture so that the user's head posture estimated by an electronic device is closer to the user's actual head posture.

The technical solution is as follows:

According to a first aspect, a method for determining a head posture is provided, applied to a first electronic device. The method includes: obtaining a first head posture parameter of a user; in the process of obtaining the first head posture parameter, obtaining a first device posture parameter of a target electronic device, where the target electronic device is a second electronic device or the first electronic device; and obtaining, according to the first head posture parameter and the first device posture parameter, a target head posture parameter, where the target head posture parameter is the corrected head posture parameter of the user.

The first electronic device in the embodiments of this application may be a head-worn device, a mobile phone, or the like. When the first electronic device is a device other than the head-worn device, the target electronic device is the second electronic device; when the first electronic device is the head-worn device, the target electronic device is the first electronic device.

In the solution of this application, the first electronic device corrects the user's first head posture parameter by obtaining the user's first head posture parameter and the first device posture parameter of the second electronic device, yielding a target head posture parameter closer to the user's true head posture. Correcting the first head posture parameter prevents large errors caused by differences in users' heads and differences in habits of wearing head-worn devices, and also makes applications that subsequently run based on the head posture more accurate.
In a possible implementation of this application, the first electronic device obtaining the user's first head posture parameter includes: the first electronic device obtains a head image of the user, and the first electronic device obtains the user's first head posture parameter according to the user's head image.

In a possible implementation of this application, the target electronic device is the second electronic device, the user's head image is collected by the first electronic device, and the first electronic device further includes a first sensor. The method provided by the embodiment of this application further includes: the first electronic device obtains, through the first sensor, a second device posture parameter of the first electronic device within a first time period, where the first time period is the time period in which the first electronic device collects the user's head image. The first electronic device obtaining the user's first head posture parameter according to the user's head image includes: the first electronic device obtains an initial head posture parameter according to the user's head image, and the first electronic device obtains the first head posture parameter according to the initial head posture parameter and the second device posture parameter.

In a possible implementation of this application, when the target electronic device is the second electronic device and the user's head image is collected by the first electronic device, the method provided by the embodiment of this application may further include: when the first electronic device detects that the user's head posture is to be determined, it collects, through an image acquisition component of the first electronic device (such as a camera), a head image of the user wearing the second electronic device.

In a possible implementation of this application, the first electronic device obtaining the user's head image includes: when a trigger condition for detecting the head posture parameter is met, triggering a third electronic device to collect the user's head image, and obtaining, from the third electronic device, the user's head image collected by the third electronic device. For example, if the first electronic device is a mobile phone or a head-worn device, the first electronic device can trigger a device other than the first electronic device to collect the user's head image.

In a possible implementation of this application, the second electronic device is a head-worn device, and the first electronic device obtaining the first device posture parameter of the second electronic device includes: the first electronic device obtains a first image of the user, where the first image is a head image of the user wearing the head-worn device; the first electronic device determines the first device posture parameter of the second electronic device according to the first image. In this solution, the first electronic device can obtain the first device posture parameter of the second electronic device by analyzing the first image.

In a possible implementation of this application, the second electronic device is a head-worn device, the second electronic device has a second sensor, and the second sensor is used to collect the first device posture parameter of the second electronic device. The first electronic device obtaining the first device posture parameter of the second electronic device includes: the first electronic device receives the first device posture parameter from the second electronic device. In this solution, the second electronic device can itself measure the first device posture parameter with the second sensor and upload it to the first electronic device.

In a possible implementation of this application, before the first electronic device receives the first device posture parameter from the second electronic device, the method provided by the embodiment of this application further includes: the first electronic device triggers the second electronic device to collect the first device posture parameter of the second electronic device. For example, the first electronic device can send a collection instruction to the second electronic device through the communication connection between them, where the collection instruction is used to trigger the second electronic device to collect and report the first device posture parameter.

In a possible implementation of this application, the second electronic device includes a first component and a second component, and the first electronic device obtaining the first device posture parameter of the second electronic device includes: the first electronic device obtains the device posture parameter of the first component and the device posture parameter of the second component, and determines the first device posture parameter of the second electronic device according to the device posture parameter of the first component and the device posture parameter of the second component.

In a possible implementation of this application, the first electronic device obtaining the device posture parameter of the first component and the device posture parameter of the second component includes: the first electronic device obtains a second image and a third image, where the second image is a head image of the user wearing the first component and the third image is a head image of the user wearing the second component; the first electronic device determines the device posture parameter of the first component according to the second image, and determines the device posture parameter of the second component according to the third image. When the first electronic device is a head-worn device, the second image and the third image may be captured by an image acquisition device such as a mobile phone and then sent to the head-worn device. When the first electronic device is a device other than the head-worn device, the first electronic device can capture the second image and the third image of the user wearing the second electronic device (that is, the head-worn device).

In a possible implementation of this application, before the first electronic device obtains the second image and the third image, the method provided by the embodiment of this application further includes: the first electronic device displays, on the display screen of the first electronic device, at least one of a first control and a second control, where the first control is used to prompt collection of the second image and the second control is used to prompt collection of the third image.

In a possible implementation of this application, the first component and the second component each have a third sensor, and the first electronic device obtaining the device posture parameter of the first component and the device posture parameter of the second component includes: the first electronic device obtains, from the second electronic device, the device posture parameter of the first component collected by the third sensor of the first component, and obtains, from the second electronic device, the device posture parameter of the second component collected by the third sensor of the second component.

In a possible implementation of this application, before the first electronic device obtains the user's first head posture parameter, the method provided by the embodiment of this application further includes: the first electronic device issues first prompt information, where the first prompt information is used to determine whether the user's head is in the standard position.

In a possible implementation of this application, the first electronic device has a display screen and the first prompt information is displayed on the display screen, and the method provided by the embodiment of this application further includes: displaying, on the display screen, the distance between the user's current head position and the standard position.
According to a second aspect, an electronic device is provided, including a processor coupled to a memory, where the processor is configured to execute the computer program or instructions stored in the memory, so that the electronic device implements the above method for determining a head posture.

According to a third aspect, a computer-readable storage medium is provided, storing a computer program that, when run on an electronic device, causes the electronic device to execute the above method for determining a head posture.

It can be understood that for the beneficial effects of the second and third aspects, reference may be made to the relevant description in the first aspect, which is not repeated here.
Brief Description of Drawings

FIG. 1 is a system for determining a head posture provided by an embodiment of this application;

FIG. 2 is a schematic structural diagram of an electronic device provided by an embodiment of this application;

FIG. 3 is a block diagram of the software structure of an electronic device provided by an embodiment of this application;

FIG. 4 is a schematic diagram of sports and health software provided by an embodiment of this application;

FIG. 5 is a flowchart of a method for determining a head posture provided by an embodiment of this application;

FIG. 6 is a reference schematic diagram of coordinate systems provided by an embodiment of this application;

FIG. 7 is a schematic diagram of device attitude angles of a head-worn device provided by an embodiment of this application;

FIG. 8 is a schematic diagram of a mobile phone shooting interface and selection interface provided by an embodiment of this application;

FIG. 9 is a schematic diagram of a display interface for a mobile phone connection provided by an embodiment of this application;

FIG. 10 is a schematic diagram of second device posture parameters provided by an embodiment of this application;

FIG. 11 is a schematic diagram of a pairing interface for Bluetooth device posture provided by an embodiment of this application;

FIG. 12 is a schematic diagram of a visual guidance display interface provided by an embodiment of this application;

FIG. 13 is a schematic diagram of a display interface prompting head adjustment provided by an embodiment of this application;

FIG. 14 is a schematic diagram of a selection prompt interface for the device attitude angles of two sides provided by an embodiment of this application;

FIG. 15 is a schematic diagram of a display interface for connecting left and right Bluetooth earphones according to an embodiment of this application.
Detailed Description

To facilitate a clear description of the technical solutions of the embodiments of this application, words such as "first" and "second" are used in the embodiments of this application to distinguish identical or similar items whose functions and effects are basically the same. For example, the first component and the second component are so named only to distinguish different components, and their order is not limited. Those skilled in the art can understand that words such as "first" and "second" do not limit the quantity or execution order, and do not necessarily indicate a difference.

It should be noted that in this application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in this application should not be construed as being more preferred or advantageous than other embodiments or designs. Rather, the use of words such as "exemplary" or "for example" is intended to present related concepts in a concrete manner.

In this application, "at least one" means one or more, and "multiple" means two or more. "And/or" describes the association relationship of associated objects and indicates that three relationships may exist; for example, A and/or B may indicate the cases in which A exists alone, A and B exist at the same time, and B exists alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following" or a similar expression refers to any combination of these items, including a single item or any combination of multiple items. For example, at least one of a, b, or c may represent a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or multiple.
An embodiment of this application provides a method for determining a head posture, and the method is applicable to any electronic device, such as a mobile phone, a tablet computer, a wearable device (for example, a watch, a bracelet, a smart helmet, etc.), a vehicle-mounted device, a smart home device, an augmented reality (AR)/virtual reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), and the like. In the method for determining a head posture provided by the embodiment of this application, a first electronic device can obtain a user's first head posture parameter; in addition, the first electronic device obtains a first device posture parameter of a second electronic device, and then obtains, according to the first device posture parameter of the second electronic device and the user's first head posture parameter, a target head posture parameter of the user wearing the second electronic device, so as to correct the head posture parameter of the user wearing the second electronic device. In the embodiments of this application, the first electronic device is exemplified by a mobile phone and the second electronic device by a head-worn device (such as a Bluetooth headset or smart glasses); in the following embodiments the second electronic device is the head-worn device. To a certain extent, this method enhances the intelligence of the electronic device, helps to correct the user's bad usage habits, and improves the user experience.
Before the embodiments of this application are explained in detail, the application scenarios of the embodiments of this application are described.

As shown in FIG. 1, FIG. 1 is a system for determining a head posture provided by an embodiment of this application. The system includes a first electronic device 100 and a second electronic device 200, and the first electronic device 100 and the second electronic device 200 can establish and maintain a wireless connection through a wireless communication technology.

As an example, the first electronic device 100 may be a mobile phone, a tablet computer, a notebook computer, a wireless terminal device, or the like that has a display screen or an image acquisition device (such as a camera).

As an example, the second electronic device 200 may be a head-worn device, such as one or more of smart glasses and earphones (for example, a Bluetooth headset).

Optionally, the above wireless communication technology may be Bluetooth (BT), for example traditional Bluetooth or Bluetooth low energy (BLE), or a general 2.4G/5G frequency band wireless communication technology, or the like.

Optionally, the system may further include a third electronic device, which has an image acquisition function, for example an image acquisition device used to collect the user's head image to assist the first electronic device 100 in determining the user's first head posture parameter; or the image acquisition device is used to collect an image of the user wearing the head-worn device to assist the first electronic device 100 in determining the device posture parameter of the head-worn device.

For example, taking the second electronic device 200 being a Bluetooth headset: there may be many types of Bluetooth headsets, for example earbud-type, in-ear-type, and so on. A Bluetooth headset may include a first part and a second part worn on the user's left ear and right ear respectively. The first part and the second part may be connected by a cable, as in a neckband Bluetooth headset; they may also be two mutually independent parts, as in true wireless stereo (TWS) earphones.

In this application, a Bluetooth headset is a headset that supports a Bluetooth communication protocol. The Bluetooth communication protocol may be a traditional Bluetooth protocol or the BLE Bluetooth low energy protocol; of course, it may also be another new type of Bluetooth protocol introduced in the future.
For example, FIG. 2 shows a schematic structural diagram of an electronic device 300. The electronic device 300 may include a processor 310, an external memory interface 320, an internal memory 321, a universal serial bus (USB) interface 330, a charging management module 340, a power management module 341, a battery 342, an antenna 1, an antenna 2, a mobile communication module 350, a wireless communication module 360, an audio module 370, a receiver 370A, a microphone 370B, a headset jack 370C, a sensor module 380, buttons 390, a motor 391, an indicator 392, 1 to N cameras 393, 1 to N display screens 394, a subscriber identification module (SIM) card interface 395, and the like. The sensor module 380 may include a pressure sensor 380A, a fingerprint sensor 380B, a touch sensor 380C, a magnetic sensor 380D, a distance sensor 380E, a proximity light sensor 380F, an ambient light sensor 380G, an infrared sensor 380H, an ultrasonic sensor 380I, an electric field sensor 380J, an inertial sensor 380K, and so on.

It can be understood that the structure illustrated in the embodiment of this application does not constitute a specific limitation on the electronic device 300. In other embodiments of this application, the electronic device 300 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware. Both the first electronic device 100 and the second electronic device 200 are instances of the electronic device 300.

The processor 310 may include one or more processing units. For example, the processor 310 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and so on. Different processing units may be independent devices, or may be integrated in one or more processors.

The controller may be the nerve center and command center of the electronic device 300. The controller can generate operation control signals according to the instruction operation code and timing signals, and complete the control of fetching and executing instructions.

A memory may also be provided in the processor 310 for storing instructions and data. In some embodiments, the memory in the processor 310 is a cache. This memory may hold instructions or data that the processor 310 has just used or uses cyclically. If the processor 310 needs to use the instruction or data again, it can be called directly from the memory. Repeated accesses are avoided and the waiting time of the processor 310 is reduced, thereby improving the efficiency of the system.

In some embodiments, the processor 310 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, and so on.
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一些实施例中,处理器310可以包含多组I2C总线。处理器310可以通过不同的I2C总线接口分别耦合惯性传感器380K,充电器,闪光灯,摄像头393等。例如:处理器310可以通过I2C接口耦合触摸传感器380K,使处理器310与惯性传感器380K通过I2C总线接口通信,实现电子设备300的触控功能。
The I2S interface may be used for audio communication. In some embodiments, the processor 310 may contain multiple sets of I2S buses and may be coupled to the audio module 370 through an I2S bus to communicate with it. In some embodiments, the audio module 370 may pass audio signals to the wireless communication module 360 through the I2S interface, enabling calls to be answered through a Bluetooth earphone.
The PCM interface may likewise be used for audio communication, sampling, quantizing, and encoding analog signals. In some embodiments, the audio module 370 and the wireless communication module 360 may be coupled through a PCM bus interface.
In some embodiments, the audio module 370 may also pass audio signals to the wireless communication module 360 through the PCM interface, enabling calls to be answered through a Bluetooth earphone. Both the I2S interface and the PCM interface may be used for audio communication.
The UART interface is a universal serial data bus used for asynchronous communication. The bus may be bidirectional and converts the data to be transmitted between serial and parallel forms.
In some embodiments, the UART interface is typically used to connect the processor 310 and the wireless communication module 360; for example, the processor 310 communicates with the Bluetooth module in the wireless communication module 360 through the UART interface to implement the Bluetooth function. In some embodiments, the audio module 370 may pass audio signals to the wireless communication module 360 through the UART interface, enabling music playback through a Bluetooth earphone.
The MIPI interface may be used to connect the processor 310 with peripheral components such as the display 394 and the camera 393, and includes a camera serial interface (CSI), a display serial interface (DSI), and the like. In some embodiments, the processor 310 communicates with the camera 393 through the CSI interface to implement the shooting function of the electronic device 300, and with the display 394 through the DSI interface to implement its display function.
The GPIO interface can be configured by software as either a control signal or a data signal. In some embodiments, the GPIO interface may connect the processor 310 with the camera 393, the display 394, the wireless communication module 360, the audio module 370, the sensor module 380, and so on. The GPIO interface may also be configured as an I2C, I2S, UART, or MIPI interface, among others.
The USB interface 330 is an interface conforming to the USB standard and may specifically be a Mini USB, Micro USB, or USB Type-C interface. It may be used to connect a charger to charge the electronic device 300, to transfer data between the electronic device 300 and peripherals, or to connect a headset and play audio through it. The interface may also connect other electronic devices, such as an AR device.
It can be understood that the interface connections between the modules illustrated in this embodiment are only schematic and do not limit the structure of the electronic device 300. In other embodiments of this application, the electronic device 300 may use interface connections different from those above, or a combination of several of them.
The charging management module 340 receives charging input from a charger, which may be wireless or wired. In some wired-charging embodiments, the module 340 may receive charging input from a wired charger through the USB interface 330; in some wireless-charging embodiments, it may receive wireless charging input through the wireless charging coil of the electronic device 300. While charging the battery 342, the module 340 may also supply power to the device through the power management module 341.
The power management module 341 connects the battery 342 and the charging management module 340 to the processor 310. It receives input from the battery 342 and/or the charging management module 340 and supplies power to the processor 310, the internal memory 321, the external memory, the display 394, the camera 393, the wireless communication module 360, and so on. It may also monitor parameters such as battery capacity, charge-cycle count, and battery health (leakage, impedance).
In some other embodiments, the power management module 341 may instead be provided within the processor 310; in still others, the power management module 341 and the charging management module 340 may be provided in the same component.
The wireless communication function of the electronic device 300 can be implemented through the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, the modem processor, the baseband processor, and so on.
The antennas 1 and 2 transmit and receive electromagnetic signals. Each antenna in the electronic device 300 may cover one or more communication bands, and different antennas can be multiplexed to improve antenna utilization; for example, antenna 1 may be reused as a diversity antenna for the wireless LAN. In other embodiments, an antenna may be used together with a tuning switch.
The mobile communication module 350 can provide 2G/3G/4G/5G and other wireless communication solutions applied to the electronic device 300. It may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and so on. The module 350 may receive electromagnetic waves via antenna 1, filter and amplify them, and pass them to the modem processor for demodulation; it may also amplify signals modulated by the modem processor and radiate them out as electromagnetic waves via antenna 1.
In some embodiments, at least some functional modules of the mobile communication module 350 may be provided in the processor 310; in some embodiments, at least some of them may be provided in the same component as at least some modules of the processor 310.
The modem processor may include a modulator and a demodulator. The modulator modulates low-frequency baseband signals to be sent into medium/high-frequency signals; the demodulator demodulates received electromagnetic signals into low-frequency baseband signals and passes them to the baseband processor. After baseband processing, the signals are handed to the application processor, which outputs sound through an audio device (not limited to the receiver 370A and the like) or displays images or video through the display 394. In some embodiments, the modem processor may be a separate component; in others, it may be independent of the processor 310 and placed in the same component as the mobile communication module 350 or other functional modules.
The wireless communication module 360 can provide wireless communication solutions applied to the electronic device 300, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), the global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR). The module 360 may be one or more components integrating at least one communication processing module. It receives electromagnetic waves via antenna 2, frequency-modulates and filters the signals, and sends the processed signals to the processor 310; it may also receive signals to be sent from the processor 310, frequency-modulate and amplify them, and radiate them out as electromagnetic waves via antenna 2. In the embodiments of this application, a source electronic device and a target electronic device can establish a communication connection through each other's wireless communication modules 360.
In some embodiments, antenna 1 of the electronic device 300 is coupled to the mobile communication module 350 and antenna 2 to the wireless communication module 360, so that the device can communicate with networks and other devices through wireless communication technologies. These may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR. GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite based augmentation systems (SBAS).
The electronic device 300 implements the display function through the GPU, the display 394, the application processor, and so on. The GPU is a microprocessor for image processing that connects the display 394 and the application processor; it performs mathematical and geometric computation for graphics rendering. The processor 310 may include one or more GPUs that execute program instructions to generate or change display information.
The display 394 displays images, video, and the like, and includes a display panel. The panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), and so on. In some embodiments, the electronic device 300 may include 1 or N displays 394, where N is a positive integer greater than 1.
The electronic device 300 can implement the shooting function through the ISP, the camera 393, the video codec, the GPU, the display 394, the application processor, and so on.
The ISP processes data fed back by the camera 393. For example, when taking a photo, the shutter opens, light passes through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP to be turned into a visible image. The ISP can also run algorithmic optimization on image noise, brightness, and skin tone, and can optimize exposure, color temperature, and other parameters of the shooting scene. In some embodiments, the ISP may be located in the camera 393.
The camera 393 captures still images or video. An object projects an optical image through the lens onto the photosensitive element, which may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and passes it to the ISP, which converts it into a digital image signal; the ISP outputs this to the DSP for processing, and the DSP converts it into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 300 may include 1 or N cameras 393, where N is a positive integer greater than 1. For example, taking the electronic device 300 being a mobile phone, in the embodiments of this application, while the user wears the head-worn device, one or more head images of the user wearing the head-worn device can be captured with the camera 393 of the phone.
The digital signal processor processes digital signals; besides digital image signals it can process other digital signals as well. For example, when the electronic device 300 selects a frequency point, the DSP performs a Fourier transform on the frequency-point energy.
The video codec compresses or decompresses digital video. The electronic device 300 may support one or more video codecs, allowing it to play or record video in multiple coding formats, for example moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer patterns between neurons in the human brain, it processes input information rapidly and can also learn continuously. Through the NPU, intelligent-cognition applications of the electronic device 300 such as image recognition, face recognition, speech recognition, and text understanding can be implemented.
In the embodiments of this application, the NPU or another processor may perform face detection, face tracking, face feature extraction, image clustering, and similar operations on face images in videos stored on the electronic device 300; perform face detection, face feature extraction, and the like on face images in pictures stored on the device; and cluster the stored pictures according to their face features together with the clustering results of the face images in the videos.
The external memory interface 320 may connect an external memory card, such as a Micro SD card, to extend the storage capacity of the electronic device 300. The external card communicates with the processor 310 through the interface 320 to implement data storage, for example saving music and video files on the external card.
The internal memory 321 may store computer-executable program code, which includes instructions. By running the instructions stored in the internal memory 321, the processor 310 executes the various functional applications and data processing of the electronic device 300. The internal memory 321 may include a program storage area and a data storage area: the former can store the operating system and the applications required by at least one function (such as sound playback and image playback), while the latter can store data created during use of the device (such as audio data and a phone book). For example, the internal memory 321 may store a 3D posture algorithm, so that once the electronic device 300 has obtained a head image of the user wearing the head-worn device, the processor 310 can process that image with the 3D posture algorithm to obtain the user's head posture, for example a posture angle.
In addition, the internal memory 321 may include high-speed random access memory and may also include nonvolatile memory, for example at least one magnetic disk storage device, a flash memory device, or universal flash storage (UFS).
The electronic device 300 can implement audio functions such as music playback and recording through the audio module 370, the receiver 370A, the microphone 370B, the headset jack 370C, the application processor, and so on.
The audio module 370 converts digital audio information into an analog audio output and converts analog audio input into a digital audio signal; it can also encode and decode audio signals. In some embodiments, the audio module 370 may be located in the processor 310, or some of its functional modules may be.
The receiver 370A, also called an "earpiece", converts an audio electrical signal into sound. When the electronic device 300 answers a call or voice message, the user can listen by bringing the receiver 370A close to the ear.
The microphone 370B, also called a "mic" or "voice tube", converts sound into an electrical signal. When making a call or sending a voice message, the user can speak close to the microphone 370B to feed the sound signal into it. The electronic device 300 may have at least one microphone 370B; in other embodiments it may have two, which besides capturing sound can implement noise reduction; in still others it may have three, four, or more microphones 370B, implementing sound capture, noise reduction, identification of the sound source, directional recording, and so on.
The headset jack 370C connects a wired headset. It may be the USB interface 330, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 380A senses pressure signals and can convert them into electrical signals. In some embodiments, it may be located on the display 394. There are many kinds of pressure sensor, such as resistive, inductive, and capacitive; a capacitive pressure sensor may comprise at least two parallel plates of conductive material, and when force acts on it the capacitance between the electrodes changes, from which the electronic device determines the pressure intensity. When a touch operation acts on the display 394, the electronic device detects its intensity through the pressure sensor 380A and can also compute the touch position from the sensor's detection signal. In some embodiments, touch operations at the same position but with different intensities may correspond to different instructions. For example: a touch on an image or file with intensity below a first pressure threshold marks it as selected, so the electronic device 300 executes the corresponding selection instruction; a touch on an application window with intensity at or above the first threshold that then moves across the display executes an instruction to drag the window; a touch on the SMS application icon below the first threshold executes the instruction to view messages, while one at or above it executes the instruction to compose a new message.
The fingerprint sensor 380B collects fingerprints. The electronic device can use the collected fingerprint characteristics to implement fingerprint unlocking, application-lock access, fingerprint-triggered photographing, fingerprint-based call answering, and so on.
The touch sensor 380C, also called a "touch device", may be located on the display 394; together they form a touchscreen. The touch sensor 380C detects touch operations on or near it and may pass the detected operation to the application processor to determine the touch event type; visual output related to the touch can be provided through the display 394. In other embodiments, the touch sensor 380C may be located on a surface of the electronic device, at a position different from the display 394.
The magnetic sensor 380D includes a Hall sensor.
The distance sensor 380E measures distance. The electronic device 300 can measure distance by infrared or laser; in some shooting scenarios, it can use the sensor 380E to range and thus focus quickly. For example, in the embodiments of this application, the electronic device 300 may use the distance sensor 380E to measure the gap between the user's head, or the head-worn device the user wears, and the neutral position displayed on the device's interface.
The proximity light sensor 380F may include, for example, a light-emitting diode (LED), which may be infrared, and a light detector such as a photodiode. The electronic device 300 emits infrared light outward through the LED and uses the photodiode to detect infrared light reflected from nearby objects: sufficient reflected light indicates an object near the device, insufficient reflected light that there is none. The device can use the sensor 380F to detect that the user is holding it against the ear during a call and automatically turn off the screen to save power. The sensor 380F can also be used in holster mode and pocket mode for automatic unlocking and locking.
The ambient light sensor 380G senses ambient brightness. The electronic device 300 can adaptively adjust the brightness of the display 394 accordingly; the sensor can also automatically adjust white balance when photographing and can cooperate with the proximity light sensor 380F to detect whether the device is in a pocket, preventing accidental touches.
The infrared sensor 380H, the ultrasonic sensor 380I, the electric field sensor 380J, and the like assist the electronic device 300 in recognizing air gestures.
The inertial sensor 380K may include a gyroscope and an accelerometer; for example, a gyroscope sensor is used to determine the motion posture and positional posture of the electronic device.
The keys 390 include a power key, volume keys, and so on, and may be mechanical or touch keys. The electronic device 300 can receive key input and generate key-signal input related to its user settings and function control.
The motor 391 can generate vibration alerts, for incoming calls or as touch vibration feedback. For example, touch operations on different applications (such as photographing or audio playback) may correspond to different vibration feedback effects, as may touches on different areas of the display 394; different scenarios (time reminders, received messages, alarms, games, and so on) may also correspond to different effects, and the touch vibration feedback can additionally be customized.
The indicator 392 may be an indicator lamp used to indicate charging state and battery change, or to indicate messages, missed calls, notifications, and so on.
The SIM card interface 395 connects a SIM card, which can be inserted into or removed from the interface 395 to contact or separate from the electronic device 300. The device may support 1 or N SIM card interfaces, where N is a positive integer greater than 1. The interface 395 may support Nano SIM, Micro SIM, and SIM cards; multiple cards, of the same or different types, can be inserted into the same interface 395 simultaneously, and the interface is also compatible with different types of SIM card and with external memory cards. The electronic device 300 interacts with the network through the SIM card to implement calls, data communication, and other functions. In some embodiments, the device uses an eSIM, i.e. an embedded SIM card, which can be embedded in the device and cannot be separated from it.
It should be understood that the structures of the first electronic device 100 and the head-worn second electronic device 200 shown in Fig. 1 may refer to the structure of the electronic device 300 shown in Fig. 2; specifically, they may include all of the hardware of the electronic device 300, some of it, or further hardware not listed above, which the embodiments of this application do not limit.
Fig. 3 shows a block diagram of the software structure of the electronic device 300 provided in an embodiment of this application. As shown in Fig. 3, the software may use a layered architecture: the software is divided into several layers, each with a clear role and division of labor, and the layers communicate through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer (FWK), the Android runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages. As shown in Fig. 3, it may include camera, settings, a skin module, a user interface (UI), third-party applications, and so on; the third-party applications may include WeChat, QQ, gallery, calendar, phone, maps, navigation, WLAN, Bluetooth, music, video, SMS, and so on.
The application framework layer provides an application programming interface (API) and a programming framework for the applications of the application layer, and may include some predefined functions. As shown in Fig. 3, it may include a window manager, content providers, a view system, a telephony manager, a resource manager, a notification manager, and so on.
The window manager manages window programs; it can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, and so on. Content providers store and retrieve data and make it accessible to applications; the data may include videos, images, audio, calls made and received, browsing history and bookmarks, the phone book, and so on.
The view system includes visual controls, such as controls that display text or pictures, and can be used to build applications. A display interface may be composed of one or more views; for example, an interface containing an SMS notification icon may include a text-display view and a picture-display view.
The telephony manager provides the communication functions of the electronic device 300, for example management of call states (connected, hung up, and so on).
The resource manager provides applications with various resources, such as localized strings, icons, pictures, layout files, and video files.
The notification manager allows applications to show notification information in the status bar; it can convey informational messages that disappear automatically after a short stay without user interaction, for example download-complete notices and message reminders. Notifications may also appear in the system's top status bar as charts or scrolling text, such as notifications of applications running in the background, or on screen as dialog windows; examples include text prompts in the status bar, alert sounds, device vibration, and blinking indicator lights.
The Android runtime includes core libraries and a virtual machine and is responsible for scheduling and management of the Android system.
The core libraries consist of two parts: function libraries that the Java language needs to call, and the Android core libraries. The application layer and the application framework layer run in the virtual machine, which executes their Java files as binaries and performs object lifecycle management, stack management, thread management, security and exception management, garbage collection, and other functions.
The system libraries may include multiple functional modules, for example a surface manager, media libraries, a 3D graphics library (such as OpenGL ES), and a 2D graphics engine (such as SGL).
The surface manager manages the display subsystem and provides fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of many common audio and video formats as well as still image files, and can support multiple audio/video coding formats such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG. The 3D graphics library implements 3D drawing, image rendering, compositing, layer processing, and so on, and the 2D graphics engine is a drawing engine for 2D drawing. The kernel layer lies between hardware and software and contains at least a display driver, a camera driver, an audio driver, and sensor drivers. The hardware layer may include various sensors.
For example, taking the electronic device 300 being a mobile phone, the phone's hardware layer includes the inertial measurement unit (IMU), the touch sensor, the camera driver, the display driver, and the other elements involved in the embodiments of this application.
Taking the electronic device 300 being a head-worn device such as smart glasses or a Bluetooth earphone, the hardware layer of the head-worn device includes the IMU and the other elements involved in the embodiments of this application.
Optionally, the hardware layer of the head-worn device may also involve a display driver.
The following exemplifies the software and hardware workflow of the mobile phone in conjunction with the method for determining a head posture of the embodiments of this application. As an example, after sensors in the hardware layer (for example, a gravity sensor and an inertial sensor) collect sensor data, they can send it through the kernel layer to the system libraries, which determine the phone's current device posture from the data; in some embodiments, the system library layer can determine the phone's posture angle in the geodetic coordinate system. In addition, after an image sensor in the hardware layer (for example, the front camera) collects image data, it can send the data through the kernel layer to the system libraries, which determine the posture angle of the user's face relative to the phone; finally, the phone determines the posture angle of the user's head in the geodetic coordinate system from the face-relative-to-phone angle and the phone's device posture angle.
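As a minimal sketch of this last combining step, assuming per-axis angles that simply add (the patent does not spell out the composition; the function and variable names are illustrative, not part of the embodiments):

```python
import math

def head_angle_in_world(face_angle_rel_phone_deg: float,
                        phone_attitude_deg: float) -> float:
    """Combine the face angle measured in the camera frame with the
    phone's own attitude angle from its inertial sensor to estimate the
    head angle in the geodetic frame. One-axis additive sketch only."""
    return face_angle_rel_phone_deg + phone_attitude_deg

# A face tilted 12 degrees relative to a phone that is itself tilted
# 3 degrees gives an estimated 15 degree head tilt in the world frame.
print(head_angle_in_world(12.0, 3.0))
```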
For ease of understanding, the method for determining a head posture provided in the embodiments of this application is first elaborated below, taking the first electronic device 100 as a mobile phone and the second electronic device 200 as a Bluetooth earphone, with reference to the drawings and application scenarios.
Fig. 4(a) is a schematic diagram of a sports-health application displayed on the phone. When the user launches it, the interface shown in Fig. 4(b) can be displayed to detect and correct the user's head posture. For example, with the user wearing the Bluetooth earphone and a communication connection established between the earphone and the phone, the interface in Fig. 4(b) shows the user's head postures measured at different times: the head was tilted left from 9:01 to 9:02, lowered from 9:30 to 9:35, and tilted right from 11:00 to 11:01. The interface also shows a head posture detection control 401, used to detect whether the user's current head posture is in the standard position. When the control 401 is triggered, the phone can enter the shooting interface shown in Fig. 4(c), prompting the user to capture a head image and obtaining the user's head image.
Optionally, while capturing the user's head image, the phone may also display a prompt such as "please keep your head still" or issue a voice prompt. Fig. 4(d) shows the head image the phone has captured; the interface in Fig. 4(d) further displays a head posture correction control 402, and the user can choose to trigger it to perform head posture correction, or return to the display interface of the sports-health application via a back control.
When the phone detects that the head posture correction control 402 has been triggered, it sends the Bluetooth earphone connected to it a request for the device posture parameter, which triggers the earphone to detect its own device posture parameter with its inertial sensor; upon receiving the request, the earphone detects its device posture parameter and reports it to the phone. At the same time, the phone obtains the head posture parameter of the user wearing the earphone: having obtained the user's head image, the phone processes the image to obtain the head posture parameter, and then, from this parameter together with the device posture parameter reported by the earphone for the same period, derives the user's corrected head posture parameter. Optionally, once the corrected head posture parameter is obtained, the phone may also display it.
Optionally, the interface shown in Fig. 4(b) may also display the number of times the user lowered the head in a recent period and the duration of each episode; alternatively, it may display the duration of the longest episode, or the head-down duration before the current moment.
Fig. 5 depicts the method for determining a head posture provided in an embodiment of this application. The method includes:
Step 501. The first electronic device obtains a first head posture parameter of the user.
As an example, the first head posture parameter may be the head's posture angle, or another parameter capable of reflecting the head posture.
The head's posture angle reflects the angle by which the user's head deviates from a reference coordinate system; in other words, that deviation angle can be regarded as the user's head posture. The reference coordinate system may be the world coordinate system, or a coordinate system based on the image capture apparatus (for example, the camera) of the first electronic device. The head posture may indicate, for example, that the head tilts left or right, that the user raises or lowers the head, or that the head turns left or right; optionally, it may also reflect the angle of the left/right tilt or of raising/lowering the head.
The world coordinate system is the absolute coordinate system of the system; the user's head posture is then the position and posture angle relative to the axes of the absolute system. The coordinate system based on the image capture apparatus of the first electronic device, also called the camera coordinate system, lets the position and posture angle of the user's head in the captured image be obtained through the camera of the first electronic device.
As an example, in the camera coordinate system, the first electronic device obtains from an image the first head posture of the user wearing the head-worn device: the user uses a first electronic device with an image capture apparatus (such as a mobile phone) to capture an image of the head wearing the head-worn device, and the head posture, or a parameter reflecting it, is then obtained from the image.
As an example, Fig. 6(a) shows, based on the camera coordinate system, the coordinate axis as the Y axis 602. When capturing the user's head image with the camera, the phone can track the head and neck by face recognition technology, calibrate the actual midline of the side of the head, and determine the head's posture angle 604 from the angle between that midline (line 603) and the Y axis 602.
As another example, Fig. 6(b), taking the user's front as an example and based on the camera coordinate system, shows the coordinate axes, namely the X axis 605 and the Y axis 606. It shows the angle between the head's midline (line 607) and the vertical axis (Y axis 606), which is the head's posture angle 608; taking the arrowed direction in the figure as right, the user's head can be seen to deviate to the right.
As an example, the obtained image is a side image of the user's head: the user photographs the side of the head with the phone. Notably, a side shot with the phone can be taken with another user's assistance, or by fixing the phone in place to complete the capture of the side head image. The head's posture angle 604 is determined as shown in Fig. 6(a).
As another example, the obtained image is a frontal image of the user's head, which the user photographs with the phone; since it is the front of the head, the capture can be completed through the phone's front camera. Fig. 6(b) shows the head's posture angle 608 determined from the image.
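A minimal sketch of this midline-versus-vertical-axis computation, assuming two tracked landmarks on the head midline in image coordinates (the landmark choice is an assumption; the embodiments only require a calibrated midline):

```python
import math

def head_axis_angle(p_top, p_bottom):
    """Angle in degrees between the head midline, defined by two tracked
    landmarks such as mid-forehead and chin (image coordinates, y down),
    and the vertical image axis. Positive means tilted toward +x."""
    dx = p_top[0] - p_bottom[0]
    dy = p_bottom[1] - p_top[1]  # flip y so that "up" is positive
    return math.degrees(math.atan2(dx, dy))

print(head_axis_angle((110.0, 40.0), (100.0, 140.0)))  # ~5.7 deg to the right
```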
Step 502. In the process of obtaining the first head posture parameter, a first device posture parameter of a target electronic device is obtained, the target electronic device being the second electronic device or the first electronic device.
Here, take the first electronic device as a mobile phone and the second electronic device as a head-worn device (such as a Bluetooth earphone or smart glasses). When the phone obtains the first head posture parameter, the phone may be tilted because the user is holding it, so the initial head posture parameter the phone obtains needs to be compensated with the phone's own device posture parameter to yield the first head posture parameter; in that case the target electronic device may be the first electronic device, i.e. the phone. When the phone corrects the first head posture parameter, it obtains the first device posture parameter of the head-worn device and combines it with the first head posture parameter obtained by the phone to derive the corrected user head posture parameter; in that case the target electronic device may be the second electronic device, i.e. the head-worn device.
As an example, the first device posture parameter may be the device's posture angle, or another parameter capable of reflecting the device posture.
It can be understood that a device posture parameter of an electronic device may be the angle by which the device deviates from a standard posture. The first electronic device may store the standard postures corresponding to different head-worn devices, or the head-worn device may store its own standard posture, so that when the head-worn device measures its first device posture parameter it can obtain its device posture from the corresponding standard posture.
Alternatively, a device posture parameter of an electronic device may be the angle of deviation from a specified coordinate system (for example, the world coordinate system).
As an example, optionally, the head-worn device has a standard posture angle relative to the user's head. As shown in Fig. 7(a), taking the head-worn device as a Bluetooth earphone, the earphone has a standard posture relative to the head. Optionally, when the earphone connects to the phone, the phone reads the earphone's standard posture image and stores this specification image; notably, after the first connection between earphone and phone, the standard posture image is stored in the phone, and on each subsequent connection the phone calls it up directly. The dashed part in Fig. 7(a) is the earphone's standard posture image 701. When the earphone is actually worn, however, its actual posture 702 deviates from the standard posture image 701; the angle 703 by which the worn earphone deviates from the standard posture 701 can be regarded as the earphone's device posture angle, which the phone obtains from the captured side image of the user's head.
As another example, as shown in Fig. 7(b), taking the head-worn device as smart glasses, the device posture angle of the glasses is obtained from a head image of the user wearing them captured from the side. The temple of the glasses has a standard posture 704 relative to the user's head (the dashed part in the figure). When the glasses are actually worn, taking the side view as an example, the temple's actual posture 705 (the solid part) deviates from the standard posture 704, and the angle 706 of that deviation can serve as the glasses' device posture; the phone therefore obtains the glasses' device posture angle from the captured side image of the user's head.
Compared with a Bluetooth earphone, smart glasses offer two angles from which to obtain the device posture: besides the side view, the device posture of the glasses can also be obtained from a frontal image of the user, namely from the glasses' frame. Fig. 7(c) is a head image of the user wearing the glasses captured from the front; the frame has a standard posture 707 relative to the user's head (the dashed part in the figure). When the glasses are actually worn, the frame's actual posture 708 (the solid part) deviates from the standard posture 707, and the angle 709 of that deviation can serve as the glasses' device posture; the phone therefore obtains the glasses' device posture angle from the captured frontal image of the user's head.
When the second electronic device is a device other than the head-worn device, the standard posture of the head-worn device held by the second electronic device may have been obtained from the head-worn device itself or from a server, which the embodiments of this application do not limit.
In one embodiment of this application, once the user has put on the head-worn device, a head image is captured with a first electronic device that has an image capture apparatus; while the first electronic device obtains the first head posture parameter, it also obtains an image of the head-worn device and computes the head-worn device's first device posture parameter.
The first head posture parameter and the head-worn device's first device posture parameter are parameters from the same time period; that is, their time attributes are the same. This guarantees that the data used to correct the user's first head posture parameter were collected in the same period. For example, the two may be data collected at the same moment: the first head posture parameter collected at 10:10:52 and the first device posture parameter likewise at 10:10:52. Since over a short span, absent any large adjustment of the device, the device posture varies little even if the user's head moves, the collection moments of the two parameters may also fall within a preset error range: for example, the first head posture parameter collected at 10:10:52 and the first device posture parameter at 10:10:53.
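The pairing of samples within such an error range could look like the sketch below; the 1-second tolerance mirrors the 10:10:52 / 10:10:53 example, and the exact threshold and data layout are assumptions:

```python
def pair_measurements(head_samples, device_samples, max_skew_s=1.0):
    """Pair each head-posture sample (timestamp_s, value) with the
    device-posture sample closest in time, keeping only pairs whose
    timestamps differ by at most max_skew_s seconds."""
    pairs = []
    for th, head in head_samples:
        td, dev = min(device_samples, key=lambda s: abs(s[0] - th))
        if abs(td - th) <= max_skew_s:
            pairs.append((head, dev))
    return pairs

print(pair_measurements([(52.0, 18.0)], [(53.0, 6.0), (80.0, 5.0)]))
```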
It can be understood that, for the first electronic device, when obtaining the user's first head posture parameter and the first device posture parameter it can also obtain the time information corresponding to the first head posture parameter and the time information corresponding to the first device posture parameter.
Step 503. The electronic device obtains a target head posture parameter from the first head posture parameter and the first device posture parameter; the target head posture parameter is the user's corrected head posture parameter.
As an example, step 503 can be implemented as follows: the first electronic device obtains a posture parameter difference from the first head posture parameter and the first device posture parameter, then updates the first head posture parameter with that difference to obtain the target head posture parameter for when the user wears the head-worn device. For example, the first electronic device obtains the posture difference by the formula D = Ah - Ad, where Ah is the first head posture parameter, Ad the first device posture parameter, and D the posture parameter difference.
As an example, the first electronic device updating the first head posture parameter with the posture parameter difference to obtain the target head posture parameter specifically means: the first electronic device adds the posture parameter difference to the first head posture parameter, for example by the formula A = Ah + D, where A is the target head posture parameter.
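A direct sketch of the two formulas exactly as the text gives them (function and variable names are illustrative):

```python
def corrected_head_posture(ah: float, ad: float) -> float:
    """Apply the correction as stated: D = Ah - Ad, then A = Ah + D.
    All values are posture angles in degrees."""
    d = ah - ad   # posture parameter difference D
    return ah + d  # target head posture parameter A

print(corrected_head_posture(ah=18.0, ad=6.0))  # 30.0
```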
Optionally, after obtaining the target head posture parameter, the first electronic device can determine the user's actual head posture from it, for example tilted 20° to the left, deviated 10° to the right, or head lowered.
Because different users' heads differ in form, the posture of the head-worn device relative to the head varies considerably. The solution of this application corrects the user's first head posture parameter using the head-worn device's first device posture parameter and the user's first head posture parameter obtained while the device is worn, so that the head posture parameter of a user wearing the head-worn device is corrected in real time, yielding a target head posture parameter closer to the user's true head posture. Correcting the first head posture parameter keeps differences in users' heads and in their habits of wearing the head-worn device from introducing large errors, and also makes subsequent applications that run on the head posture more accurate.
In a possible embodiment of this application, after obtaining the target head posture parameter, the first electronic device may also determine from it whether the user is in a head-down state; if so, and the head-down time exceeds a preset duration, the first electronic device may prompt the user to adjust the head posture, for example to raise the head. Alternatively, when the user's target head posture parameter indicates that the head is deviated, the first electronic device may prompt the user to adjust the head posture, for example reminding the user to shift the head to the left so that it is in the neutral position. The embodiments of this application do not limit this.
In a possible embodiment of this application, optionally, before step 501 the method provided in the embodiments may further include: when the first electronic device determines to detect the user's head posture, the first electronic device displays prompt information indicating whether to correct the user's head posture parameter. If it detects user-triggered indication information directing correction of the head posture, the first electronic device performs steps 501 to 503; if it detects user-triggered indication information directing that no correction is needed, it takes the first head posture obtained in step 501 as the user's target head posture. For example, the first electronic device has a head posture detection control, and when that control is detected as triggered, the device determines to detect the user's head posture.
In a possible embodiment of this application, after the first electronic device obtains the target head posture, the method may further include: the first electronic device feeds the target head posture back to a target device, or to a target application running on the first electronic device that needs to use the first head posture.
It can be understood that the target device is a device that needs to use the target head posture; it may be the head-worn device, the phone, or a device other than the head-worn device or the phone, which the embodiments of this application do not limit.
Alternatively, in a possible embodiment of this application, after the first electronic device obtains the target head posture parameter, the method may further include: the first electronic device determines, from the target head posture parameter, the user's head-down count and head-down time within a target time period (for example, one day, five minutes, or two minutes).
For example, the head posture can be applied in many areas. In cervical-spine health applications, a head-worn device (for example, smart glasses) obtains the target head posture parameter and can record the user's daily head-down count and head-down time, and, through another smart wearable (for example, a smart band) combined with the user's physiological parameters, provide cervical-health reminders. It can also be applied in motion-sensing applications such as motion-sensing games, where the user, with the head-worn device, interacts by adjusting head movements to control in-game operations; a correct head posture parameter improves the responsiveness of such games.
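A sketch of such head-down bookkeeping from a stream of corrected pitch angles; the 30-degree threshold is an assumed value, since the embodiments leave the head-down criterion unspecified:

```python
def head_down_stats(pitch_series, down_threshold_deg=30.0):
    """Count head-down episodes and their durations from a sequence of
    (timestamp_s, corrected_pitch_deg) samples, pitch positive downward."""
    episode_durations, start = [], None
    for t, pitch in pitch_series:
        if pitch >= down_threshold_deg and start is None:
            start = t                      # episode begins
        elif pitch < down_threshold_deg and start is not None:
            episode_durations.append(t - start)
            start = None                   # episode ends
    if start is not None:                  # still head-down at end of data
        episode_durations.append(pitch_series[-1][0] - start)
    return len(episode_durations), episode_durations
```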
The following describes, from different aspects, how the first electronic device obtains the first head posture parameter:
(1) The process in which the first electronic device determines the first head posture parameter from a head image.
In a possible implementation of this application, step 501 can be implemented as follows: the first electronic device obtains a head image of the user while the user wears the head-worn device, and from it obtains the first head posture parameter for when the user wears the head-worn device.
For example, taking the first electronic device as a mobile phone: the phone can photograph the user's head while the head-worn device is worn. A phone usually has an image capture component (such as a camera), so with user A wearing the head-worn device, user B can use the phone to photograph user A wearing it.
Taking the first electronic device as a phone with an image capture component (such as a camera), step 501 can be implemented as follows: the phone controls the image capture component to capture an image of the user wearing the head-worn device, the image including at least the user's head; the phone then processes the head image to obtain the first head posture parameter for when the user wears the head-worn device.
In a possible embodiment of this application, the phone holds a 3D posture algorithm and can process the head image with it to obtain the first head posture parameter for when the user wears the head-worn device. To improve the accuracy of the head posture the phone determines from the head image, the phone can obtain head images captured from multiple angles: for example, a frontal head image of the user wearing the head-worn device plus one or more side head images from different angles. The phone then processes each head image with the 3D posture algorithm to obtain the head posture parameter reflected by each, and derives the first head posture parameter from the per-image parameters, for example by averaging them. For instance, when the phone prompts for the frontal head image, the user points the phone at the front of the face; when it prompts for the left-side head image, the user points the phone at the left side to capture the side head image. Understandably, during the frontal and side captures, the phone may also prompt the user to keep the current head posture unchanged.
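A minimal sketch of the multi-view fusion step, assuming the simple averaging the text suggests (a robust estimator such as the median would be a natural variant):

```python
def fuse_multiview_poses(per_image_angles_deg):
    """Fuse head posture angles computed from the frontal image and one
    or more side images by averaging, yielding one first parameter."""
    return sum(per_image_angles_deg) / len(per_image_angles_deg)

print(fuse_multiview_poses([14.0, 16.5, 15.1]))  # ~15.2
```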
Optionally, when the image the phone obtains is a full-body image of the user, the method provided in the embodiments of this application may further include: the phone extracts the user's head image from the full-body image.
It can be understood that, taking the head-worn device as a Bluetooth earphone, if the user wears the earphone on the left ear, the phone can photograph a head image showing the left ear wearing the earphone. As shown in Fig. 7(c), taking the head-worn device as smart glasses, the head image may be one of the user wearing the glasses.
In a possible embodiment of this application, the user can capture the head image directly through the phone's built-in camera application and then upload the captured image to the application software to obtain the first head posture parameter and the first device posture parameter.
For example, taking the head-worn device as smart glasses and user A photographing user A's own head image: with user A wearing the glasses, user A taps the head posture correction control 402 shown in Fig. 4(d), triggering the phone to enter the shooting interface shown in Fig. 8(a). There, user A aims the phone at user A wearing the glasses and can then trigger control 801 to input a shooting instruction; correspondingly, on detecting the instruction, the phone captures, through its camera, the head image shown in Fig. 8(b). Optionally, as in Fig. 8(b), when displaying the head image the phone may also display a "Retake" control 802 and a "Confirm" control 803: if "Confirm" 803 is triggered, the phone determines the first head posture parameter from the captured image; if "Retake" 802 is triggered, the phone re-enters the interface of Fig. 8(a) and prompts the user to complete head-image capture within a preset duration (for example, 10 seconds).
In a possible embodiment of this application, after capturing the image the phone may also feed it back to a server, so that the server processes the image to obtain the first head posture parameter for when the user wears the head-worn device; the server can then return that parameter to the phone.
It is worth noting that the head image may also be captured by the phone at user B's triggering; the embodiments of this application do not limit this.
When the first electronic device is a phone, besides photographing the user wearing the head-worn device itself, the phone may also obtain such an image from another device with image capture capability, such as another phone; the embodiments of this application do not limit this.
Optionally, when the second electronic device is the head-worn device or another wearable such as a band, the head-worn device or other wearable can obtain the phone-captured image from the phone: the phone may feed the image back so that the head-worn device or other wearable computes the first head posture parameter, or feed back the first head posture parameter the phone computed from the image. The embodiments of this application do not limit this.
In a possible embodiment of this application, because the user holds the phone while photographing, changes in the phone's own posture are unavoidable, for example tilting, which would make the head posture angle the phone computes from the captured head image inaccurate. Therefore, in this embodiment, after the phone obtains the user's head image and computes an initial head posture parameter, the phone detects its own device posture parameter with its inertial sensor and sends it to the phone's processor, which compensates the initial head posture with the phone's device posture parameter, finally yielding a compensated head posture parameter, i.e. the first head posture parameter.
(2) The process in which the first electronic device obtains the first head posture parameter from another device.
Taking the second electronic device as the head-worn device, step 501 can be implemented as follows: the first electronic device obtains from another device (for example, a phone) the first head posture parameter for when the user wears the head-worn device; or the other device captures the image of the user wearing the head-worn device and feeds it back to the head-worn device, which processes the image to obtain the first head posture parameter. Understandably, when the head-worn device performs the above method, it can obtain first information from the phone, the first information being used to determine the first head posture parameter for when the user wears the head-worn device. For example, the first information may be the first head posture parameter the phone determined from the captured image and provided to the head-worn device, or the image the phone captured of the user wearing the head-worn device; the embodiments of this application do not limit this.
It can be understood that when the second electronic device is the head-worn device and obtains from another device the first head posture parameter, or the aforementioned image, the head-worn device needs to establish a wireless communication connection with that device, for example a Bluetooth connection; the embodiments of this application do not limit this.
Optionally, the head-worn device has a first control; when it is triggered, the head-worn device determines that the user's first head posture parameter needs to be corrected. Alternatively, the phone runs an application corresponding to the head-worn device, whose interface is shown in Fig. 9, and the user can tap the "correction control" on that interface to trigger the head-worn device to determine that the user's first head posture parameter needs to be corrected.
Alternatively, the first head posture parameter may also be collected by the head-worn device with its own sensors.
The above described how the first electronic device obtains the first head posture parameter; the following describes how the first electronic device obtains the first device posture parameter of the second electronic device.
Taking the first electronic device as a phone with an image capture component (such as a camera), step 502 can be implemented as follows: the phone obtains a head image of the user wearing the head-worn device and processes the head image to determine the head-worn device's first device posture parameter.
As an example, the first electronic device obtaining, from the device that captured the head image, the head-worn device's first device posture parameter while worn can be implemented in a first manner: taking the first electronic device as a phone, the phone is in communication connection with the head-worn device, whose basic attributes are stored on the phone, the head-worn device's contour/outline image being preset among its parameters; the phone can thus obtain the contour image of the connected head-worn device. From the contour image of the head-worn device and the actual position of the head-worn device in the captured image of the user wearing it, the head-worn device's first device posture parameter can be obtained.
For example, taking the head-worn device as a Bluetooth earphone, as shown in Fig. 10(a): when an image of the user's head wearing the earphone is captured, the preset earphone contour image appears on the phone display. The earphone as actually worn may not coincide with the preset contour, which indicates that a device posture angle exists; the phone's processor computes the deviation of the earphone from the preset contour by an algorithm, which may be an image-tracking technique and is not limited here. In the image of the worn earphone, the contour image can be rotated into coincidence with the actually worn earphone, and the rotation angle is the earphone's device posture angle, i.e. the head-worn device's first device posture parameter.
Optionally, before the device posture parameter is determined, the method provided in the embodiments of this application may further include: the user sets on the phone which component of the head-worn device is used to determine its first device posture parameter.
For example, taking the head-worn device as smart glasses, the user selects the posture angle of the temple as the glasses' first device posture parameter. As shown in Fig. 10(b), when an image of the head wearing the glasses is captured, the preset contour image of the temple appears on the phone display; in the obtained image of the worn glasses, the temple's contour image can be rotated into coincidence with the actually worn temple 1002, and the rotation angle is the glasses' device posture angle, i.e. the head-worn device's first device posture parameter.
As an example, the first device posture parameter can also be obtained in a second manner: taking the first electronic device as a phone, the phone's processor takes a preset standard line of the head-worn device as reference; in the obtained image of the worn device, the preset standard line can be rotated into coincidence with the actual standard line of the head-worn device, and the rotation angle is the head-worn device's device posture angle 1001, i.e. its first device posture parameter.
For example, taking the head-worn device as a Bluetooth earphone, as shown in Fig. 10(c): when an image of the head wearing the earphone is captured, a preset standard line 1005 appears on the phone display, the line 1005 taking the edge of the earphone's stem frame as its standard (the solid line in the figure). In the obtained image of the worn earphone, the preset standard line 1005 can be rotated into coincidence with the actual standard line 1004 (the dashed line), and the rotation angle is the earphone's device posture angle 1003, i.e. the head-worn device's first device posture parameter.
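A minimal sketch of this second manner, assuming both lines are available as pairs of endpoints from image detection (the detection itself is outside the sketch):

```python
import math

def line_angle_deg(p1, p2):
    # orientation of the segment p1 -> p2 in image coordinates
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def device_posture_from_lines(preset_line, detected_line):
    """Rotation that brings the preset standard line onto the line
    detected on the worn device; taken as the device posture angle."""
    return line_angle_deg(*detected_line) - line_angle_deg(*preset_line)

print(device_posture_from_lines(((0, 0), (0, 10)), ((0, 0), (2, 10))))
```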
Taking the first electronic device as a phone with an image capture component (such as a camera), step 502 can also be implemented as follows: the phone obtains from the head-worn device the first device posture parameter collected by the head-worn device itself while the user wears it.
As an example, the phone obtaining from the head-worn device its first device posture parameter while worn can be implemented as follows: the phone triggers the head-worn device to report to the phone the first device posture parameter of the head-worn device while the user wears it.
For example, the phone runs the interface shown in Fig. 11. When the head posture needs to be corrected, the user can tap the "Bluetooth device posture" control 1101; when the control 1101 is triggered, the phone sends the head-worn device, through their wireless communication connection, an instruction to query the first device posture parameter. In response to the query instruction, the head-worn device collects its first device posture parameter with its own sensors and then reports the collected parameter to the phone.
The above takes the phone triggering the head-worn device to report its first device posture parameter as the example; in practice, the head-worn device may also report the parameter to the phone proactively. For instance, when the head-worn device detects that the user is wearing it, it can collect its first device posture parameter periodically or in real time and then send the collected parameter to the phone. In other words, while the user wears the head-worn device, it can collect the first device posture parameter on a preset cycle or in real time, and then feed the parameter back to the phone periodically, under the phone's triggering, or after each collection; the embodiments of this application do not limit this.
In a possible embodiment of this application, with the phone and the head-worn device in communication connection and the user wearing the head-worn device, the phone can periodically obtain from the head-worn device the first device posture parameter the head-worn device has collected.
Optionally, taking a head-worn device equipped with a sensor (such as an IMU) for measuring the first device posture parameter as an example: when the head-worn device detects that the user is wearing it, it controls the IMU to obtain the first device posture parameter of the device while worn; alternatively, the head-worn device may determine that parameter through internal three-axis gravity-distribution computation, or through fusion of three-axis gravity-distribution computation with the gyroscope, and so on.
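The static part of that computation can be sketched as below; the axis convention and g-units are assumptions, and a full implementation would fuse in the gyroscope for dynamic motion:

```python
import math

def pitch_roll_from_gravity(ax, ay, az):
    """Device pitch and roll in degrees from the three-axis gravity
    distribution reported by the accelerometer (device at rest)."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

print(pitch_roll_from_gravity(0.17, 0.0, 9.8))  # ~-1 deg pitch, 0 deg roll
```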
To reduce the head-worn device's power consumption, in the embodiments of this application the head-worn device may send the first device posture parameter to the phone only upon the phone's triggering, or send the newly collected first device posture parameter only when it detects that the parameter has changed; the embodiments of this application do not limit this.
Before the phone photographs the user wearing the head-worn device, the phone cannot know whether the user has actually put the head into the neutral position as required; likewise, the user cannot be sure of having assumed a standard neutral position, and for many users with postural problems the self-perceived neutral position is in fact skewed. If the head image the phone captures is not one taken with the user's head in the neutral position, the first head posture parameter later computed from that image may also be inaccurate. Therefore, to improve the precision of the head posture computation, before the phone photographs the user wearing the head-worn device, the method provided in the embodiments of this application may further include: the phone detects whether the user's head is in the neutral position. The phone's camera calibrates a neutral position according to the camera's own reference coordinates; while photographing the user's head, the phone's processor tracks the head and compares it against the calibrated neutral position, thereby detecting whether the head is neutral. If the user's head is not in the neutral position, the phone outputs prompt information that prompts the user to adjust the head to the neutral position.
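A trivial sketch of the neutrality check, assuming a tolerance of 3 degrees (an assumed value; the embodiments do not fix one):

```python
def is_neutral(head_angle_deg, calibrated_neutral_deg=0.0, tol_deg=3.0):
    """True when the tracked head angle lies within tolerance of the
    neutral position the camera calibrated in its own reference frame."""
    return abs(head_angle_deg - calibrated_neutral_deg) <= tol_deg
```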
As an example, the prompt information may be textual, for example "please adjust the head image to the neutral position on the interface", or a voice prompt; the embodiments of this application do not limit this, nor the specific output manner of the prompt: it may be voice output, vibration output, indicator-light output, or a specific sound (a buzz, particular music, a long tone, and so on). When the output form is voice, this embodiment does not limit the specific content, as long as it reminds the user to adjust the head position to the neutral position; for example, the voice content may include the head adjustment amplitude, the device adjustment amplitude, and so on.
In a possible embodiment of this application, the method further includes: when the user's head-worn device is not in the neutral position, the phone displays visual guidance on its interface, the visual guidance directing the user to adjust the head-worn device to the neutral position. For example, the visual guidance may be the difference of the user's head-worn device relative to the neutral position, or prompt information indicating in which direction and by how much the user should move the head-worn device; the embodiments of this application do not limit this.
The visual guidance displays the standard position on the display interface using the smart-glasses data built into the application software, and dynamically tracks the glasses the user wears through the camera, showing the glasses' position in real time.
The neutral position is a reference-frame coordinate taking the coordinate system of the first electronic device's image capture apparatus (such as the phone camera) as standard, combined with the head-worn device's preset standard line or preset contour image.
As shown in Fig. 12(a), before the user triggers the phone to obtain the head image of the user wearing the head-worn device, the user triggers the phone to display the shooting interface 1201 of Fig. 12(a) and then aims the phone camera at the user wearing the smart glasses. The wearer may take a selfie with the phone's front camera, or have another user shoot with the rear camera; phones have front and rear cameras, generally used for selfies and normal shooting respectively, and since the purpose here is obtaining a head image, the camera used is not limited. As shown in Fig. 12(b), once the phone is aimed at the user, the shooting interface can display the line 1202 shown there, which is used to judge whether the head-worn device worn by the user is in the designated position (i.e. the neutral position). Optionally, the figure may also display a line 1203, the actual position of the user's head-worn device at the moment; the user can then compare lines 1202 and 1203 to determine whether the head-worn device is currently in the neutral position.
Specifically, when the user's head-worn device is not in the neutral position, the phone can output a voice prompt, for example asking the user to move the head-worn device so that it sits in the neutral position; the user can then adjust the head-worn device so that it is in the neutral position, as shown in Fig. 12(d).
Optionally, besides displaying the neutral-position line 1202, the interface of Fig. 12(b) can do more. When the smart glasses on the user's head are not in the neutral position, whether the phone's shooting position is adjusted or the photographed user adjusts the glasses, the aim is to bring the user as close to neutral as possible, and neither the glasses nor the phone reaches the neutral position in a single move. The phone can therefore obtain in real time the difference between the position of the user's glasses and the neutral position and mark, in real time on the interface, the difference between the device posture angle and the neutral position, as shown in Fig. 12(c), thereby guiding the user to square up to the neutral position as far as possible.
Specifically, in Fig. 12(d), with the user's head in the neutral position, the user can tap the "shooting control" 1204 shown there to trigger the phone to capture the head image while the user's head is in the neutral position; alternatively, when the phone detects that the user's head position is neutral, it can automatically trigger a shooting instruction to capture the user's head image.
Optionally, as shown in Fig. 13(a), before the user triggers the phone to obtain the head image of the user wearing the head-worn device, the user triggers the phone to display the shooting interface 1301 of Fig. 13(a), which displays a line 1303 representing the neutral position; optionally, the interface 1301 may also display prompt information 1302 asking the user to keep the head in the neutral position during image capture. As shown in Fig. 13(b), if during head-image shooting the phone detects the user's head deviating from the line 1303, it can display on the shooting interface prompt information reflecting the distance difference, helping the photographer promptly remind the subject to adjust the head position so that it is neutral, as shown in Fig. 13(c).
In a possible embodiment of this application, when the phone photographs the user wearing the head-worn device in order to compute the user's head posture, the first head posture parameter can be computed inaccurately because of angle changes of the capturing device itself (for example, tilt). Therefore, in the embodiments of this application, when the phone determines the head posture from the captured image, it can first compute the first head posture parameter from the image, then obtain the phone's device posture at the moment the image was taken, and then correct the first head posture parameter with the phone's first device posture parameter to obtain the target head posture for when the user wears the head-worn device. Specifically, with Ah' denoting the first head posture parameter the phone computes from the captured image and Ap the phone's device posture parameter, the phone's corrected head posture parameter is Ah = Ah' - Ap.
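A direct sketch of this compensation formula as given (names are illustrative):

```python
def compensate_phone_tilt(ah_prime: float, ap: float) -> float:
    """Ah = Ah' - Ap: subtract the phone's own device posture angle Ap,
    measured while the photo was taken, from the head posture angle Ah'
    computed from that photo."""
    return ah_prime - ap

print(compensate_phone_tilt(ah_prime=17.0, ap=2.0))  # 15.0
```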
As an example, the phone has an IMU sensor, which can collect the phone's first device posture parameter in real time and report it to the phone's processor; alternatively, in a scenario where the phone detects that the user's head posture needs to be corrected, the phone triggers its IMU sensor to detect the phone's device posture parameter. The embodiments of this application do not limit this.
In a possible embodiment of this application, when the phone collects the user's head image with its camera, it can capture multiple images of the user in the same posture from different angles. For example, with the user wearing smart glasses, the photographer can use the phone to shoot a frontal image and an image of each side of the user wearing the glasses; the phone can then use the captured frontal image and each side image to compute the user's head posture reflected in each image, and derive the user's final first head posture parameter from the per-image head postures. Alternatively, the phone uses each image to compute the device posture of the glasses reflected in that image, and so obtains the final first device posture parameter of the glasses.
Notably, with the user wearing smart glasses: a frontal image of the user wearing the glasses, i.e. an image showing the glasses' frame, yields one first device posture parameter; a side image, i.e. an image showing a temple, yields another first device posture parameter. Each of the two first device posture parameters can be used on its own as the glasses' device posture.
In a possible embodiment of this application, the second electronic device includes a first part and a second part, and obtaining the second electronic device's first device posture parameter includes: obtaining the device posture parameter of the first part and the device posture parameter of the second part, and determining the second electronic device's first device posture parameter from the device posture parameters of the first part and the second part.
In a possible embodiment of this application, obtaining the device posture parameters of the first part and of the second part includes: obtaining a second image and a third image, the second image being a head image of the user wearing the first part and the third image a head image of the user wearing the second part; determining the first part's device posture parameter from the second image; and determining the second part's device posture parameter from the third image.
Here, take the second electronic device as the head-worn device. When the head-worn device is a Bluetooth earphone, the first part is the left earbud and the second part the right earbud; the second image is a left-side head image of the user wearing the left earbud, and the third a right-side head image of the user wearing the right earbud. When the head-worn device is smart glasses, the first part is the left temple and the second part the right temple; the second image is a left-side head image of the user wearing the glasses, and the third a right-side head image.
In a possible embodiment of this application, a head-worn device typically includes a first part and a second part. To measure the head-worn device's first device posture parameter accurately when the two parts are worn at different positions on the head, for example the first part on the left ear and the second part on the right ear, the device posture parameters of the first and second parts are computed separately, and the first device posture parameter of the whole head-worn device is computed from the two using a preset algorithm.
For example, taking the head-worn device as smart glasses: the first part of the glasses is the left temple, the second part the right temple, and the preset algorithm an average. The photographer shoots images of the user's sides; the left-side image yields a left-temple device posture angle of 20° and the right-side image a right-temple device posture angle of 10°, and the phone processor then averages the two results: (20° + 10°) / 2 = 15°, so the glasses' final device posture angle is 15°.
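This averaging step, reproducing the worked example, can be sketched as:

```python
def device_angle_from_sides(left_deg: float, right_deg: float) -> float:
    """Average the left and right temple angles into one device posture
    angle for the whole pair of glasses: (20 + 10) / 2 = 15 degrees."""
    return (left_deg + right_deg) / 2.0

print(device_angle_from_sides(20.0, 10.0))  # 15.0
```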
In a possible implementation of this embodiment, the user can choose whether to accept the computed two-side device posture parameters; a selection dialog 1401 appears on the interface, as in Fig. 14(a). If the user selects "No", the phone does not proceed to the average computation but instead issues prompt information 1402 prompting the user to adjust the device, as in Fig. 14(b); the user can adjust the temples and then shoot the image again. If the user selects "Yes", the phone proceeds with the average computation and obtains the final device posture.
In a possible embodiment of this application, a head-worn device typically includes a first part and a second part, each provided with an IMU. When the two parts are worn at different positions on the head, for example the first part on the left ear and the second part on the right ear, the head-worn device can obtain the device posture parameters of the first and second parts through the IMUs in them.
For example, taking the head-worn device as smart glasses with the left temple as the first part and the right temple as the second part, an IMU is provided in each temple and the glasses are in communication connection with the electronic device. The user photographs the head with an electronic device such as a phone; the IMUs obtain the device posture parameters of the left and right temples and transmit them to the phone, which computes the first head posture parameter from the captured head image and corrects it using the two device posture parameters.
In a possible embodiment of this application, the first electronic device can select among the IMUs in the first and second parts of the head-worn device, and the head-worn device can, per the first electronic device's instruction, measure its device posture with the specific IMU the first electronic device indicates.
For example, taking the head-worn device as a Bluetooth earphone with the left earbud as the first part and the right earbud as the second part, an IMU is provided in each earbud. When the IMUs transmit the obtained device postures of the two parts to the electronic device, the first electronic device presents indication information letting the user choose whether the device posture parameters obtained from the first part, the second part, or both are used to correct the first head posture parameter. Taking the first electronic device as a phone: when the IMUs in both earbuds transmit their collected device posture parameters to the phone, the phone displays the interface shown in Fig. 15. The user can select the left earbud's IMU data by triggering control 1501, the right earbud's IMU data by triggering control 1502, or both at once by enabling controls 1501 and 1502 together. If the data of both earbuds are selected, the phone processes the two values with a preset algorithm to obtain a single first device posture parameter, as sketched below.
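A sketch of that source selection, with averaging standing in for the unspecified preset algorithm (an assumption):

```python
def select_device_posture(left_deg=None, right_deg=None):
    """Combine per-earbud IMU readings according to which sources the
    user enabled in the UI: left only, right only, or both (averaged)."""
    chosen = [a for a in (left_deg, right_deg) if a is not None]
    if not chosen:
        raise ValueError("no IMU source selected")
    return sum(chosen) / len(chosen)

print(select_device_posture(left_deg=8.0, right_deg=4.0))  # 6.0
```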
In the embodiments above, each description has its own emphasis; for a part not detailed or recorded in one embodiment, refer to the related descriptions of the other embodiments.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the particular application and the design constraints of the technical solution. Skilled practitioners may implement the described functions differently for each particular application, but such implementations should not be considered beyond the scope of this application.
In the embodiments provided in this application, it should be understood that the disclosed apparatus/computer device and method may be implemented in other ways. For example, the apparatus/computer-device embodiments described above are merely illustrative: the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Furthermore, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or of other forms.
The embodiments above are intended only to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the spirit and scope of the technical solutions of the embodiments of this application, and all shall fall within the protection scope of this application.
Finally, it should be noted that the above are only specific implementations of this application, but the protection scope of this application is not limited thereto; any change or replacement within the technical scope disclosed by this application shall be covered by the protection scope of this application. The protection scope of this application shall therefore be subject to the protection scope of the claims.

Claims (15)

  1. A method for determining a head posture, applied to a first electronic device, the method comprising:
    obtaining a first head posture parameter of a user;
    in the process of obtaining the first head posture parameter, obtaining a first device posture parameter of a target electronic device, the target electronic device being a second electronic device or the first electronic device;
    obtaining a target head posture parameter from the first head posture parameter and the first device posture parameter, the target head posture parameter being the corrected head posture parameter of the user.
  2. The method according to claim 1, wherein obtaining the first head posture parameter of the user comprises:
    obtaining a head image of the user;
    obtaining the first head posture parameter of the user from the head image of the user.
  3. The method according to claim 2, wherein the target electronic device is the second electronic device, the head image of the user is captured by the first electronic device, and the first electronic device further comprises a first sensor, the method further comprising:
    obtaining, through the first sensor, a second device posture parameter of the first electronic device within a first time period, the first time period being the period during which the first electronic device captures the head image of the user;
    wherein obtaining the first head posture parameter of the user from the head image of the user comprises:
    obtaining an initial head posture parameter from the head image of the user;
    obtaining the first head posture parameter from the initial head posture parameter and the second device posture parameter.
  4. The method according to claim 2, wherein obtaining the head image of the user comprises:
    when a trigger condition for detecting head posture parameters is met, triggering a third electronic device to capture the head image of the user, and obtaining from the third electronic device the head image of the user captured by the third electronic device.
  5. The method according to any one of claims 1 to 4, wherein the second electronic device is a head-worn device, and obtaining the first device posture parameter of the second electronic device comprises:
    obtaining a first image of the user, the first image being a head image of the user while wearing the head-worn device;
    determining the first device posture parameter of the second electronic device from the first image.
  6. The method according to any one of claims 1 to 4, wherein the second electronic device is a head-worn device having a second sensor for collecting the first device posture parameter of the second electronic device, and obtaining the first device posture parameter of the second electronic device comprises:
    receiving the first device posture parameter from the second electronic device.
  7. The method according to claim 6, wherein before receiving the first device posture parameter from the second electronic device, the method further comprises:
    triggering the second electronic device to collect the first device posture parameter of the second electronic device.
  8. The method according to any one of claims 1 to 4, wherein the target electronic device is the second electronic device, the second electronic device comprises a first part and a second part, and obtaining the first device posture parameter of the second electronic device comprises:
    obtaining a device posture parameter of the first part and a device posture parameter of the second part;
    determining the first device posture parameter of the second electronic device from the device posture parameters of the first part and the second part.
  9. The method according to claim 8, wherein obtaining the device posture parameters of the first part and of the second part comprises:
    obtaining a second image and a third image, the second image being a head image of the user while wearing the first part and the third image being a head image of the user while wearing the second part;
    determining the device posture parameter of the first part from the second image;
    determining the device posture parameter of the second part from the third image.
  10. The method according to claim 9, wherein before obtaining the second image and the third image, the method further comprises:
    displaying, on a display of the first electronic device, at least one of a first control and a second control, the first control prompting capture of the second image and the second control prompting capture of the third image.
  11. The method according to claim 8 or 9, wherein the first part and the second part each have a third sensor, and obtaining the device posture parameters of the first part and of the second part comprises:
    obtaining, from the second electronic device, the device posture parameter of the first part collected by the third sensor of the first part;
    obtaining, from the second electronic device, the device posture parameter of the second part collected by the third sensor of the second part.
  12. The method according to any one of claims 1 to 11, wherein before obtaining the first head posture parameter of the user, the method further comprises:
    issuing first prompt information, the first prompt information being used to judge whether the user's head is in a standard position.
  13. The method according to claim 12, wherein the first electronic device has a display on which the first prompt information is displayed, the method further comprising:
    displaying, on the display, the distance between the user's current head position and the standard position.
  14. An electronic device, comprising a processor coupled to a memory, the processor being configured to execute a computer program or instructions stored in the memory, so that the electronic device implements the method according to any one of claims 1 to 13.
  15. A computer-readable storage medium storing a computer program which, when run on an electronic device, causes the electronic device to perform the method according to any one of claims 1 to 13.