WO2021151320A1 - Holding posture detection method and electronic device - Google Patents

Holding posture detection method and electronic device

Info

Publication number
WO2021151320A1
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
sensor
state sequence
state
holding posture
Prior art date
Application number
PCT/CN2020/122954
Other languages
English (en)
French (fr)
Inventor
刘海波
胡燕
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2021151320A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403 User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/72448 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/725 Cordless telephones

Definitions

  • This application relates to the field of terminal technology, and in particular to a method for detecting a holding posture and an electronic device.
  • Capacitive touch screens have the characteristics of high sensitivity and fast response speed, and are widely used in various fields, especially in the field of electronic devices (such as smart phones), which brings a good user experience to users.
  • Current smart phones can only roughly identify whether the user is holding the device in the left hand or the right hand and whether it is held horizontally or vertically; they cannot accurately identify, in real time, the specific positions at which the user holds the terminal or the continuous changes of the holding posture. Therefore, the user's operation intention on the terminal cannot be accurately recognized, and it is inconvenient for the electronic device to provide more refined services.
  • the present application provides a method and electronic device for detecting a holding posture, which are used to accurately recognize a user's holding posture on a terminal, so as to provide more refined services based on the posture and improve user experience.
  • an embodiment of the present application provides a method for detecting a holding posture, which can be applied to an electronic device, and the method includes: the electronic device acquires characteristic information of M sensor units of the electronic device at N sampling moments.
  • the characteristic information may include the identification of the sensor, the data of the sensor, and so on.
  • the electronic device can determine the N state sequences corresponding to the M sensor units at the N sampling moments according to the characteristic information.
  • The electronic device matches the N state sequences with the K reference state sequences in a preset reference state sequence set, determines the first reference state sequence with the greatest similarity, and then determines that the reference holding posture corresponding to the first reference state sequence is the holding posture of the electronic device.
  • the above method can be used to more accurately recognize the user's holding posture of the terminal, so as to provide more refined services based on the posture and improve the user experience.
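  • The overall flow above (acquire feature information at N sampling moments, convert it to state sequences, match against the reference set, and report the best-matching posture) can be sketched as follows. This is a minimal illustration only: the binary thresholding rule, the match-fraction similarity, and all names are assumptions, since the claims do not fix a concrete similarity measure.

```python
# Minimal sketch of the claimed pipeline; thresholds and the similarity
# measure are illustrative assumptions, not the patent's exact algorithm.

def to_state_sequence(samples, threshold=50):
    """Map the M sensor readings of one sampling moment to a state sequence."""
    return tuple(1 if v > threshold else 0 for v in samples)

def similarity(seq_a, seq_b):
    """Fraction of positions on which two equal-length state sequences agree."""
    return sum(a == b for a, b in zip(seq_a, seq_b)) / len(seq_a)

def detect_posture(frames, reference_set):
    """frames: N lists of M sensor values; reference_set: posture -> sequence.

    Each of the N state sequences votes for its most similar reference; the
    posture whose reference wins most often is reported as the holding posture.
    """
    votes = {}
    for samples in frames:
        seq = to_state_sequence(samples)
        best = max(reference_set, key=lambda p: similarity(seq, reference_set[p]))
        votes[best] = votes.get(best, 0) + 1
    return max(votes, key=votes.get)
```

For example, with references {"left_hand": (1, 1, 0, 0), "right_hand": (0, 0, 1, 1)}, frames dominated by the first two sensor units resolve to "left_hand".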
  • The M sensor units on the electronic device can be pre-divided into L sensor groups, and the electronic device can determine the sensor data of the sensor units in each sensor group according to the identifiers of the sensor units in the acquired characteristic information. For any one of the N sampling moments, the electronic device compares the sensor data of the sensor units in the L sensor groups with preset thresholds and determines the states of the L sensor groups according to the comparison results; finally, it generates the state sequence corresponding to the M sensor units at that sampling moment, and the state sequence includes the states of the L sensor groups.
  • Taking the first sensor group as any one of the L sensor groups: when the proportion of sensor units with a detection value in the first sensor group is greater than a first threshold, the state of the first sensor group is determined to be the valid state; otherwise it is the invalid state. The proportion is the ratio between the total number U of sensor units with a detection value and the total number V of sensor units in the first sensor group. And/or, when the detection value of a sensor unit in the first sensor group is greater than a second threshold, the state of the first sensor group is determined to be the valid state; otherwise it is the invalid state.
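  • The two threshold rules can be combined as in the following sketch. The concrete threshold values and the assumption that an untouched unit reports 0 are illustrative; the claims leave them open.

```python
def group_state(values, ratio_threshold=0.5, value_threshold=30):
    """Decide whether one sensor group is in the valid or invalid state.

    Rule 1: the proportion U/V of units with a detection value (assumed > 0)
    exceeds ratio_threshold.  Rule 2: some unit's detection value exceeds
    value_threshold.  The group is valid if either rule fires.
    """
    touched = sum(1 for v in values if v > 0)      # U: units with a detection value
    if touched / len(values) > ratio_threshold:    # U / V against the first threshold
        return "valid"
    if any(v > value_threshold for v in values):   # against the second threshold
        return "valid"
    return "invalid"
```

So a group with readings [10, 20, 5, 0] is valid by the proportion rule, while [0, 0, 40, 0] is valid by the value rule.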
  • Having the electronic device determine the states of the sensor groups according to the above method is beneficial to improving the accuracy of the resulting sensor group states.
  • For any one of the N state sequences, the electronic device calculates K similarities between that state sequence and the K reference state sequences, and determines the second reference state sequence with the largest similarity from the K similarities; then, from among the N second reference state sequences corresponding to the N state sequences, it determines the one with the largest occurrence probability as the first reference state sequence.
  • The electronic device may determine the similarity between the states corresponding to the key sensor groups in the N state sequences and the states corresponding to the key sensor groups in the K reference state sequences, so as to determine the similarity between the N state sequences and the K reference state sequences.
  • The electronic device compares the similarities in this way, which is beneficial to improving the accuracy of the terminal posture recognition result.
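  • The key-sensor-group comparison and the "largest occurrence" selection might look like the sketch below. Representing key groups as an index list and using match-fraction similarity are assumptions made for illustration.

```python
from collections import Counter

def keyed_similarity(seq, ref, key_groups):
    """Similarity computed only over the states of the key sensor groups."""
    return sum(seq[i] == ref[i] for i in key_groups) / len(key_groups)

def first_reference(state_seqs, references, key_groups):
    """Pick each sequence's best reference (the 'second reference state
    sequence'), then return the one occurring most often (the 'first')."""
    winners = [
        max(references, key=lambda r: keyed_similarity(seq, r, key_groups))
        for seq in state_seqs
    ]
    return Counter(winners).most_common(1)[0][0]
```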
  • The electronic device may update the reference state sequences in the reference state sequence set.
  • One possible update method is as follows: from the N state sequences, the electronic device determines the first state sequence whose similarity to the first reference state sequence is greater than a third threshold and which occurs most often, and replaces the first reference state sequence in the reference state sequence set with that first state sequence.
  • the electronic device performs self-learning iteration and update on the preset reference state sequence set, which is beneficial to improve the accuracy of the terminal gesture recognition result.
  • The electronic device determines, from the N state sequences, the second state sequence whose similarity to the first reference state sequence is less than a fourth threshold; determines, according to the second state sequence, a first holding posture corresponding to the second state sequence; and adds the second state sequence corresponding to the first holding posture to the preset reference state sequence set.
  • the electronic device performs self-learning iteration and update on the preset reference state sequence set, which is beneficial to improve the accuracy of the terminal gesture recognition result.
  • The electronic device calculates the occurrence probability of the reference holding posture corresponding to each reference state sequence in the reference state sequence set within a set time period, and, according to the probabilities, deletes from the preset reference state sequence set the reference state sequence corresponding to any reference holding posture whose probability is less than a fifth threshold.
  • the electronic device performs self-learning iteration and update on the preset reference state sequence set, which is beneficial to improve the accuracy of the terminal gesture recognition result.
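  • The three self-learning updates (replace a drifted reference, add a newly observed posture, drop a rarely used one) might be sketched as below. The threshold values, the similarity callback, and the per-reference probability table are assumptions for illustration.

```python
from collections import Counter

def refine_references(references, state_seqs, first_ref, sim,
                      replace_thr=0.7, add_thr=0.3):
    """Replace/add update of the reference list (thresholds are assumptions).

    Replace: the most frequent sequence among those whose similarity to
    first_ref exceeds replace_thr takes first_ref's place.  Add: the most
    frequent sequence whose similarity falls below add_thr is appended as a
    candidate new posture.
    """
    counts = Counter(state_seqs)
    close = [s for s in counts if sim(s, first_ref) > replace_thr]
    if close:
        references[references.index(first_ref)] = max(close, key=lambda s: counts[s])
    far = [s for s in counts if sim(s, first_ref) < add_thr]
    if far:
        references.append(max(far, key=lambda s: counts[s]))
    return references

def prune_references(references, ref_prob, drop_thr=0.05):
    """Drop references whose holding posture occurred with probability
    below drop_thr within the set time period."""
    return [r for r in references if ref_prob[r] >= drop_thr]
```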
  • The electronic device determines the holding postures within a third set period of time, determines the operation intention of the user according to the multiple holding postures, and, according to the operation intention, configures the system resources of the electronic device or controls the display interface of the electronic device.
  • the electronic device predicts the user's operation intention based on the recognition result of the holding posture, which is conducive to optimizing resource allocation, improving the utilization rate of system resources, and improving the intelligence of the electronic device.
  • When the electronic device determines that the holding posture of the electronic device is a preset bad holding posture, it outputs prompt information, and the prompt information is used to remind the user to correct the holding posture.
  • the electronic device can remind the user when it is determined that the user's holding posture is a bad holding posture, which is beneficial to improve the user experience.
  • An embodiment of the present application provides a display method, which is applied to an electronic device provided with sensor units. The method includes: the electronic device determines the first holding posture when the electronic device is held by the user at a first moment, and, according to the first holding posture, controls the display screen of the electronic device to display the first interface of an application.
  • The electronic device determines the second holding posture when the electronic device is held by the user at a second moment, and, according to the second holding posture, controls the display screen of the electronic device to display the second interface of the application, where the first holding posture is different from the second holding posture, and the second interface is different from the first interface.
  • The electronic device can be controlled to display different interface content under different holding postures of the user, thereby improving the intelligence of the device and the user experience.
  • the electronic device may determine the first holding posture and the second holding posture according to the method provided in the first aspect, and details are not repeated here.
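  • A trivial sketch of this display method is a posture-to-interface mapping: different postures at different moments yield different interfaces of the same application. The posture names and layout identifiers below are invented purely for illustration.

```python
# Hypothetical posture -> interface mapping; names are invented examples.
POSTURE_TO_INTERFACE = {
    "one_hand_left": "compact_left_layout",
    "one_hand_right": "compact_right_layout",
    "two_hands_landscape": "full_landscape_layout",
}

def interface_for(posture, default="standard_layout"):
    """Return the application interface to display for a detected posture."""
    return POSTURE_TO_INTERFACE.get(posture, default)
```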
  • An embodiment of the present application provides an electronic device, including a sensor, a touch screen, a processor, and a memory, where the memory is used to store one or more computer programs; when the one or more computer programs stored in the memory are executed by the processor, the electronic device can implement any possible design method of any of the foregoing aspects.
  • an embodiment of the present application further provides a device, which includes a module/unit that executes any one of the possible design methods in any of the foregoing aspects.
  • modules/units can be realized by hardware, or by hardware executing corresponding software.
  • an embodiment of the present application also provides a computer-readable storage medium.
  • the computer-readable storage medium includes a computer program.
  • When the computer program runs on an electronic device, the electronic device executes any one of the possible design methods of any of the above aspects.
  • The embodiments of the present application also provide a computer program product, which, when run on a terminal, causes the electronic device to execute any one of the possible design methods of any of the above-mentioned aspects.
  • an embodiment of the present application further provides a chip, which is coupled with a memory, and is configured to execute a computer program stored in the memory to execute any possible design method of any one of the foregoing aspects.
  • FIG. 1 is a schematic structural diagram of a mobile phone provided by an embodiment of the application
  • FIG. 2 is a schematic structural diagram of an Android operating system provided by an embodiment of the application.
  • FIG. 3 is a schematic structural diagram of an electronic device provided by an embodiment of the application.
  • 4A is a schematic diagram of a sensor deployment structure of an electronic device according to an embodiment of the application.
  • 4B is a schematic diagram of a sensor encoding method provided by an embodiment of the application.
  • Figure 5 is a schematic structural diagram of an applicable scenario provided by an embodiment of the application.
  • FIG. 6 is a schematic flowchart of a method for constructing a gripping posture pattern set provided by an embodiment of this application;
  • FIGS. 7A and 7B are schematic diagrams of a holding manner provided by an embodiment of the application.
  • FIG. 8 is a schematic flowchart of a method for detecting a holding posture according to an embodiment of the application.
  • FIGS. 9A to 9C are schematic diagrams of a holding scene provided by an embodiment of the application.
  • FIG. 10 is a schematic diagram of coordinate system switching provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a group of mobile phone interfaces according to an embodiment of the application.
  • FIG. 12 is a schematic diagram of a vehicle interface provided by an embodiment of the application.
  • FIG. 13 is a schematic diagram of a terminal structure according to an embodiment of the application.
  • The electronic device can determine the user's touch area by acquiring the current sensor data of the touch sensor of the electronic device, and determine the current holding posture according to the current horizontal or vertical holding state of the electronic device and the touch area; however, the holding posture cannot be accurately recognized in this way.
  • The embodiments of the present application provide a method for detecting a holding posture and an electronic device. The method processes the sensor data of the sensors into state sequences and matches the state sequences with the K reference state sequences in a preset reference state sequence set, so as to accurately determine the holding posture.
  • The electronic device may be a portable terminal containing functions such as a personal digital assistant and/or a music player, such as a mobile phone, a tablet computer, a wearable device with a wireless communication function (such as a smart watch), or a vehicle-mounted device.
  • Portable terminals include, but are not limited to, portable terminals running various operating systems.
  • the above-mentioned portable terminal may also be, for example, a laptop computer (Laptop) having a touch-sensitive surface (such as a touch panel) or the like. It should also be understood that, in some other embodiments, the aforementioned terminal may also be a desktop computer with a touch-sensitive surface (such as a touch panel).
  • FIG. 1 shows a schematic structural diagram of the mobile phone 100.
  • the mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a USB interface 130, a charging management module 140, a power management module 141, a battery 142, antenna 1, antenna 2, mobile communication module 151, wireless communication module 152, Audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone interface 170D, sensor module 180, buttons 190, motor 191, indicator 192, camera 193, display screen 194, SIM card interface 195 and so on.
  • The sensor module 180 may include a gyroscope sensor 180A, an acceleration sensor 180B, a proximity light sensor 180G, a fingerprint sensor 180H, and a touch sensor 180K. (Of course, the mobile phone 100 may also include other sensors, such as a temperature sensor, a pressure sensor, a distance sensor, a magnetic sensor, an ambient light sensor, an air pressure sensor, a bone conduction sensor, etc., not shown in the figure.)
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the mobile phone 100.
  • the mobile phone 100 may include more or fewer components than those shown in the figure, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the mobile phone 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching instructions and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • The memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and improves system efficiency.
  • the processor 110 may run the holding posture detection method provided by the embodiments of the present application to accurately recognize the user's terminal holding posture, so that the terminal can provide more refined services based on the posture and improve the user experience.
  • The processor 110 may include different devices. For example, when a CPU and a GPU are integrated, the CPU and the GPU may cooperate to execute the holding posture detection method provided in the embodiments of the present application. For example, some algorithms in the method are executed by the CPU and others by the GPU, to obtain faster processing efficiency.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • The display panel can use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the mobile phone 100 may include one or N display screens 194, and N is a positive integer greater than one.
  • the touch sensor and/or pressure sensor on the display screen 194 can collect the user's touch operation, and the touch sensor and/or pressure sensor can transmit the detected sensor data to the processor 110 so that the processor 110 can determine The corresponding state of the sensor unit.
  • the display screen 194 may be an integrated flexible display screen, or a spliced display screen composed of two rigid screens and a flexible screen located between the two rigid screens.
  • the processor 110 may control the display interface on the display screen 194 based on the holding posture of the terminal.
  • the camera 193 (a front camera or a rear camera, or a camera can be used as a front camera or a rear camera) is used to capture still images or videos.
  • the camera 193 may include photosensitive elements such as a lens group and an image sensor, where the lens group includes a plurality of lenses (convex lens or concave lens) for collecting light signals reflected by the object to be photographed and transmitting the collected light signals to the image sensor .
  • the image sensor generates an original image of the object to be photographed according to the light signal.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the mobile phone 100 by running instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area can store operating system, application program (such as camera application, WeChat application, etc.) codes and so on.
  • the data storage area can store data created during the use of the mobile phone 100 (such as data collected by sensors, and preset reference state sequence sets).
  • the internal memory 121 may also store the code of the terminal holding posture detection algorithm provided in the embodiment of the present application.
  • the processor 110 may control the display interface on the display screen 194.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the code of the terminal holding posture detection algorithm provided by the embodiment of the present application can also be stored in an external memory.
  • the processor 110 can run the terminal holding posture detection algorithm code stored in the external memory through the external memory interface 120, and the processor 110 determines the holding posture of the electronic device, and then controls the display screen according to the holding posture.
  • the function of the sensor module 180 is described below.
  • The gyroscope sensor 180A can be used to determine the movement posture of the mobile phone 100, for example, the angular velocity of the mobile phone 100 around three axes (i.e., the x, y, and z axes). The gyroscope sensor 180A can also be used to detect the current motion state of the mobile phone 100, such as shaking or stationary, and whether the screen is horizontal or vertical.
  • the gyroscope sensor 180A can be used to detect folding or unfolding operations on the display screen 194.
  • the gyroscope sensor 180A may report the detected folding operation or unfolding operation as an event to the processor 110 to determine the folding state or unfolding state of the display screen 194.
  • the acceleration sensor 180B can detect the magnitude of the acceleration of the mobile phone 100 in various directions (generally three axes). When the display screen in the embodiment of the present application is a foldable screen, the acceleration sensor 180B can be used to detect folding or unfolding operations on the display screen 194. The acceleration sensor 180B may report the detected folding operation or unfolding operation as an event to the processor 110 to determine the folding state or unfolding state of the display screen 194.
  • the pressure sensor 180C is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180C may be provided on the display screen 194 or the housing part.
  • the capacitive pressure sensor may include at least two parallel plates with conductive materials. When a force is applied to the pressure sensor 180C, the capacitance between the electrodes changes. The mobile phone 100 determines the intensity of the pressure according to the change of the capacitance. When a touch operation acts on the display screen 194, the mobile phone 100 detects the intensity of the touch operation according to the pressure sensor 180C.
  • the mobile phone 100 may also calculate the touched position based on the detection signal of the pressure sensor 180C.
  • touch operations that act on the same touch position but have different touch operation strengths may correspond to different operation instructions. For example: when a touch operation with a touch operation intensity greater than the first pressure threshold acts on both sides of the housing, an instruction to view unread messages is executed.
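  • The capacitance change underlying this pressure sensing can be illustrated with the parallel-plate formula C = ε0·εr·A/d: pressing narrows the electrode gap d, so the capacitance rises, and the phone maps that change to a pressure level. The relative permittivity and geometry below are illustrative values, not from the patent.

```python
EPS0 = 8.854e-12  # vacuum permittivity in F/m

def plate_capacitance(area_m2, gap_m, eps_r=3.0):
    """Parallel-plate capacitance C = EPS0 * eps_r * A / d.

    Pressing the sensor narrows the gap d between the conductive plates, so C
    grows; the device infers touch intensity from the change (eps_r assumed).
    """
    return EPS0 * eps_r * area_m2 / gap_m
```

For instance, halving the gap doubles the capacitance, which is the change the phone measures.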
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the mobile phone emits infrared light through light-emitting diodes.
  • Mobile phones use photodiodes to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the phone. When insufficient reflected light is detected, the mobile phone can determine that there is no object near the mobile phone.
  • the proximity light sensor 180G can be arranged on the upper side of the screen 194, and the proximity light sensor 180G can detect whether a human face is close to the screen according to the optical path difference of the infrared signal.
  • the proximity light sensor 180G can be arranged on the first screen of the foldable display screen 194, and the proximity light sensor 180G can detect the first screen according to the optical path difference of the infrared signal. The size of the folding or unfolding angle between the screen and the second screen.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the mobile phone 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access application locks, fingerprint photographs, fingerprint answering calls, and so on.
  • The touch sensor 180K is also called a "touch panel". The touch sensor 180K may be disposed on the display screen 194 or on the housing; the touch sensor 180K and the display screen 194 together constitute what is also called a "touch screen".
  • the touch sensor 180K is used to detect touch operations acting on or near it.
  • the touch sensor can transmit the detected sensor data to the processor 110, so that the processor 110 determines the state of the sensor unit according to the sensor data, and then determines the state sequence corresponding to the sensor unit of the electronic device.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the mobile phone 100, which is different from the position of the display screen 194.
  • the display screen 194 of the mobile phone 100 displays a main interface, and the main interface includes icons of multiple applications (such as a camera application, a WeChat application, etc.).
  • the display screen 194 displays an interface of the camera application, such as a viewfinder interface.
  • The wireless communication function of the mobile phone 100 can be realized by the antenna 1, the antenna 2, the mobile communication module 151, the wireless communication module 152, the modem processor, and the baseband processor.
  • the antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the mobile phone 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna can be used in combination with a tuning switch.
  • the mobile communication module 151 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied on the mobile phone 100.
  • the mobile communication module 151 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like.
  • the mobile communication module 151 can receive electromagnetic waves by the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation.
  • the mobile communication module 151 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic wave radiation via the antenna 1.
  • at least part of the functional modules of the mobile communication module 151 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 151 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low frequency baseband signal to be sent into a medium and high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the application processor outputs a sound signal through an audio device (not limited to the speaker 170A, the receiver 170B, etc.), or displays an image or video through the display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110 and be provided in the same device as the mobile communication module 151 or other functional modules.
  • the wireless communication module 152 can provide wireless communication solutions applied on the mobile phone 100, including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and the like.
  • the wireless communication module 152 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 152 receives electromagnetic waves via the antenna 2, frequency modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 152 can also receive the signal to be sent from the processor 110, perform frequency modulation and amplification on it, and convert it into electromagnetic waves for radiation via the antenna 2.
  • the mobile phone 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the mobile phone 100 can receive input from the key 190, and generate key signal input related to the user settings and function control of the mobile phone 100.
  • the mobile phone 100 can use the motor 191 to generate a vibration notification (for example, an incoming call vibration notification).
  • the indicator 192 in the mobile phone 100 can be an indicator light, which can be used to indicate the charging status, power change, and can also be used to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 in the mobile phone 100 is used to connect to the SIM card.
  • the SIM card can be connected to and separated from the mobile phone 100 by inserting into the SIM card interface 195 or pulling out from the SIM card interface 195.
  • the mobile phone 100 may include more or less components than those shown in FIG. 1, which is not limited in the embodiment of the present application.
  • the illustrated mobile phone 100 is only an example, and the mobile phone 100 may have more or fewer parts than shown in the figure, may combine two or more parts, or may have a different part configuration.
  • the various components shown in the figure may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application specific integrated circuits.
  • the software system of the electronic device can adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture.
  • the embodiment of the present invention takes an Android system with a layered architecture as an example to illustrate the software structure of an electronic device.
  • Fig. 2 is a software structure block diagram of an electronic device according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor; the layers communicate with each other through software interfaces.
  • the Android system is divided into four layers, from top to bottom, the application layer, the application framework layer, the Android runtime and system library, and the kernel layer.
  • the application layer can include a series of application packages.
  • the application package can include applications such as phone, camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, short message, etc.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer can include a window manager, a content provider, a view system, a phone manager, a resource manager, and a notification manager.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • the content provider is used to store and retrieve data and make these data accessible to applications.
  • the data may include videos, images, audios, phone calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, and so on.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface that includes a short message notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication function of the electronic device. For example, the management of the call status (including connecting, hanging up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar; it can be used to convey notification-type messages, and the notification can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify download completion, message reminders, and so on.
  • the notification manager can also present notifications that appear in the status bar at the top of the system in the form of a chart or scroll-bar text (such as a notification of an application running in the background), or notifications that appear on the screen in the form of a dialog window. For example, a text message is prompted in the status bar, a prompt sound is played, the electronic device vibrates, or an indicator light flashes.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules. For example: surface manager (surface manager), media library (Media Libraries), three-dimensional graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides a combination of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least display driver, camera driver, audio driver, and sensor driver.
  • hardware may refer to various types of sensors, such as acceleration sensors, gyroscope sensors, touch sensors, pressure sensors, etc. involved in the embodiments of the present application.
  • Figures 1 and 2 above are respectively the hardware structure and the software structure of the electronic device to which the embodiment of the application is applicable.
  • the working process of the software and hardware of the electronic device is exemplified.
  • sensors in the hardware layer can collect data.
  • the gyroscope sensor in the hardware layer can detect whether the display screen is in landscape mode
  • the touch sensor in the hardware layer can detect the user's operations on the display area and on the device housing.
  • the sensor unit acquires sensor data collected at N sampling moments, and uses the sensor data to determine the holding posture of the terminal.
  • the hardware layer of the electronic device detects a user's touch operation, and the touch sensor 180K collects sensor data at the same time.
  • the touch operation triggers a corresponding hardware interrupt.
  • the hardware interrupt is sent to the kernel layer and sent to the system library via the kernel layer.
  • the system library determines the state of the sensor unit based on the sensor data, and then determines the state sequence corresponding to all sensor units of the electronic device.
  • the system library matches the state sequence with the reference state sequences in the preset reference state sequence set, and determines the current holding posture corresponding to the touch operation according to the reference holding posture corresponding to the matched first reference state sequence.
  • FIG. 3 shows that the holding posture detection method provided by the embodiment of the application is applicable not only to electronic devices with traditional display screens, but also to electronic devices with folding screens, various special-shaped screens, or full screens.
  • the display screen of the electronic device may be a curved screen as shown in (a) in FIG. 3;
  • the edge 301 of the curved screen has a certain curvature;
  • the display screen of the electronic device may also be a folding screen, as shown in (b) and (c) in FIG. 3;
  • FIG. 3 (b) shows the folding screen in a half-folded state;
  • FIG. 3 (c) shows the folding screen in a fully folded state. When the folding screen is in a half-folded state or a fully folded state, the bendable area 303 serves as the edge display area of the folding screen.
  • both the housing and the display screen of the electronic device may be provided with sensor units, such as a touch sensor 180K, a pressure sensor 180C, and a proximity light sensor 180G.
  • sensor units may be deployed on the front (display screen), the back, and the top, bottom, left, and right sides of the electronic device.
  • all sensor units on the terminal can be encoded in advance.
  • each sensor unit can be encoded with numbers or coordinates to indicate the position of the sensor unit.
  • FIG. 4B shows an encoding method for the sensor units: each grid represents a sensor unit, two-dimensional coordinates are used to encode each sensor unit, and the coordinate value (Xm, Yn) uniquely indicates the position of a sensor unit.
  • when a sensor unit detects a value, the state of the sensor unit can be represented by 1; when the sensor unit does not detect a value, the state of the sensor unit can be represented by 0.
  • for example, when the detection value (value) of a touch sensor is 123, the touch sensor is being touched by the user, so the state of the sensor unit can be represented by 1; when the detection value (value) of the touch sensor is 0, the touch sensor is not being touched by the user, so the state of the sensor unit can be represented by 0.
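The coordinate encoding and the 0/1 unit-state rule above can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the grid dimensions and the sample readings are hypothetical, and only the rule that a non-zero detection value maps to state 1 comes from the description above.

```python
# Illustrative sketch: encode sensor units with 2D coordinates (Xm, Yn)
# and map each unit's detection value to a binary state.

def encode_units(cols, rows):
    """Assign each sensor unit a unique (Xm, Yn) coordinate code."""
    return [(x, y) for y in range(rows) for x in range(cols)]

def unit_state(detection_value):
    """State is 1 when the unit reports a value, 0 otherwise."""
    return 1 if detection_value != 0 else 0

codes = encode_units(4, 3)           # a hypothetical 4x3 patch of units
readings = {(0, 0): 123, (1, 0): 0}  # e.g. value 123 means the unit is touched
states = {c: unit_state(readings.get(c, 0)) for c in codes}
```

Units with no reading default to state 0, which mirrors the "no detection value" case described above.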
  • all M sensor units on the terminal may also be divided into multiple sensor groups in advance.
  • in FIG. 4B, all sensor units within a thick black line constitute a sensor group, and 22 sensor groups are schematically shown. It should be noted that those skilled in the art can divide the sensor units into a different number of groups, or not divide them into groups at all, according to the type of electronic device and actual needs, which is not limited in the embodiment of the present application.
  • the status of a sensor group can be represented by 0 or 1, and the status of a sensor group is determined by the status of all sensor units in the sensor group.
  • if the sensor units in a sensor group are touch sensors, when the proportion of sensor units with detection values in the sensor group is greater than the first threshold, the state of the sensor group is determined to be 1; otherwise it is 0. If the sensor units in the sensor group are touch sensors and pressure sensors, when the proportion of sensor units with detection values in the sensor group is less than the first threshold but u sensor units in the sensor group have detection values greater than the second threshold, the state of the sensor group is determined to be 1; otherwise it is 0, where u is a positive integer. That is to say, although only a small part of the sensors in the sensor group are touched by the user, the detected pressure value is large, so the state of the sensor group is still determined to be 1.
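A minimal sketch of the group-state rule just described. The threshold values and the minimum count of high-pressure units are illustrative assumptions; the patent does not fix them.

```python
# Sketch: a sensor group is in state 1 when enough of its units have
# detection values (first threshold), or when a few units report values
# above a pressure threshold (second threshold). All thresholds assumed.

FIRST_THRESHOLD = 0.5    # minimum proportion of units with detection values (assumed)
SECOND_THRESHOLD = 20    # minimum pressure detection value (assumed)
U_MIN = 1                # minimum number u of high-pressure units (assumed)

def group_state(values):
    """values: detection values of all sensor units in one group."""
    touched = [v for v in values if v != 0]
    if len(touched) / len(values) > FIRST_THRESHOLD:
        return 1
    # few units touched, but at least u of them pressed hard enough
    if sum(1 for v in touched if v > SECOND_THRESHOLD) >= U_MIN:
        return 1
    return 0
```

With these assumed thresholds, a group where 3 of 4 units are touched is enabled, and so is a group where a single unit reports a pressure above 20.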
  • the method provided in the embodiment of the present application can also be applied to the vehicle as shown in FIG. 5.
  • the vehicle steering wheel 502 is provided with a data collection module (or data collection device), and the vehicle-mounted device 501 is provided with a data processing module (or data processing device).
  • the vehicle steering wheel is provided with sensor units, and all the sensor units on the vehicle steering wheel can be coded in advance according to the above method.
  • the vehicle-mounted device can obtain the data collected by the sensor unit on the steering wheel of the vehicle, and then determine the user's terminal holding posture.
  • the vehicle steering wheel 502 in FIG. 5 can also integrate a data processing module (or data processing device) at the same time. That is to say, the data collection module and the data processing module can be set in different devices or in the same device, which is not limited in this application.
  • the data collection module can also be a medical detection device or a smart wearable device.
  • the user's health data can be obtained in real time, changes in the user's health status can be predicted in advance, and status warnings and treatment suggestions can be provided.
  • an embodiment of the present application provides a holding posture detection method.
  • the electronic device can obtain sensor data at N sampling moments and, based on the sensor data, generate a state sequence of the sensor units corresponding to each sampling moment. For the state sequence at each sampling moment, the electronic device matches the state sequence with the reference state sequences in the preset reference state sequence set and determines the first reference state sequence with the highest similarity. The reference holding posture corresponding to the first reference state sequence is then used as the current holding posture of the terminal, so that the terminal can provide more refined services based on the posture and improve the user experience.
  • a holding posture pattern set needs to be constructed first; the holding posture pattern set includes the correspondence between reference state sequences and reference holding postures. That is, the embodiment of the present application provides a method for constructing a holding posture pattern set. As shown in FIG. 6, the method mainly includes the following steps.
  • Step 601 The electronic device receives n operations corresponding to the first reference holding posture.
  • for example, the mobile phone receives the operations corresponding to holding the upper side of the terminal with the left hand in a one-handed grip, or the mobile phone receives the operations corresponding to holding the upper and lower sides of the terminal with both hands, where n is a positive integer.
  • step 602 the processor 110 of the electronic device acquires data collected by the sensor unit n times.
  • the processor 110 obtains the pressure value of the pressure sensor, or obtains the touch detection value of the touch sensor, or the like.
  • Step 603 The electronic device determines n state sequences corresponding to the sensor unit.
  • Step 604: The electronic device uses the state sequence with the highest occurrence probability among the n state sequences as the first reference state sequence corresponding to the first reference holding posture, and establishes a correspondence between the first reference holding posture and the first reference state sequence.
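Steps 601 to 604 amount to a frequency count over the n collected state sequences. The sketch below illustrates this; the posture label and the sample sequences are hypothetical, not taken from the patent.

```python
# Sketch of steps 601-604: among n state sequences collected for one
# reference holding posture, take the most frequent as that posture's
# reference state sequence.
from collections import Counter

def build_reference(posture, sequences):
    """Map a reference posture to its most frequent observed state sequence."""
    most_common_seq, _ = Counter(sequences).most_common(1)[0]
    return {posture: most_common_seq}

# three hypothetical collected sequences for the same posture
seqs = ["00001 01011 01100 01010 00",
        "00001 01011 01100 01010 00",
        "00001 01011 01100 01011 00"]
pattern_set = build_reference("hold left and right sides with both hands", seqs)
```

Repeating this for every reference posture yields the holding posture pattern set of Table 1.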
  • for example, the common holding postures of a certain number of users in an area are counted, these common holding postures are used as reference holding postures, the corresponding reference state sequence is then determined for each holding posture following the above method, and finally a holding posture pattern set including the correspondence between reference state sequences and reference holding postures is generated.
  • the holding posture pattern set may be as shown in Table 1. It should be noted that this holding posture pattern set is only an exemplary description, and in other possible cases it may not be limited to the form shown in Table 1.
  • Table 1 (terminal holding mode identification | terminal holding posture | reference state sequence of the sensor units):
    M1 | Hold the top and bottom sides of the terminal with both hands | 00000 00000 00000 00000 11
    M2 | Hold the left and right sides of the terminal with both hands | 00001 01011 01100 01010 00
    M3 | Hold the upper side of the terminal with one hand (right hand) | ...
    M4 | Hold the bottom side of the terminal with one hand (right hand) | ...
    M5 | Hold the upper side of the terminal with one hand (left hand) | ...
    M6 | Hold the bottom side of the terminal with one hand (left hand) | ...
    ... | ... | ...
  • the state sequence of the sensor units in Table 1 consists of 22 numbers, where 22 corresponds to the number of sensor groups in FIG. 4B, 0 indicates that the sensor group is not held, and 1 indicates that the sensor group is held.
  • the state sequence composed of the 22 sensor groups is {00001 01011 01100 01010 00}.
  • the gray area indicates the position of the sensor unit and its detection value when the user's hand is holding it.
  • the state sequence composed of the 22 sensor groups is {00000 00011 01100 00011 01}.
  • the electronic device may subdivide each mode of the holding posture mode set according to the specific values of the collected sensor data.
  • the mode M2 may further include Mode M21, Mode M22, Mode M23, etc.
  • the electronic device can further use the sensor data to determine information such as the number of contact points, the contact area, the contact position, and the detection values, and then construct the various sub-modes of mode M2 in terms of the number of contact points, contact area, contact position, and the magnitude of the sensing values.
  • the electronic device can divide the sub-modes M21, M22, and M23 of mode M2 into light-grip, moderate-grip, and heavy-grip holding postures, respectively, according to one or more kinds of sensor data. Specifically, the electronic device can set corresponding sensing value ranges for different sensor types, with different ranges corresponding to different sub-modes. For example, for a pressure sensor whose detection value range is 0 to 30, the detection value range is divided into the following three ranges, namely (0, 5), [5, 20), and [20, 30], respectively expressed as light grip, moderate grip, and heavy grip.
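The range-based sub-mode selection above can be sketched as follows, using the three pressure ranges given in the example; the function and sub-mode labels are illustrative.

```python
# Sketch: map the average pressure detection value (range 0-30) to a
# sub-mode of M2 using the ranges (0, 5), [5, 20), [20, 30] from the text.

def sub_mode(avg_value):
    """Classify an average pressure value into a grip-strength sub-mode."""
    if 0 < avg_value < 5:
        return "M21 (light grip)"
    if 5 <= avg_value < 20:
        return "M22 (moderate grip)"
    return "M23 (heavy grip)"
```

For instance, an average pressure of 10 falls in [5, 20) and is classified as a moderate grip.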
  • the holding posture pattern set in Table 1 above can also be divided into one-finger touch mode, two-finger touch mode, ..., ten-finger touch mode, and so on, which will not be listed one by one here.
  • when the electronic device generates the holding posture pattern set, it can also identify the key sensor groups in the reference state sequence corresponding to each pattern. In this way, after the electronic device determines the first state sequence according to the data collected by the sensors, it preferentially matches the states in the first state sequence against the states corresponding to the key sensor group identifiers in the pattern, so as to improve matching efficiency.
  • the key sensor group identifiers corresponding to different modes are different. The main consideration is that a large amount of data statistics shows that, for the same user of a terminal, once a holding habit is formed, certain sensors are likely to be enabled within specific holding areas, and these holding areas can be set as key points.
  • there may be one or more key points; the value of a key point can be 0 or 1, where 0 means that the position must not be enabled in the specific mode, and 1 means that the position must be enabled in the specific mode.
  • a set of grip posture patterns including key sensor group identifications is shown in Table 1b.
  • an embodiment of the present application provides a method for detecting a gripping posture, as shown in FIG. 8, which can be implemented in the above-mentioned electronic device.
  • the method includes the following steps.
  • step 801 the processor 110 of the electronic device acquires characteristic information of M sensor units on the electronic device at N sampling moments.
  • the user holds the electronic device, and the sensor unit on the electronic device collects information in real time, and obtains characteristic information of the sensor unit.
  • the characteristic information of the sensor unit may include the data collected by the sensor unit, and the identification of the sensor unit (for example, the sensor unit coding).
  • the sensor data (that is, the detection values) can be of one or more kinds, including but not limited to at least one of a capacitance value, pressure value, temperature value, distance value, brightness value, resistance value, accelerometer value, gyroscope value, magnetic force value, or air pressure value.
  • here (Xm, Yn) is the code of the k-th sensor unit, where 1 ≤ k ≤ M, and m, n, k, and i are all positive integers.
  • the preset time period can be set in different time units (year, month, week, hour, minute, second, millisecond), which is not limited here.
  • step 802 the processor 110 of the electronic device determines N state sequences corresponding to the M sensor units at N sampling moments according to the characteristic information.
  • the processor 110 may group the M sensor units according to the foregoing method, and divide them into L sensor groups. Exemplarily, with reference to FIG. 4B, the processor 110 divides the M sensor units into 22 sensor groups. The processor 110 determines whether the state of the sensor unit is 0 or 1 according to whether the sensor unit has a detection value, and then determines the state of the sensor group according to the state of all sensor units in each sensor group. Finally, the processor 110 composes the state of all sensor groups into a state sequence.
  • the specific manner of determining the state of the sensor group according to the state of all sensor units in each sensor group may be any one or more of the following manners.
  • Method 1: For any sampling moment of the N sampling moments, the characteristic information of the sensor units in each sensor group is compared with preset conditions. When the proportion of sensor units with detection values in the first sensor group is greater than the first threshold, the state of the first sensor group is determined to be a valid state (for example, status is 1); otherwise it is an invalid state (for example, status is 0).
  • for example, the sensor units in the sensor group are touch sensors, and there are U sensor units in the sensor group, among which V sensor units have detection values. When V/U is greater than the first threshold, the state of the sensor group is 1; when V/U ≤ the first threshold, the state of the sensor group is 0. In other words, when most of the sensors in a sensor group are touched, the sensor group is considered to be enabled, and the state of the sensor group is set to 1.
  • Method 2: For any sampling moment of the N sampling moments, the characteristic information of the sensor units in each sensor group is compared with preset conditions. When the detection values of the sensor units with detection values in the first sensor group are greater than the second threshold, the state of the sensor group is determined to be a valid state (for example, status is 1); otherwise it is an invalid state (for example, status is 0).
  • for example, the sensor units in the sensor group are pressure sensors, and the detection values of V sensor units are greater than the second threshold, so the state of the sensor group is determined to be a valid state (for example, status is 1); otherwise it is an invalid state (for example, status is 0). In other words, when the sensing values of even a small number of sensors are high (for example, the pressure on the pressure sensor is high), the sensor group is considered to be enabled, and the state of the sensor group is set to 1.
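The per-moment reduction in step 802 can be sketched as follows. This is a minimal sketch using Method 1 only, with an assumed first threshold and hypothetical group data; the real device would apply Method 1 and/or Method 2 per group as described above.

```python
# Sketch of step 802: at one sampling moment, reduce the detection values
# of the L sensor groups to a state sequence string (Method 1 only).

FIRST_THRESHOLD = 0.5  # assumed proportion of units with detection values

def state_sequence(groups):
    """groups: list of lists, one list of unit detection values per group."""
    states = []
    for values in groups:
        touched = sum(1 for v in values if v != 0)
        states.append("1" if touched / len(values) > FIRST_THRESHOLD else "0")
    return "".join(states)

# three hypothetical groups of four units each
seq = state_sequence([[0, 0, 0, 0], [12, 9, 0, 7], [0, 0, 4, 0]])
```

Running this once per sampling moment produces the N state sequences that step 803 matches against the pattern set.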
  • Step 803 The electronic device matches the N state sequences with the reference state sequences in the preset reference state sequence set, and determines the first reference state sequence with the highest similarity.
  • the N state sequences can be represented by the state sequence set {S1, S2, S3, S4, ..., Si, ..., SN-1, SN}.
  • the electronic device calculates the similarity between the state sequence and the reference state sequence corresponding to each mode of the holding posture mode set.
  • the reference state sequence corresponding to the maximum similarity in {P1, P2, ..., PJ} is the reference state sequence corresponding to the state sequence.
  • the electronic device selects the mode corresponding to the maximum similarity in {P1, P2, ..., PJ} as the mode corresponding to the state sequence.
  • for example, the similarity set P = {80%, 90%, 60%, ..., 88%} between S1 and the reference state sequences corresponding to the holding posture pattern set {M1, M2, ..., MJ} is as shown in Table 2.
  • the reference state sequence corresponding to the maximum similarity of 90% is {00001 01011 01100 01010 00}, which corresponds to mode M2, so {00001 01011 01100 01010 00} is the reference state sequence corresponding to S1; that is, S1 has the greatest similarity with mode M2.
  • similarly, the electronic device can calculate the reference state sequences corresponding to the other N-1 state sequences S2, S3, S4, ..., Si, ..., SN-1, SN, and then, from the N reference state sequences, use the reference state sequence with the most occurrences as the first reference state sequence.
  • for example, {00001 01011 01100 01010 00} corresponding to pattern M2 has the most occurrences in the N reference state sequences,
  • so {00001 01011 01100 01010 00} corresponding to pattern M2 is the first reference state sequence corresponding to the holding posture.
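Step 803 can be sketched as follows. The patent does not fix a similarity metric, so the fraction of matching positions (a Hamming-style measure) is an assumption here; the majority vote over the N samples follows the description above. The 22-character sequences are hypothetical.

```python
# Sketch of step 803: match each sampled state sequence to its most similar
# reference sequence, then take the reference matched most often as the
# first reference state sequence.
from collections import Counter

def similarity(a, b):
    """Assumed metric: fraction of positions where the sequences agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def best_reference(seq, references):
    return max(references, key=lambda ref: similarity(seq, ref))

def first_reference_sequence(samples, references):
    matches = [best_reference(s, references) for s in samples]
    return Counter(matches).most_common(1)[0][0]

refs = ["0000000000000000000011",   # e.g. mode M1
        "0000101011011000101000"]   # e.g. mode M2
samples = ["0000101011011000101000",
           "0000101011011000101100",
           "0000000000000000000011"]
winner = first_reference_sequence(samples, refs)
```

Here two of the three samples match the second reference most closely, so it wins the vote.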
  • step 804 the electronic device uses the reference holding posture corresponding to the first reference state sequence as the holding posture of the electronic device.
  • the reference holding posture corresponding to the first reference state sequence is the two-handed holding of the left and right sides of the terminal in mode M2
  • it can be determined that the holding posture of the terminal is holding the left and right sides of the terminal with both hands.
  • the mode corresponding to the first reference state sequence may include multiple sub-modes, for example, the sub-modes are as shown in Table 1a.
  • the electronic device can also obtain the detection values of all sensors in the sensor group, calculate the average value of the detection values of each sensor group, and finally obtain the average value of all sensor groups. The electronic device further determines which detection value range the average value falls into, and can determine which sub-mode the user's holding posture belongs to.
  • the parent mode M2 has 7 sensor groups enabled. For these 7 sensor groups, the electronic device calculates the average of the detection values of each sensor group, and finally obtains the average over all sensor groups. The electronic device determines which sensing value range this average falls into, so that the user's holding posture can be determined more accurately. By combining the sensing values of the sensors to construct sub-patterns, the user's holding posture/action can be determined more accurately, and the control operation of the terminal in response can be determined according to the holding posture/action.
  • for example, the user's emotional changes can be identified from the detection values of the grip so as to identify the user's preference for music; as another example, the user can call up an emergency call interface or trigger an alarm by firmly grasping the terminal, thereby protecting the user's personal safety.
  • the electronic device may preferentially match the states of the key sensor groups in the state sequence against the reference state sequence, to improve matching efficiency.
  • S1 and S2 in Table 2a are two of the N state sequences.
  • for example, the reference state sequence {00001 01100 01100 01010 00} of mode M2 corresponds to the key sensor identifiers of sensor group 9, sensor group 10, sensor group 12, sensor group 13, and sensor group 19. The electronic device can sequentially compare the similarity between the states of these key sensor groups in S1 and in M2, and the similarity between the states of these key sensor groups in S2 and in M2. This is because non-key points that are physically close to key points may frequently switch between 0 and 1 due to slight movement of the user's hand. It can be seen from Table 2a that S1 and S2 can both be determined to match the reference state sequence of mode M2.
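The key-point screening above can be sketched as follows. The key positions below are hypothetical 0-based translations of sensor groups 9, 10, 12, 13, and 19, and the sample sequences are illustrative: a candidate matches only if it agrees with the reference at every key position, regardless of noisy non-key positions.

```python
# Sketch: compare only the positions flagged as key sensor groups before
# (or instead of) computing a full-sequence similarity.

def matches_key_points(seq, ref, key_positions):
    """True when seq agrees with ref at every key sensor group position."""
    return all(seq[i] == ref[i] for i in key_positions)

ref_m2 = "0000101100011000101000"   # reference sequence of mode M2 (22 groups)
keys = [8, 9, 11, 12, 18]           # hypothetical 0-based key group positions

s1 = "0000101100011000101000"       # agrees at every key position
s2 = "0100101100011000101001"       # differs only at non-key positions
s3 = "0000101100001000101000"       # differs at a key position
```

s2 still matches M2 despite two flipped non-key bits, while s3 is rejected because a key position disagrees.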
  • the electronic device may further combine the sensor data of sensors such as proximity sensors, gyroscope sensors, gravity sensors, and acceleration sensors with the reference holding posture corresponding to the first reference state sequence to determine the holding posture of the terminal.
  • sensors such as proximity sensors, gyroscope sensors, gravity sensors, acceleration sensors, and the reference holding posture corresponding to the first reference state sequence to determine the holding posture of the terminal .
  • For example, the electronic device can determine that it is in the landscape state based on the sensor data collected by the gyroscope sensor and the gravity sensor, and determine that it is stationary based on the data collected by the acceleration sensor. The electronic device then further determines, according to the reference holding posture corresponding to the first reference state sequence, the two-handed holding posture while the device is stationary and in landscape orientation.
  • Alternatively, the electronic device can determine that it is in the portrait state based on the sensor data collected by the gyroscope sensor and the gravity sensor, and determine that it is accelerating based on the data collected by the acceleration sensor; it then further determines, according to the reference holding posture corresponding to the first reference state sequence, the one-handed holding posture while the device is in motion and in portrait orientation.
  • The electronic device can also determine that it is in the screen-locked state according to the proximity light sensor, and further determines, according to the reference holding posture corresponding to the first reference state sequence, the one-handed holding posture while the screen is locked.
  • When the electronic device uses the sensor data collected by the gyroscope sensor and the gravity sensor to identify its landscape or portrait state, it first needs to convert the data collected by the built-in sensors from the phone coordinate system to the earth reference coordinate system.
  • The reason is that although the various sensors built into the electronic device, such as the acceleration sensor, gyroscope, magnetometer, and orientation sensor, can perceive different movements, directions, and external environments, their data are all expressed in the device's own coordinate system; when the position or orientation of the device changes, the collected data change accordingly.
  • One way of defining the earth reference coordinate system is as follows: the positive x-axis is tangent to the ground at the phone's current location and points due east; the positive y-axis is also tangent to the ground and points toward magnetic north; the plane containing the x-axis and the y-axis is the horizontal plane; and the positive z-axis is perpendicular to the horizontal plane and points toward the sky.
  • the determination of the mobile phone coordinate system is related to the mobile phone screen.
  • One way to define the phone coordinate system is as follows: the positive X-axis points to the right from the center of the screen plane, and the opposite direction is the negative X-axis; the positive Y-axis points upward from the center of the screen plane, perpendicular to the X-axis, and the opposite direction is the negative Y-axis; the positive Z-axis points outward from the center of the screen plane, perpendicular to it, and the opposite direction is the negative Z-axis.
  • the embodiment of the present application provides a conversion formula for converting a mobile phone coordinate system to a geodetic reference coordinate system, as shown in Formula 1.
  • In Formula 1, X, Y, Z are the sensor data in the phone coordinate system, R represents the rotation matrix, and x, y, z are the sensor data in the earth reference coordinate system; that is, (x, y, z)ᵀ = R·(X, Y, Z)ᵀ.
  • R is composed of three basic rotation matrices, as shown in Formula 2.
  • The variables a, p, and r represent azimuth, pitch, and roll, respectively: azimuth is the angle between magnetic north and the Y-axis of the phone coordinate system; pitch is the angle between the X-axis of the phone coordinate system and the horizontal plane; and roll is the angle between the Y-axis of the phone coordinate system and the horizontal plane.
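  The text does not reproduce Formula 1 and Formula 2 themselves, so the following sketch assumes a common composition order for the three basic rotations (about z by azimuth, then about x by pitch, then about y by roll); the actual order in Formula 2 may differ:

```python
import math

def rotation_matrix(a, p, r):
    """Compose R from azimuth a (about z), pitch p (about x), roll r (about y).
    The rotation order Rz @ Rx @ Ry is an assumption, since Formula 2 of the
    application is not reproduced in the text."""
    ca, sa = math.cos(a), math.sin(a)
    cp, sp = math.cos(p), math.sin(p)
    cr, sr = math.cos(r), math.sin(r)
    Rz = [[ca, -sa, 0], [sa, ca, 0], [0, 0, 1]]
    Rx = [[1, 0, 0], [0, cp, -sp], [0, sp, cp]]
    Ry = [[cr, 0, sr], [0, 1, 0], [-sr, 0, cr]]

    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(matmul(Rz, Rx), Ry)

def to_earth_frame(R, X, Y, Z):
    """Apply Formula 1: map phone-frame data (X, Y, Z) to earth-frame (x, y, z)."""
    return tuple(R[i][0] * X + R[i][1] * Y + R[i][2] * Z for i in range(3))
```

With all three angles at zero, R is the identity matrix and the phone frame coincides with the earth frame, which is a quick sanity check for any chosen rotation order.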
  • After the conversion, the phone can determine its state in the earth coordinate system from the converted sensor data, for example an upright portrait state, an upright landscape state, or a portrait or landscape state with a certain tilt angle.
  • The embodiment of the present application determines the position state of the phone in the earth coordinate system from the converted gyroscope and gravity sensor data, and uses this position state to characterize the portrait or landscape state of the phone.
  • the electronic device can optimize the operation of the electronic device based on the holding posture.
  • For example, the electronic device can control interface display, haptic feedback, sound, system configuration, applications, and so on, and can trigger corresponding feedback or instructions according to different holding postures, so that the user no longer needs to operate the terminal manually; this improves the intelligence of the electronic device and the user experience.
  • For example, the electronic device may collect the user's sensor data within a set time period, determine the pattern of change of the user's holding posture during that period (such as landscape/portrait switching or common gesture commands), and control the system of the electronic device according to that pattern of change.
  • For example, the preset time period is {18:00-24:00} every day (or a larger or smaller time granularity); the high-frequency holding postures during this period are counted, and based on them the terminal adjusts its interface display (brightness, scene mode, etc.), system configuration (power consumption, memory management, etc.), and applications (automatically opening, closing, or sleeping them, etc.) to meet the needs of the terminal user and realize intelligent management of the terminal.
  • For another example, the electronic device can preset, in the holding pattern set, the correspondence between poor holding postures and reference state sequences; when such a holding posture is detected, it can trigger system alarms, prompts, and other functions.
  • The electronic device determines the first holding posture when it is held by the user at the first moment and, according to the first holding posture, controls the display screen of the electronic device to display the first interface of an application.
  • The electronic device determines the second holding posture when it is held by the user at the second moment and, according to the second holding posture, controls the display screen of the electronic device to display the second interface of the application.
  • The first holding posture is different from the second holding posture, and the second interface is different from the first interface.
  • For example, if the phone recognizes, according to the above method, that the user's holding posture within a set period before the current time (for example, 15 minutes) has been a static portrait state, and the application currently running is a video application, the phone can control the display interface of the display screen to switch to a large-screen display according to the recognition result of the holding posture, as shown in B in FIG. 11.
  • Optionally, the phone preferentially allocates available network resources to the video playback application to avoid stuttering during video playback.
  • As another example, the phone automatically switches tracks, that is, skips to the next piece of music.
  • In another scenario, the on-board processor of a vehicle can obtain data from the vehicle's steering wheel and determine the driver's holding posture on the steering wheel according to the above method. Further, the vehicle can obtain the user's heart rate, blood pressure, and other real-time health data from devices such as the user's wristband or mobile phone, and combine these health data with the steering-wheel holding data to evaluate the driver's mood, stress, and wakefulness, so as to prompt the driver accordingly. As shown in FIG. 12, the on-board processor determines from the sensor data of the steering wheel 1202 that the user is holding it with both hands, and determines from the heart-rhythm data obtained from the wristband 1203 that the user's heart rate is low. The on-board processor therefore displays the warning message "Please enter the service area to rest as soon as possible; do not drive while fatigued" on the display screen 1201 and gives a voice warning to the driver through the loudspeaker.
  • The electronic device can use the N state sequences obtained in step 803 to update the reference state sequences corresponding to the patterns in the holding pattern set.
  • Update method 1: update the reference state sequence corresponding to a pattern in the pattern set.
  • In the matching process, the electronic device calculates, for each of the N state sequences S1, S2, S3, S4, ..., Si, ..., SN-1, SN, the first reference state sequence with the highest similarity. The electronic device can then select, from the N state sequences, the first state sequence whose similarity to the first reference state sequence is greater than a set threshold (for example, 90%) and which occurs most often, and replace the first reference state sequence with this first state sequence.
  • For example, S1, whose similarity to the reference state sequence {00001 01011 01100 01010 00} of mode M2 is 90%, appears p times, while S2, whose similarity to that reference state sequence is 100%, appears p-10 times. In this case, the electronic device may update the reference state sequence corresponding to mode M2 in the holding pattern set of Table 1; after the update, the reference state sequence corresponding to mode M2 is the sequence {00000 00011 01100 00011 01} corresponding to S1.
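  Update method 1 can be sketched as follows. The per-position similarity measure and the 90% threshold follow the example in the text, while the function names are illustrative:

```python
from collections import Counter

def similarity(a, b):
    """Fraction of positions at which two equal-length state sequences agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def update_reference(observed, reference, threshold=0.9):
    """Sketch of update method 1: among observed state sequences whose
    similarity to the current reference exceeds the threshold, pick the one
    that occurs most often and use it as the new reference."""
    similar = [s for s in observed if similarity(s, reference) > threshold]
    if not similar:
        return reference          # nothing similar enough: keep the old reference
    return Counter(similar).most_common(1)[0][0]
```

The replacement only happens when a sufficiently similar sequence dominates the observations, so a reference is never overwritten by outliers.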
  • The updated holding posture pattern set is shown in Table 4.
  • Table 4:
    Mode ID | Terminal holding posture | Reference state sequence of the sensor units
    M1 | Hold the top and bottom sides of the terminal with both hands | 00000 00000 00000 00000 11
    M2 | Hold the left and right sides of the terminal with both hands | 00000 00011 01100 00011 01
    M3 | Hold the upper side of the terminal with one hand (right hand) | ...
    M4 | Hold the bottom side of the terminal with one hand (right hand) | ...
    M5 | Hold the upper side of the terminal with one hand (left hand) | ...
    M6 | Hold the bottom side of the terminal with one hand (left hand) | ...
  • In this way, the pattern set can be adjusted according to the user's operating habits, so that the user's holding posture can be matched more accurately the next time the device is used.
  • Optionally, the electronic device can determine the state changes of the non-key sensor groups from the matching results of historical state sequences. If the number of state changes of a non-key sensor group is less than the third threshold, that is, the state of the non-key sensor group remains essentially unchanged within a certain number of matches or a certain period of time, the corresponding states in the reference state sequence of M2 can be replaced with the states of these non-key sensor groups, and the key sensor group identifiers can be updated.
  • In this way, patterns can be matched more accurately according to the usage habits of the same terminal user, achieving more accurate use.
  • Update method 2: add a pattern to the pattern set.
  • In the matching process, the electronic device calculates, for each of the N state sequences S1, S2, S3, S4, ..., Si, ..., SN-1, SN, the first reference state sequence with the highest similarity. The electronic device can then select from the N state sequences the second state sequence whose similarity is less than a fourth threshold (for example, 70%) and which occurs most often.
  • The electronic device determines the corresponding terminal holding posture according to the second state sequence, thereby determining the mapping relationship between that holding posture and the second state sequence, and adds a new pattern to the holding posture pattern set; the newly added pattern includes the mapping relationship between the above terminal holding posture and the second state sequence.
  • For example, S3, whose similarity to the reference state sequence {00001 01011 01100 01010 00} of pattern M2 is 68%, appears p times, while S2, whose similarity to that reference state sequence is 100%, appears p-10 times. The electronic device may further determine the terminal holding posture corresponding to S3. Assuming that the posture corresponding to S3 is holding the lower-left side of the terminal with one hand, the electronic device may add a new mode M7 to the holding posture pattern set of Table 1; the updated holding posture pattern set is shown in Table 6.
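  Update method 2 can be sketched in the same style. The 70% threshold follows the text's example; the mode-naming scheme and the dictionary representation of the pattern set are assumptions:

```python
from collections import Counter

def similarity(a, b):
    """Fraction of positions at which two equal-length state sequences agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def maybe_add_mode(mode_set, observed, threshold=0.7):
    """Sketch of update method 2. mode_set maps mode IDs to reference state
    sequences. A sequence whose best similarity to every existing reference is
    below the threshold is a candidate; the most frequent candidate is
    registered as a new mode. Returns the (possibly extended) mode set."""
    candidates = [s for s in observed
                  if max(similarity(s, ref) for ref in mode_set.values()) < threshold]
    if not candidates:
        return mode_set
    new_seq, _ = Counter(candidates).most_common(1)[0]
    new_id = f"M{len(mode_set) + 1}"      # illustrative naming only
    return {**mode_set, new_id: new_seq}
```

In a full implementation the new mode would also carry the holding posture determined for the new sequence, which this sketch omits.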
  • In this way, the electronic device can add patterns that occur frequently but do not originally belong to the preset pattern set to the holding pattern set, so that the pattern set can store more of the user's distinct holding postures and improve the accuracy of the next holding-posture match.
  • Update method 3: delete a pattern from the pattern set.
  • The electronic device can count the number of times each pattern in the holding pattern set is matched successfully, and delete the patterns whose number of successful matches is less than a fifth threshold (for example, 10 times).
  • the electronic device counts the number of successful matches for each mode of the holding pattern set, as shown in Table 7.
  • The electronic device can delete the pattern M1, whose number of matches is less than 10, or delete the patterns ranked after the J-th place, so that J common holding patterns are always maintained in the holding pattern set M.
  • If multiple holding patterns are tied at the J-th place, these patterns are retained for the time being until a new round of sorting determines whether to delete them. Pattern deletion or addition may be performed in real time, or the deletion and update operations can be performed periodically; periodically deleting patterns that are used less frequently helps free storage space.
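  Update method 3 can be sketched as follows. The threshold of 10 matches and the top-J retention rule, with ties at the J-th place kept, follow the text; the function name and dictionary representation are illustrative:

```python
def prune_modes(match_counts, min_matches=10, keep_top=None):
    """Sketch of update method 3. match_counts maps mode IDs to the number of
    successful matches. Modes matched fewer than min_matches times are
    dropped; optionally only the keep_top (J) most frequently matched modes
    are kept, retaining ties at the J-th place."""
    kept = {m: c for m, c in match_counts.items() if c >= min_matches}
    if keep_top is not None and len(kept) > keep_top:
        counts = sorted(kept.values(), reverse=True)
        cutoff = counts[keep_top - 1]     # count at the J-th place
        kept = {m: c for m, c in kept.items() if c >= cutoff}
    return kept
```

Running this periodically, rather than on every match, matches the text's suggestion and keeps the bookkeeping cheap.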
  • In addition, by sorting the patterns over time, the above method identifies the patterns with a higher number of successful matches; in step 804, these patterns can be matched first, which improves matching efficiency to a certain extent.
  • the embodiments of the present application disclose an electronic device.
  • The electronic device may include a touch screen 1301, where the touch screen 1301 includes a touch panel 1306 and a display screen 1307.
  • the above-mentioned devices may be connected through one or more communication buses 1305.
  • The one or more computer programs 1304 are stored in the aforementioned memory 1303 and are configured to be executed by the one or more processors 1302; the one or more computer programs 1304 include instructions, and the instructions can be used to perform the method shown in FIG. 6.
  • The embodiment of the present application also provides a computer-readable storage medium that stores computer instructions; when the computer instructions run on an electronic device, the electronic device executes the above related method steps to implement the method in the above embodiment.
  • the embodiments of the present application also provide a computer program product, which when the computer program product runs on a computer, causes the computer to execute the above-mentioned related steps, so as to implement the method in the above-mentioned embodiment.
  • the embodiments of the present application also provide a device.
  • the device may specifically be a chip, component or module.
  • the device may include a processor and a memory connected to each other.
  • the memory is used to store computer execution instructions.
  • the processor can execute the computer-executable instructions stored in the memory, so that the chip executes the methods in the foregoing method embodiments.
  • The electronic devices, computer storage media, computer program products, or chips provided in the embodiments of the present application are all used to execute the corresponding methods provided above. Therefore, for the beneficial effects that can be achieved, refer to the beneficial effects of the corresponding methods provided above; details are not repeated here.
  • the disclosed device and method can be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of modules or units is only a logical function division.
  • There may be other division methods; for example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the displayed or discussed mutual couplings or direct couplings or communication connections may be indirect couplings or communication connections through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate parts may or may not be physically separate, and the parts displayed as units may be one physical unit or multiple physical units, that is, they may be located in one place, or they may be distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • The technical solutions of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product stored in a storage medium, which includes several instructions to make a device (which may be a single-chip microcomputer, a chip, etc.) or a processor execute all or part of the steps of the methods of the various embodiments of the present application.
  • The foregoing storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.


Abstract

This application provides a holding posture detection method and an electronic device. The method includes: when a user holds the electronic device, the electronic device can obtain in real time the sensor data collected by the sensor units on the electronic device; the electronic device then determines, according to the sensor data and the positions of the sensors, N state sequences corresponding to the sensor units, where a state sequence reflects the state of each sensor unit. Because a reference state sequence set is preset in the electronic device, the electronic device can match the N state sequences against the reference state sequences in the preset set, determine from them the reference state sequence with the greatest similarity, and finally take the reference holding posture corresponding to that reference state sequence as the holding posture of the electronic device. The method is used to accurately identify the holding posture of the electronic device so as to provide more refined services on that basis, for example alerting the user when a poor holding posture is detected, thereby improving the user experience.

Description

Holding posture detection method and electronic device

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to Chinese Patent Application No. 202010085464.2, filed with the China National Intellectual Property Administration on January 31, 2020 and entitled "Holding Posture Detection Method and Electronic Device", which is incorporated herein by reference in its entirety.

TECHNICAL FIELD

This application relates to the field of terminal technologies, and in particular to a holding posture detection method and an electronic device.

BACKGROUND

In the field of intelligent driving, with the rapid development of artificial intelligence, assisted driving and autonomous driving have emerged. With an assisted-driving or autonomous-driving function enabled, a moving vehicle can perceive the driver's driving operations or the obstacles around the vehicle to realize intelligent driving. In the field of intelligent terminals, with the development of such terminals, users rely more and more on electronic devices such as mobile phones and interact with terminals in a variety of ways. At present, more and more electronic devices integrate touch screens; because capacitive touch screens have high sensitivity and fast response, they are widely used in various fields, especially in electronic devices such as smartphones, and bring users a good experience. In practice, however, it is found that users may form their own usage habits while using electronic devices or driving vehicles. For example, a user may habitually hold the device with the right hand, or operate the smartphone frequently during a fixed time period; as another example, the pressure with which a user grips the steering wheel in a normal driving state usually falls within a fixed value range.

At present, although a smartphone can simply identify whether the user is holding it with the left or right hand and whether it is held horizontally or vertically, it cannot accurately identify in real time the specific positions at which the user holds the terminal or the continuous changes of the holding posture. Therefore, it cannot accurately identify the user's operation intention for the terminal, which makes it inconvenient for the electronic device to provide more refined services.
SUMMARY

This application provides a holding posture detection method and an electronic device, which are used to accurately identify the user's holding posture on the terminal so as to provide more refined services based on that posture and improve the user experience.

According to a first aspect, an embodiment of this application provides a holding posture detection method, which can be applied to an electronic device. The method includes: the electronic device obtains feature information of M sensor units of the electronic device at N sampling moments, where the feature information may include sensor identifiers, sensor data, and the like. The electronic device can determine, according to the feature information, N state sequences corresponding to the M sensor units at the N sampling moments. The electronic device matches the N state sequences against K reference state sequences in a preset reference state sequence set, determines from them the first reference state sequence with the greatest similarity, and then determines that the reference holding posture corresponding to the first reference state sequence is the holding posture of the electronic device.

In this embodiment of this application, the above method can identify the user's holding posture on the terminal more accurately, so that more refined services can be provided based on that posture and the user experience improved.

In a possible implementation, the M sensor units on the electronic device may be divided in advance into L sensor groups, and the electronic device may determine the sensor data of the sensor units in each sensor group according to the sensor unit identifiers in the obtained feature information. For any one of the N sampling moments, the electronic device compares the sensor data of the sensor units in the L sensor groups with preset thresholds and determines the states of the L sensor groups according to the comparison results; it finally generates the state sequence corresponding to the M sensor units at that sampling moment, where the state sequence includes the states of the L sensor groups.

In a possible implementation, for a first sensor group among the L sensor groups, where the first sensor group is any one of the L sensor groups: when the proportion of sensor units with detection values in the first sensor group is greater than a first threshold, the state of the first sensor group is determined to be valid, and otherwise invalid, where the proportion is the ratio between the total number U of sensor units with detection values and the total number V of sensor units in the first sensor group; and/or, when the detection values of the sensor units in the first sensor group are greater than a second threshold, the state of the first sensor group is determined to be valid, and otherwise invalid.

In this embodiment, following the above method helps improve the accuracy of the determined sensor group states.

In a possible implementation, when N is greater than 1, for any one of the N state sequences, the electronic device calculates K similarities between the state sequence and the K reference state sequences, and determines from the K similarities the second reference state sequence with the greatest similarity; from the N second reference state sequences corresponding to the N state sequences, it determines the second reference state sequence with the highest probability of occurrence as the first reference state sequence.

In a possible implementation, the electronic device may determine the similarity between the N state sequences and the K reference state sequences by calculating the similarity between the states corresponding to the key sensor groups in the N state sequences and the states corresponding to the key sensor groups in the K reference state sequences.

In this embodiment, the similarity comparison performed by the electronic device helps improve the accuracy of the terminal posture recognition result.

In a possible implementation, after the electronic device takes the reference holding posture corresponding to the first reference state sequence as the holding posture of the electronic device, it may update the reference state sequences in the reference state sequence set. One possible update method is: the electronic device determines, from the N state sequences, the first state sequence whose similarity to the first reference state sequence is greater than a third threshold and which occurs most often, and replaces the first reference state sequence in the preset reference state sequence set with the first state sequence.

In this embodiment, the electronic device iterates and updates the preset reference state sequence set through self-learning, which helps improve the accuracy of the terminal posture recognition result.

In a possible implementation, the electronic device determines, from the N state sequences, the second state sequence whose similarity to the first reference state sequence is less than a fourth threshold and which occurs most often; determines, according to the second state sequence, the first holding posture corresponding to the second state sequence; and adds the second state sequence corresponding to the first holding posture to the preset reference state sequence set.

In this embodiment, the electronic device iterates and updates the preset reference state sequence set through self-learning, which helps improve the accuracy of the terminal posture recognition result.

In a possible implementation, the electronic device calculates the probability of occurrence, within a set time period, of the reference holding posture corresponding to each reference state sequence in the reference state sequence set, and according to the probability deletes from the preset reference state sequence set the reference state sequences whose corresponding reference holding postures have a probability less than a fifth threshold.

In this embodiment, the electronic device iterates and updates the preset reference state sequence set through self-learning, which helps improve the accuracy of the terminal posture recognition result.

In a possible implementation, after the electronic device takes the reference holding posture corresponding to the first reference state sequence as the holding posture of the electronic device, the electronic device determines the user's operation intention according to multiple holding postures of the electronic device determined within a third set time period, and configures the system resources of the electronic device or controls its display interface according to the operation intention.

In this embodiment, the electronic device predicts the user's operation intention based on the holding posture recognition result, which helps optimize resource configuration, improve the utilization of system resources, and improve the intelligence of the electronic device.

In a possible implementation, after the electronic device takes the reference holding posture corresponding to the first reference state sequence as the holding posture of the electronic device, when the electronic device determines that its holding posture is a preset poor holding posture, it outputs prompt information used to remind the user to correct the holding posture.

In this embodiment, the electronic device can remind the user when it determines that the user's holding posture is a poor one, which helps improve the user experience.

According to a second aspect, an embodiment of this application provides a display method applied to an electronic device provided with sensor units. The method includes: the electronic device determines a first holding posture when the electronic device is held by the user at a first moment and, according to the first holding posture, controls the display screen of the electronic device to display a first interface of an application; the electronic device determines a second holding posture when the electronic device is held by the user at a second moment and, according to the second holding posture, controls the display screen of the electronic device to display a second interface of the application, where the first holding posture is different from the second holding posture, and the second interface is different from the first interface.

In this embodiment of this application, the electronic device can display different interface content under the user's different holding postures, thereby improving the intelligence of the device and the user experience.

In a possible design, the electronic device may determine the first holding posture and the second holding posture according to the method provided in the first aspect; details are not repeated here.

According to a third aspect, an embodiment of this application provides an electronic device, including a sensor, a touch screen, a processor, and a memory, where the memory is used to store one or more computer programs; when the one or more computer programs stored in the memory are executed by the processor, the electronic device is enabled to implement the method of any possible design of any of the above aspects.

According to a fourth aspect, an embodiment of this application further provides an apparatus, which includes modules/units that perform the method of any possible design of any of the above aspects. These modules/units may be implemented by hardware, or by hardware executing corresponding software.

According to a fifth aspect, an embodiment of this application further provides a computer-readable storage medium that includes a computer program; when the computer program runs on an electronic device, the electronic device is caused to perform the method of any possible design of any of the above aspects.

According to a sixth aspect, an embodiment of this application further provides a computer program product; when the computer program product runs on a terminal, the electronic device is caused to perform the method of any possible design of any of the above aspects.

According to a seventh aspect, an embodiment of this application further provides a chip, which is coupled to a memory and configured to execute a computer program stored in the memory, so as to perform the method of any possible design of any of the above aspects.
BRIEF DESCRIPTION OF DRAWINGS

FIG. 1 is a schematic structural diagram of a mobile phone according to an embodiment of this application;

FIG. 2 is a schematic structural diagram of an Android operating system according to an embodiment of this application;

FIG. 3 is a schematic structural diagram of an electronic device according to an embodiment of this application;

FIG. 4A is a schematic diagram of a sensor deployment structure of an electronic device according to an embodiment of this application;

FIG. 4B is a schematic diagram of a sensor encoding manner according to an embodiment of this application;

FIG. 5 is a schematic structural diagram of an applicable scenario according to an embodiment of this application;

FIG. 6 is a schematic flowchart of a method for constructing a holding posture pattern set according to an embodiment of this application;

FIG. 7A and FIG. 7B are schematic diagrams of a holding manner according to an embodiment of this application;

FIG. 8 is a schematic flowchart of a holding posture detection method according to an embodiment of this application;

FIG. 9A to FIG. 9C are schematic diagrams of holding scenarios according to an embodiment of this application;

FIG. 10 is a schematic diagram of coordinate system switching according to an embodiment of this application;

FIG. 11 is a schematic diagram of a group of mobile phone interfaces according to an embodiment of this application;

FIG. 12 is a schematic diagram of an in-vehicle interface according to an embodiment of this application;

FIG. 13 is a schematic structural diagram of a terminal according to an embodiment of this application.

DESCRIPTION OF EMBODIMENTS

To make the objectives, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application are described in detail below with reference to the accompanying drawings and specific implementations.
At present, although an electronic device can determine the user's touch area by obtaining the current sensor data of its touch sensors, and determine the device's current holding posture from its current horizontal or vertical holding state and the touch area, this method cannot accurately identify the holding posture because of physical differences between different users' hands and differences in usage habits. Therefore, embodiments of this application provide a holding posture detection method and an electronic device. The method processes the sensor data of the sensors into a state sequence and matches this state sequence against the K reference state sequences in a preset reference state sequence set, thereby accurately determining the holding posture.

The holding posture detection method provided in the embodiments of this application can be applied to electronic devices. In some embodiments, the electronic device may be a portable terminal that includes functions such as a personal digital assistant and/or a music player, such as a mobile phone, a tablet computer, a wearable device with wireless communication capability (such as a smart watch), or an in-vehicle device. Exemplary embodiments of portable terminals include, but are not limited to, portable terminals running

Figure PCTCN2020122954-appb-000001

or other operating systems. The above portable terminal may also be, for example, a laptop computer with a touch-sensitive surface (for example, a touch panel). It should also be understood that, in some other embodiments, the terminal may be a desktop computer with a touch-sensitive surface (for example, a touch panel).

The following uses a mobile phone as an example of the electronic device; FIG. 1 shows a schematic structural diagram of the mobile phone 100.
The mobile phone 100 may include a processor 110, an external memory interface 120, an internal memory 121, a USB interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 151, a wireless communication module 152, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a SIM card interface 195, and the like. The sensor module 180 may include a gyroscope sensor 180A, an acceleration sensor 180B, a proximity light sensor 180G, a fingerprint sensor 180H, and a touch sensor 180K (of course, the mobile phone 100 may also include other sensors, such as a temperature sensor, a pressure sensor, a distance sensor, a magnetic sensor, an ambient light sensor, a barometric pressure sensor, and a bone conduction sensor, not shown in the figure).

It can be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the mobile phone 100. In some other embodiments of this application, the mobile phone 100 may include more or fewer components than shown, combine some components, split some components, or have a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.

The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent components or may be integrated into one or more processors. The controller may be the nerve center and command center of the mobile phone 100. The controller can generate operation control signals according to instruction operation codes and timing signals, and complete the control of instruction fetching and execution.

A memory may also be provided in the processor 110 to store instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated access, reduces the waiting time of the processor 110, and thus improves system efficiency.

The processor 110 can run the holding posture detection method provided in the embodiments of this application to accurately identify the user's holding posture on the terminal, so that the terminal can provide more refined services based on that posture and improve the user experience. The processor 110 may include different components; for example, when a CPU and a GPU are integrated, the CPU and GPU may cooperate to execute the holding posture detection method provided in the embodiments of this application, for example with part of the algorithm executed by the CPU and another part by the GPU, to obtain faster processing efficiency.

The display screen 194 is used to display images, videos, and the like. The display screen 194 includes a display panel, which may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), and so on. In some embodiments, the mobile phone 100 may include 1 or N display screens 194, where N is a positive integer greater than 1. In this embodiment of this application, the touch sensor and/or pressure sensor on the display screen 194 can collect the user's touch operations, and the touch sensor and/or pressure sensor can pass the detected sensor data to the processor 110 so that the processor 110 can determine the states corresponding to the sensor units.

In this embodiment of this application, the display screen 194 may be one integral flexible display, or may be a spliced display composed of two rigid screens and one flexible screen located between the two rigid screens. After the processor 110 runs the holding posture detection method provided in the embodiments of this application, the processor 110 can control the display interface on the display screen 194 based on the terminal holding posture.

The camera 193 (a front camera or a rear camera, or a camera that can serve as both a front and a rear camera) is used to capture static images or videos. Generally, the camera 193 may include photosensitive elements such as a lens group and an image sensor, where the lens group includes multiple lenses (convex or concave) for collecting the light signals reflected by the object to be photographed and passing the collected light signals to the image sensor. The image sensor generates an original image of the object to be photographed according to the light signals.

The internal memory 121 can be used to store computer-executable program code, which includes instructions. By running the instructions stored in the internal memory 121, the processor 110 executes the various functional applications and data processing of the mobile phone 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store the operating system and the code of applications (such as a camera application or a WeChat application). The data storage area may store data created during the use of the mobile phone 100 (such as data collected by sensors and the preset reference state sequence set).

The internal memory 121 may also store the code of the terminal holding posture detection algorithm provided in the embodiments of this application. When the code of the terminal holding posture detection algorithm stored in the internal memory 121 is run by the processor 110, the processor 110 can control the display interface on the display screen 194.

In addition, the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).

Of course, the code of the terminal holding posture detection algorithm provided in the embodiments of this application may also be stored in an external memory. In this case, the processor 110 can run the code of the terminal holding posture detection algorithm stored in the external memory through the external memory interface 120; the processor 110 determines the holding posture of the electronic device and then controls the display interface on the display screen 194 according to that holding posture.
The functions of the sensor module 180 are described below.

The gyroscope sensor 180A can be used to determine the motion posture of the mobile phone 100. In some embodiments, the angular velocities of the mobile phone 100 around three axes (that is, the x, y, and z axes) can be determined by the gyroscope sensor 180A. That is, the gyroscope sensor 180A can be used to detect the current motion state of the mobile phone 100, for example shaking or static, or landscape or portrait.

When the display screen in this embodiment of this application is a foldable screen, the gyroscope sensor 180A can be used to detect folding or unfolding operations acting on the display screen 194. The gyroscope sensor 180A can report the detected folding or unfolding operation to the processor 110 as an event to determine the folded or unfolded state of the display screen 194.

The acceleration sensor 180B can detect the magnitude of the acceleration of the mobile phone 100 in various directions (generally three axes). When the display screen in this embodiment of this application is a foldable screen, the acceleration sensor 180B can be used to detect folding or unfolding operations acting on the display screen 194. The acceleration sensor 180B can report the detected folding or unfolding operation to the processor 110 as an event to determine the folded or unfolded state of the display screen 194.

The pressure sensor 180C is used to sense pressure signals and can convert pressure signals into electrical signals. In some embodiments, the pressure sensor 180C may be arranged on the display screen 194 or on part of the housing. There are many types of pressure sensors 180C, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may include at least two parallel plates with conductive material. When a force acts on the pressure sensor 180C, the capacitance between the electrodes changes, and the mobile phone 100 determines the intensity of the pressure according to the change in capacitance. When a touch operation acts on the display screen 194, the mobile phone 100 detects the intensity of the touch operation according to the pressure sensor 180C. The mobile phone 100 can also calculate the touch position according to the detection signal of the pressure sensor 180C. In some embodiments, touch operations acting on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is greater than a first pressure threshold acts on both sides of the housing, an instruction to view unread messages is executed.

The proximity light sensor 180G may include, for example, a light-emitting diode (LED) and a light detector such as a photodiode. The light-emitting diode may be an infrared LED. The phone emits infrared light outward through the light-emitting diode and uses the photodiode to detect infrared light reflected from nearby objects. When sufficient reflected light is detected, it can be determined that there is an object near the phone; when insufficient reflected light is detected, the phone can determine that there is no object nearby. When the display screen in this embodiment of this application is a non-foldable screen, the proximity light sensor 180G can be arranged on the upper side of the screen of the display screen 194 and can detect, according to the optical path difference of the infrared signal, whether a face is close to the screen. When the display screen in this embodiment of this application is a foldable screen, the proximity light sensor 180G can be arranged on the first screen of the foldable display screen 194 and can detect, according to the optical path difference of the infrared signal, the folding or unfolding angle between the first screen and the second screen.

The fingerprint sensor 180H is used to collect fingerprints. The mobile phone 100 can use the collected fingerprint characteristics to implement fingerprint unlocking, access to application locks, fingerprint photographing, fingerprint call answering, and so on.

The touch sensor 180K is also called a "touch panel". The touch sensor 180K may be arranged on the display screen 194 or on part of the housing; the touch sensor 180K and the display screen 194 form a touch screen, also called a "touch-controlled screen". The touch sensor 180K is used to detect touch operations acting on or near it. The touch sensor can pass the detected sensor data to the processor 110, so that the processor 110 determines the states of the sensor units according to the sensor data and then determines the state sequence corresponding to the sensor units of the electronic device. Visual output related to the touch operation can be provided through the display screen 194. In some other embodiments, the touch sensor 180K may also be arranged on the surface of the mobile phone 100, at a position different from that of the display screen 194.

For example, the display screen 194 of the mobile phone 100 displays a home screen that includes icons of multiple applications (such as a camera application and a WeChat application). The user taps the icon of the camera application on the home screen through the touch sensor 180K, which triggers the processor 110 to start the camera application and turn on the camera 193. The display screen 194 displays the interface of the camera application, for example the viewfinder interface.
The wireless communication function of the mobile phone 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 151, the wireless communication module 152, the modem processor, the baseband processor, and so on.

The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the mobile phone 100 can be used to cover a single or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization. For example, the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In some other embodiments, the antennas can be used in combination with tuning switches.

The mobile communication module 151 can provide wireless communication solutions applied to the mobile phone 100, including 2G/3G/4G/5G. The mobile communication module 151 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and the like. The mobile communication module 151 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 151 can also amplify the signal modulated by the modem processor and convert it into electromagnetic waves for radiation through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 151 may be arranged in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 151 and at least some modules of the processor 110 may be arranged in the same component.

The modem processor may include a modulator and a demodulator. The modulator is used to modulate the low-frequency baseband signal to be sent into a medium-high-frequency signal. The demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal, which the demodulator then passes to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor, which outputs a sound signal through an audio device (not limited to the speaker 170A and the receiver 170B) or displays an image or video through the display screen 194. In some embodiments, the modem processor may be an independent component. In other embodiments, the modem processor may be independent of the processor 110 and arranged in the same component as the mobile communication module 151 or other functional modules.

The wireless communication module 152 can provide wireless communication solutions applied to the mobile phone 100, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technologies. The wireless communication module 152 may be one or more components integrating at least one communication processing module. The wireless communication module 152 receives electromagnetic waves via the antenna 2, performs frequency modulation and filtering on the electromagnetic wave signals, and sends the processed signals to the processor 110. The wireless communication module 152 can also receive the signal to be sent from the processor 110, modulate and amplify it, and convert it into electromagnetic waves for radiation through the antenna 2.

In addition, the mobile phone 100 can implement audio functions, such as music playback and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and so on. The mobile phone 100 can receive input from the button 190 and generate key signal input related to user settings and function control of the mobile phone 100. The mobile phone 100 can use the motor 191 to generate vibration prompts (such as incoming call vibration prompts). The indicator 192 in the mobile phone 100 may be an indicator light, which can be used to indicate the charging state and battery changes, and can also be used to indicate messages, missed calls, notifications, and so on. The SIM card interface 195 in the mobile phone 100 is used to connect a SIM card. The SIM card can be inserted into the SIM card interface 195 or pulled out of the SIM card interface 195 to achieve contact with and separation from the mobile phone 100.

It should be understood that in practical applications, the mobile phone 100 may include more or fewer components than shown in FIG. 1, which is not limited in the embodiments of this application. The illustrated mobile phone 100 is only an example; the mobile phone 100 may have more or fewer components than shown in the figure, may combine two or more components, or may have a different component configuration. The various components shown in the figure can be implemented in hardware, software, or a combination of hardware and software that includes one or more signal processing and/or application-specific integrated circuits.
The software system of the electronic device may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservices architecture, or a cloud architecture. In the embodiments of the present invention, the Android system with a layered architecture is taken as an example to illustrate the software structure of the electronic device. FIG. 2 is a block diagram of the software structure of the electronic device according to an embodiment of the present invention.

The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.

The application layer may include a series of application packages.

As shown in FIG. 2, the application packages may include applications such as Phone, Camera, Gallery, Calendar, Call, Map, Navigation, WLAN, Bluetooth, Music, Video, and SMS.

The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer. The application framework layer includes some predefined functions.

As shown in FIG. 2, the application framework layer may include a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, and so on.

The window manager is used to manage window programs. The window manager can obtain the display size, determine whether there is a status bar, lock the screen, capture the screen, and so on.

The content provider is used to store and retrieve data and make these data accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and so on.

The view system includes visual controls, such as controls for displaying text and controls for displaying pictures. The view system can be used to build applications. A display interface may consist of one or more views. For example, a display interface that includes an SMS notification icon may include a view for displaying text and a view for displaying pictures.

The telephony manager is used to provide the communication functions of the electronic device, for example the management of call states (including connected, hung up, and so on).

The resource manager provides various resources for applications, such as localized strings, icons, pictures, layout files, and video files.

The notification manager enables applications to display notification information in the status bar and can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction. For example, the notification manager is used to notify download completion, message reminders, and so on. The notification manager may also present notifications in the top status bar of the system in the form of charts or scroll-bar text, such as notifications of applications running in the background, or notifications that appear on the screen in the form of dialog windows. For example, text information is prompted in the status bar, a prompt tone sounds, the electronic device vibrates, or the indicator light flashes.

The Android runtime includes core libraries and a virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.

The core libraries consist of two parts: one part is the functional functions that the Java language needs to call, and the other part is the core libraries of Android.

The application layer and the application framework layer run in the virtual machine. The virtual machine executes the Java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.

The system libraries may include multiple functional modules, for example a surface manager, media libraries, a three-dimensional graphics processing library (for example, OpenGL ES), and a 2D graphics engine (for example, SGL).

The surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.

The media libraries support playback and recording of multiple common audio and video formats, as well as static image files. The media libraries can support multiple audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.

The three-dimensional graphics processing library is used to implement three-dimensional graphics drawing, image rendering, compositing, layer processing, and so on.

The 2D graphics engine is a drawing engine for 2D drawing.

The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver. The hardware may refer to various sensors, such as the acceleration sensor, gyroscope sensor, touch sensor, and pressure sensor involved in the embodiments of this application.
以上图1和图2分别为本申请实施例适用的电子设备的硬件结构和软件结构,下面结合本申请实施例的握持姿态检测方法,示例性说明该电子设备的软件以及硬件的工作流程。
作为一种示例,硬件层中的传感器可以采集数据。例如,硬件层中的陀螺仪传感器可以检测到显示屏是否处于横屏状态,硬件层中的触摸传感器可以检测到用户在显示区域以及设备壳体上的操作,然后处理器110从电子设备的各个传感器单元获取在N个采样时刻所采集的传感器数据,利用传感器数据确定出终端的握持姿态。例如,电子设备的硬件层检测到用户的触摸操作,触摸传感器180K同时采集传感器数据,该触摸操作触发产生相应的硬件中断,该硬件中断被发送给内核层,经由内核层发送给系统库。该系统库根据该传感器数据确定传感器单元的状态,进而确定出电子设备的全部传感器单元对应的状态序列,系统库将该状态序列与预设的参考状态序列集合中的参考状态序列进行匹配,根据匹配出的第一参考状态序列对应的参考握持姿态,确定与该触摸操作对应的当前握持姿态。
本申请实施例所提供的握持姿态检测方法并不局限于具有传统显示屏的电子设备，同样适用于具有折叠屏、各种异型屏或全面屏的电子设备，如图3所示。示例性地，该电子设备的显示屏可以是如图3中的(a)所示的曲面屏，曲面屏的边缘301有一定的曲率；再比如，该电子设备的显示屏可以是如图3中的(b)和(c)所示的折叠屏，图3中的(b)为折叠屏处于半折叠状态，图3中的(c)为折叠屏处于完全折叠状态，当折叠屏处于半折叠状态或者完全折叠状态时，可弯折区域303为折叠屏的边缘显示区域。
本申请实施例中,电子设备的壳体和显示屏均可以设置传感器单元,例如触摸传感器180K、压力传感器180C、接近光线传感器180G等。示例性地,如图4A所示,电子设备的正面(显示屏)、背面、以及上下左右侧面均可以部署有传感器单元。本申请实施例中,预先可以对终端上的全部传感器单元进行编码,具体地,每个传感器单元可用数字或坐标进行编码,用以指示传感器单元的位置。示意性地,图4B示出了一种传感器单元编码方式。其中,每一个格子表示一个传感器单元,采用二维坐标对每个传感器单元进行编码,坐标值(Xm,Yn)唯一指示一个传感器单元的位置。
图4B中，当传感器单元检测到数值时，传感器单元的状态(status)可以用1表示，当传感器单元未检测到数值时，传感器单元的状态(status)可以用0表示。例如，当触摸传感器的检测值(value)为123，则表示该触摸传感器被用户触摸到，因此传感器单元的状态可以用1表示；当触摸传感器的检测值(value)为0，则表示该触摸传感器没有被用户触摸到，因此该传感器单元的状态可以用0表示。
在一种可能的实现中,本申请实施例中,还可以预先将终端上的全部M个传感器单元划分为多个传感器组。示例性地,图4B中,一个粗框黑线内的所有传感器单元构成一个传感器组,图4B中,示意性地示出了22个传感器组。需要说明的是,本领域技术人员可以根据电子设备的类型,以及实际需要划分为不同数量的传感器组,或者不划分传感器组,本申请实施例对此并不作限定。
另外，传感器组的状态可以用0或1表示，传感器组的状态是由该传感器组中的所有传感器单元的状态决定的。示例性地，若该传感器组中的传感器单元为触摸传感器，当该传感器组中存在检测值的传感器单元的占比大于第一阈值时，则确定该传感器组的状态为1，否则为0；若该传感器组中的传感器单元为触摸传感器和压力传感器，当该传感器组中存在检测值的传感器单元的占比小于该第一阈值，但该传感器组中有u个传感器单元的检测值大于第二阈值时，则确定该传感器组的状态为1，否则为0，其中u为正整数。也就是说，该传感器组中虽然只有少部分传感器被用户触摸到，但检测到用户的压力值很大，所以仍然确定该传感器组的状态为1。
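作为一种示意性的草图（并非本申请实施例的限定实现），上述根据传感器组内各传感器单元的检测值确定传感器组状态的逻辑可以用如下Python代码表示，其中第一阈值、第二阈值以及u的取值均为假设的示例参数：

```python
def group_state(values, first_threshold=0.5, second_threshold=20, u=1):
    """values为某一传感器组内全部传感器单元的检测值列表，
    返回该传感器组的状态：1（有效状态）或0（无效状态）。"""
    total = len(values)
    active = sum(1 for v in values if v > 0)  # 存在检测值的传感器单元数
    # 条件一：存在检测值的传感器单元占比大于第一阈值
    if total and active / total > first_threshold:
        return 1
    # 条件二：占比不足，但有不少于u个传感器单元的检测值大于第二阈值（例如压力较大）
    if sum(1 for v in values if v > second_threshold) >= u:
        return 1
    return 0
```

例如，组内4个触摸传感器中有3个存在检测值时，该组状态为1；即使仅有1个压力传感器的检测值超过第二阈值，该组状态同样为1。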
在另一种可能的实施例中，本申请实施例所提供的方法也可以应用于如图5所示的车辆中。其中，车辆方向盘502设有数据采集模块（或者数据采集设备），车载设备501设有数据处理模块（或者数据处理设备）。也就是说，车辆方向盘上设置有传感器单元，可以按照上述方法预先对车辆方向盘上的全部传感器单元进行编码。车载设备可以获取车辆方向盘上的传感器单元所采集的数据，进而确定用户的终端握持姿态。需要说明的是，图5中的车辆方向盘502除了设有数据采集模块，还可以同时集成数据处理模块（或者数据处理设备），也就是说数据采集模块和数据处理模块可以分别设置在不同的设备中，也可以设置在同一设备中，本申请对此不作限定。
另外,需要说明的是,数据采集模块(或者数据采集设备)还可以是医疗检测设备或者智能穿戴设备,通过采集医疗检测设备的传感器数据并进行模式匹配,实时获得用户的健康数据,提前预知用户的健康状态变化,提供状态预警以及治疗建议。
为了准确地识别用户的终端握持姿态，本申请实施例提供了一种握持姿态检测方法。该方法中，电子设备可以获取N个采样时刻的传感器数据，并基于传感器数据，生成与每个采样时刻对应的传感器单元的状态序列。针对每个采样时刻的状态序列，电子设备将该状态序列与预设的参考状态序列集合中的参考状态序列进行匹配，确定出相似度最高的第一参考状态序列，从而将第一参考状态序列所对应的参考握持姿态作为终端的当前握持姿态，以便于终端基于该姿态提供更精细化的服务，提升用户使用体验。
实施例一
为了实现上述握持姿态检测方法,本申请实施例中,需要先构建握持姿态模式集,该握持姿态模式集包括参考状态序列和参考握持姿态之间的对应关系。即本申请实施例提供一种握持姿态模式集的构建方法,如图6所示,该方法主要包括如下步骤。
步骤601,电子设备接收与第一参考握持姿态对应的n次操作。
示例性地,手机接收与左手单手握持终端的上侧面对应的操作,或手机接收与双手握持终端的上下侧面对应的操作,n为正整数。
步骤602,电子设备的处理器110获取传感器单元n次采集的数据。
例如,处理器110获取压力传感器的压力值,或者获取触摸传感器的触摸检测值等。
步骤603,电子设备确定传感器单元对应的n个状态序列。
步骤604,电子设备将该n个状态序列中出现概率最高的状态序列作为第一参考握持姿态对应的第一参考状态序列,建立第一参考握持姿态和第一参考状态序列之间的对应关系。
换句话说，先基于统计学或机器学习算法，统计区域范围内一定数量用户的常用握持姿态，将这些常用握持姿态作为参考握持姿态，然后针对每种握持姿态，按照上述方法确定出与之对应的参考状态序列，最终生成包括参考状态序列和参考握持姿态之间的对应关系的握持姿态模式集。
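上述步骤601至步骤604中“将出现概率最高的状态序列作为参考状态序列”的统计过程，可以草绘为如下Python示例（假设状态序列以字符串形式表示）：

```python
from collections import Counter

def build_reference(sequences):
    """sequences为同一参考握持姿态下n次操作所得到的n个状态序列，
    返回其中出现概率最高的状态序列，作为该姿态对应的参考状态序列。"""
    seq, _ = Counter(sequences).most_common(1)[0]
    return seq
```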
在一种可能的实施例中,在电子设备进行出厂设置时,可以在电子设备内置该握持姿态模式集,其中,握持姿态模式集={M1,M2,…,MJ}中可以包括每种模式下的参考握持姿态和参考状态序列之间的对应关系。示例性地,针对图4A所示的电子设备,握持姿态模式集可以如表1所示。需要说明的是,该握持姿态模式集仅是示例性的说明,在其它可能的情况下,可以并不仅限于表1中的该握持姿态模式集的形式。
表1
终端握持模式标识 终端握持姿态 传感器单元的参考状态序列
M1 双手握持终端的上下侧面 00000 00000 00000 00000 11
M2 双手握持终端的左右侧面 00001 01011 01100 01010 00
M3 右手单手握持终端的上侧面
M4 右手单手握持终端的下侧面
M5 左手单手握持终端的上侧面
M6 左手单手握持终端的下侧面
其中,表1中的传感器单元的状态序列由22位数字组成,其中,22对应的是图4B中的传感器组的个数,0代表该传感器组没有被握持,1代表该传感器组被握持。例如,如图7A所示,用户双手握持终端的左右侧面的中部位置,22个传感器组所组成的状态序列为{00001 01011 01100 01010 00}。图7A中,灰色区域表示用户的手部握持时传感器单元的位置及其检测值。如图7B所示,用户双手握持终端的左右侧面的下部位置,22个传感器组所组成的状态序列为{00000 00011 01100 00011 01}。
在一种可能的实施例中,基于上述握持姿态模式集,电子设备可以根据所采集的传感器数据的具体数值,对握持姿态模式集合的每种模式进行细分,例如模式M2可以进一步包括模式M21、模式M22、模式M23等。
示例性地，以握持姿态模式集合的模式M2（双手握持终端的左右侧面）为例，电子设备可以进一步地利用传感器数据确定出接触点数量、接触面积、接触位置、检测值的大小等信息，然后电子设备从接触点数量、接触面积、接触位置、感测值的大小等方面构建模式M2的各种子模式。
如表1a所示，电子设备可以根据一个或多个传感器数据，将模式M2的子模式M21、M22、M23，分别划分为轻度握持、中度握持、重度握持下的双手左右侧面握持姿势。具体地，电子设备可以针对不同传感器类型设置相应的感测值的范围，不同范围对应不同的子模式。例如对于压力传感器而言，假设压力传感器的检测值范围为0-30，将该检测值范围划分为以下三个范围，分别是[0~5)、[5~20)、[20~30]，分别表示轻度握持、中度握持、重度握持。
表1a
Figure PCTCN2020122954-appb-000002
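以压力传感器为例，上述按检测值平均值所落入的范围划分子模式的做法可以草绘如下，其中范围取自上文0~30检测值区间的示例，子模式标识M21、M22、M23与表1a对应（具体划分均为示例性假设）：

```python
def sub_mode(avg_value):
    """根据传感器组检测值的平均值所落入的范围，返回对应的子模式标识。"""
    if 0 <= avg_value < 5:
        return "M21"  # 轻度握持
    if 5 <= avg_value < 20:
        return "M22"  # 中度握持
    return "M23"      # 重度握持
```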
需要说明的是，上述表1中的握持姿态模式集还可以划分为一个手指触控的模式、两个手指触控的模式、……、十个手指触控的模式等等，在此不再一一列举。
在一种可能的实施例中，电子设备在生成握持姿态模式集时，还可以标识每种模式对应的参考状态序列中的关键传感器组，以便于电子设备根据传感器采集的数据确定出第一状态序列之后，优先对第一状态序列中与关键传感器组标识对应的状态进行匹配，提高匹配的效率。其中，不同模式对应的关键传感器组标识是不同的。这样做，主要是考虑到大量数据统计表明，对于终端的同一用户，握持形成习惯后，部分传感器在特定握持区域范围内大概率会被使能，这些握持区域可以被设定为该终端用户的关键点。关键点可以是一个或多个，关键点的值可以为0或1，0表示特定模式下该位置必然不会被使能，1表示特定模式下该位置必然被使能。示例性地，包括关键传感器组标识的握持姿态模式集如表1b所示。
表1b
Figure PCTCN2020122954-appb-000003
Figure PCTCN2020122954-appb-000004
实施例二
基于上述握持姿态模式集,本申请实施例提供一种握持姿态检测方法,如图8所示,该方法可以在上述电子设备中实现。该方法包括如下步骤。
步骤801,电子设备的处理器110获取N个采样时刻下电子设备上的M个传感器单元的特征信息。
示例性地,用户握持电子设备,电子设备上的传感器单元实时采集信息,获取传感器单元的特征信息,该传感器单元的特征信息可以包括传感器单元采集的数据,传感器单元的标识(例如传感器单元的编码)。
假设，在预设时间段内，采集N次传感器数据d，根据传感器数据d和传感器的位置标识确定N个第一状态序列Si。其中，矩阵Di(1≤i≤N)为时刻ti(1≤i≤N)采集的传感器阵列中每个传感器的状态(status为0或1)和检测值(value)组成的矩阵，如果value>0，则与坐标(Xm,Yn)对应的状态设为1，如果value=0，则坐标(Xm,Yn)对应的状态设为0，矩阵Di的一种示例性形式如下：
Figure PCTCN2020122954-appb-000005
需要说明的是，传感器数据（也就是检测值）可以是一个或多个，包括但不限于电容值、压力值、温度值、距离值、亮度值、电阻值、加速计值、陀螺仪值、磁力值或气压值中的至少一个。其中，(Xm,Yn)为第k个传感器的坐标编号，1≤k≤M，m、n、i均为正整数。预设时间段可以是以不同的时间单位（年、月、周、时、分、秒、毫秒）进行设定的，在此不作限定。
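将采样时刻ti的检测值矩阵转换为状态矩阵（value>0置1，value=0置0）的过程，可以用如下示意性代码表示：

```python
def status_matrix(values):
    """values为某一采样时刻传感器阵列的检测值矩阵（二维列表），
    返回对应的状态矩阵：检测值大于0的位置置1，否则置0。"""
    return [[1 if v > 0 else 0 for v in row] for row in values]
```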
步骤802,电子设备的处理器110根据特征信息,确定在N个采样时刻M个传感器单元对应的N个状态序列。
其中,M、N为正整数。具体来说,处理器110可以按照上述方法对M个传感器单元进行分组,划分为L个传感器组。示例性地,结合图4B来说,处理器110将M个传感器单元划分为22个传感器组。处理器110根据传感器单元是否存在检测值,确定传感器单元的状态为0或者1,再根据每个传感器组中的所有传感器单元的状态确定该传感器组的状态。最终,处理器110将所有传感器组的状态组成状态序列。
电子设备根据每个传感器组中的所有传感器单元的状态确定该传感器组的状态的具体方式，可以是如下方式中的任意一种或者多种。
方式一，针对N个采样时刻的任意一个采样时刻，将每个传感器组中的传感器单元的特征信息与预设条件进行比较，当第一传感器组中的有检测值的传感器单元的占比大于第一阈值时，确定所述第一传感器组的状态为有效状态(例如status为1)，否则为无效状态(例如status为0)。
示例性地,若该传感器组中的传感器单元为触摸传感器,传感器组中有U个传感器单元,其中有检测值的传感器单元为V个,则当V/U大于第一阈值时,则该传感器组的状态为1;当V/U≤第一阈值时,则该传感器组的状态为0。换句话来说,当一个传感器组的大部分传感器均被触摸,即认为该传感器组被使能,此时设置该传感器组的状态为1。
方式二,针对N个采样时刻的任意一个采样时刻,将每个传感器组中的传感器单元的特征信息与预设条件进行比较,当第一传感器组中的有检测值的传感器单元的检测值大于第二阈值时,确定该传感器组的状态为有效状态(例如status为1),否则为无效状态(例如status为0)。
示例性地，若该传感器组中的传感器单元为压力传感器，传感器组中有U个传感器单元，若其中有V个传感器单元的检测值大于第二阈值，则确定该传感器组的状态为有效状态(例如status为1)，否则为无效状态(例如status为0)。换句话来说，即使只有少量传感器的感测值较高（例如压力传感器压力较大），也认为该传感器组被使能，此时设置该传感器组的状态为1。
步骤803,电子设备将N个状态序列与预设的参考状态序列集合中的参考状态序列进行匹配,确定相似度最高的第一参考状态序列。
具体来说，假设用S代表状态序列，则N个状态序列可以用状态序列集合{S1,S2,S3,S4,…,Si,…,SN-1,SN}来表示。针对该状态序列集合中的任意一个状态序列，电子设备计算该状态序列与握持姿态模式集合的每个模式对应的参考状态序列之间的相似度。假设，状态序列与握持姿态模式集{M1,M2,…,MJ}对应的参考状态序列之间的相似度用相似度集合P={P1,P2,…,PJ}表示，电子设备选择{P1,P2,…,PJ}中最大相似度对应的参考状态序列为该状态序列对应的参考状态序列。或者，电子设备选择{P1,P2,…,PJ}中最大相似度对应的模式为该状态序列对应的模式。
例如，S1与握持姿态模式集{M1,M2,…,MJ}对应的参考状态序列之间的相似度集合P={80%,90%,60%,…,88%}，如表2所示。
表2
Figure PCTCN2020122954-appb-000006
其中，表2中，最大相似度90%对应的参考状态序列为模式M2对应的{00001 01011 01100 01010 00}，所以{00001 01011 01100 01010 00}为S1对应的参考状态序列，或者说S1与模式M2的相似度最大。
依次类推，电子设备可以计算出S2,S3,S4,…,Si,…,SN-1,SN对应的其它N-1个参考状态序列，然后从N个参考状态序列中选择出现次数最多的参考状态序列作为第一参考状态序列。假设N个参考状态序列中模式M2对应的{00001 01011 01100 01010 00}出现的次数最多，则模式M2对应的{00001 01011 01100 01010 00}为该握持姿态对应的第一参考状态序列。
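上述步骤803的匹配过程可以草绘为如下Python示例。其中，相似度采用按位比较的相同位占比作为一种示意性度量，本申请实施例并未限定具体的相似度计算方式：

```python
from collections import Counter

def similarity(seq, ref):
    """按位比较两个等长状态序列，返回相同位所占比例作为相似度。"""
    same = sum(1 for a, b in zip(seq, ref) if a == b)
    return same / len(ref)

def match_grip(sequences, references):
    """sequences为N个状态序列，references为{模式标识: 参考状态序列}。
    先为每个状态序列选出相似度最大的模式，再取出现次数最多的模式作为匹配结果。"""
    picks = [max(references, key=lambda m: similarity(s, references[m]))
             for s in sequences]
    mode, _ = Counter(picks).most_common(1)[0]
    return mode
```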
步骤804,电子设备将第一参考状态序列所对应的参考握持姿态作为电子设备的握持姿态。
示例性地,假设第一参考状态序列对应的参考握持姿态为模式M2中的双手握持终端的左右侧面,则可以确定出终端的握持姿态为双手握持终端的左右侧面。
在一种可能的实施例中,在上述步骤804中,若第一参考状态序列所对应的模式可以包括多个子模式,例如,子模式如表1a所示。则针对每个状态为1的传感器组,电子设备还可以获取该传感器组中的所有传感器的检测值,分别计算每个传感器组的检测值平均值,最终得到所有传感器组的平均值。电子设备进一步判断该平均值落入哪个检测值取值范围,即可确定用户的握持姿态属于哪种子模式对应的握持姿态。
例如,表1a中,母模式M2有7个传感器组被使能,针对该7个传感器组,电子设备分别计算每个传感器组的检测值平均值,最终得到所有传感器组的平均值。电子设备判断该平均值落入哪个感测值范围,从而能够更精确地确定用户的握持姿态。结合传感器的感测值构建子模式,可以更精准的确定用户的握持姿势/动作,并根据握持姿势/动作确定对终端进行响应的控制操作。例如,用户在听音乐时,通过握持的检测值可以识别用户的情绪变化,以识别用户对于音乐的喜好程度;又如,通过用力握紧终端调取紧急呼叫界面或者发出警报等,以此保障用户人身安全等。
在一种可能的实施例中,若握持姿态模式集合的每种模式对应的参考状态序列标识有关键传感器组,则针对每个状态序列,电子设备可以优先将该状态序列与参考状态序列中关键传感器组的状态进行匹配,以提高匹配的效率。
示例性地，表2a中的S1和S2为N个状态序列中的两个状态序列，其中，模式M2的参考状态序列{00001 01011 01100 01010 00}对应的关键传感器组标识为传感器组9、传感器组10、传感器组12、传感器组13、传感器组19，电子设备可以依次比较S1与M2中的这些关键传感器组的状态之间的相似度，以及S2与M2中的这些关键传感器组的状态之间的相似度。这是因为，对于物理位置临近关键点的非关键点，由于用户手部的微小移动，其状态可能会在0和1之间频繁切换，因此优先比较关键点更为稳定。从表2a可见，S1和S2可以判定与模式M2的参考状态序列相匹配。
表2a
  状态序列
M2(预设模式) 00001 01011 01100 01010 00
S1 00000 00011 01100 00011 01
S2 00000 00011 01100 00010 01
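仅比较关键传感器组状态的匹配方式可以草绘如下。其中key_positions为关键传感器组在状态序列中的下标（从0起始，此处按传感器组9、10、12、13、19换算，属于示例性假设）：

```python
def key_match(seq, ref, key_positions):
    """仅当状态序列seq与参考序列ref在全部关键位置上的状态一致时，判定二者匹配。"""
    return all(seq[i] == ref[i] for i in key_positions)
```

按表2a的示例，S1与模式M2虽然在若干非关键位上不同，但在全部关键位上一致，因此仍可判定匹配。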
在上述步骤804中，电子设备可以进一步结合接近光传感器、陀螺仪传感器、重力传感器以及加速度传感器等的传感器数据，以及第一参考状态序列所对应的参考握持姿态，确定出终端的握持姿态。
示例性地,如图9A所示,假设电子设备可以根据陀螺仪传感器、重力传感器所采集的传感器数据,确定电子设备处于横屏状态,通过加速度传感器所采集的数据,确定电子设备处于静止状态,则电子设备进一步根据第一参考状态序列所对应的参考握持姿态,确定出电子设备处于静止且横屏状态下的双手握持姿态。
再比如,如图9B所示,电子设备可以根据陀螺仪传感器、重力传感器所采集的传感器数据,确定电子设备处于竖屏状态,通过加速度传感器所采集的数据,确定电子设备处于加速状态,则电子设备进一步根据第一参考状态序列所对应的参考握持姿态,确定出电子设备处于运动且竖屏状态下的单手握持姿态。
再比如,如图9C所示,电子设备可以根据接近光传感器确定电子设备处于锁屏状态,则电子设备进一步根据第一参考状态序列所对应的参考握持姿态,确定出电子设备处于锁屏状态下的单手握持姿态。
需要说明的是，在电子设备利用陀螺仪传感器、重力传感器所采集的传感器数据识别电子设备的横屏或竖屏状态之前，电子设备需要先将内置传感器采集的数据从手机坐标系转换到大地参考坐标系。原因是：虽然电子设备内置的多种传感器如加速度传感器、陀螺仪、磁力计、方向传感器等可以对不同的运动、方向和外部环境进行感知，但这些数据都是基于电子设备坐标系，当电子设备放置的位置或者方向发生改变时所采集到的数据会随之改变。以电子设备为手机举例来说，现实中由于手机用户使用习惯的个性化，如手机放置位置不同，是握持在手中，还是放在裤兜或手提包里，都将会直接影响到设备状态的识别结果。也就是说在实际应用中鉴于用户使用习惯的多样性和手机的摆放位置是任意的，因此需要将手机内置传感器采集的数据从手机坐标系转换到统一的参考坐标系(例如大地坐标系)中，这样转换后的传感器的数据有更清晰的物理含义，有助于准确识别电子设备的设备状态。
如图10中的a所示,大地参考坐标系的一种定义方式如下:x轴正方向正切手机当前所在位置的地面,直指东方;y轴正方向同样正切于该地面指向磁北极,x轴和z轴所在平面为水平面;z轴正方向则垂直于水平面指向天空。
如图10中的b所示,手机坐标系的确定与手机屏幕相关,手机坐标系的一种定义方式如下:X轴的正方向为手机屏幕平面中心向右所指的方向,反之为X轴的负方向;Y轴的正方向为手机屏幕平面中心向上所指的方向,垂直于X轴,反之为Y轴的负方向;而Z轴的正方向为垂直于手机屏幕平面从屏幕平面中心向正上所指的方向,反之为Z轴的负方向。
本申请实施例提供了一种将手机坐标系转换到大地参考坐标系转换公式,如公式1所示。
(x, y, z)^T = R·(X, Y, Z)^T　　（公式1）
其中,X/Y/Z为手机坐标系的传感器数据,R表示旋转矩阵,x、y、z为大地参考坐标 系的传感器数据。
其中,R由三个基本旋转矩阵复合而成,R如公式2所示。
Figure PCTCN2020122954-appb-000008
其中,变量a、p、r分别表示azimuth、pitch和roll,azimuth表示磁北极和手机坐标系Y轴的夹角;pitch表示手机坐标系X轴和水平面的夹角,roll表示手机坐标系Y轴和水平面的夹角。
也就是说基于上述坐标系转换方法,手机可以根据转换后的传感器的数据,确定手机在大地坐标系中的状态,例如是竖直竖屏状态、竖直横屏状态,或者存在一定倾斜角的竖屏或者横屏状态。具体地,本申请实施例通过转换后的陀螺仪传感器和重力传感器生成的数据,确定手机在大地坐标系中所处的位置状态,通过位置状态来表征手机的竖屏或横屏状态。
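坐标系转换的一种示意性实现如下。需要指出，旋转矩阵R的具体复合顺序与符号应以本申请的公式2为准，下述代码仅按一种常见的Z-X-Y欧拉角复合顺序给出假设性的示例：

```python
import math

def rot_z(a):
    """绕Z轴旋转角a（azimuth，弧度）的基本旋转矩阵。"""
    return [[math.cos(a), -math.sin(a), 0],
            [math.sin(a),  math.cos(a), 0],
            [0, 0, 1]]

def rot_x(p):
    """绕X轴旋转角p（pitch，弧度）的基本旋转矩阵。"""
    return [[1, 0, 0],
            [0, math.cos(p), -math.sin(p)],
            [0, math.sin(p),  math.cos(p)]]

def rot_y(r):
    """绕Y轴旋转角r（roll，弧度）的基本旋转矩阵。"""
    return [[ math.cos(r), 0, math.sin(r)],
            [0, 1, 0],
            [-math.sin(r), 0, math.cos(r)]]

def mat_mul(A, B):
    """3×3矩阵乘法。"""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def to_earth_frame(v, a, p, r):
    """将手机坐标系下的传感器数据v=(X,Y,Z)转换到大地参考坐标系。"""
    R = mat_mul(mat_mul(rot_z(a), rot_x(p)), rot_y(r))
    return [sum(R[i][k] * v[k] for k in range(3)) for i in range(3)]
```

当三个夹角均为0时，手机坐标系与大地参考坐标系重合，转换后的数据保持不变。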
基于上述实施例提供的握持姿态检测方法,当电子设备识别得到握持姿态之后,电子设备可以基于握持姿态,实现对该电子设备运行的优化。
在一种可能的实施例中,在确定出用户的握持姿态之后,电子设备可以控制界面显示、触感反馈、声音、系统配置、应用程序等,可以根据不同的握持姿态而触发相应的反馈或指令,使得用户无需再对终端进行操作,提高了电子设备的智能化,提高用户的使用体验。
在一种可能的实施例中，电子设备可以采集设定时间段内用户的传感器数据，确定在该段时间段内用户的握持姿态的变化规律（例如横竖屏切换，常用手势指令等），根据握持姿态的变化规律，对电子设备的系统进行控制。例如，预设时间段为每天的{18:00-24:00}(或更大、更小的时间粒度)，统计在该段时间段内高频的握持姿势，基于高频的握持姿势，调整终端的界面显示(亮度、情景模式等)、系统配置(功耗、内存管理等)、应用程序(自动开启、关闭或休眠等)，以适应该终端用户的需求，实现终端的智能化管理。
在一种可能的实施例中,电子设备可以在握持模式集合预设不良的握持姿态和参考状态序列之间的对应关系,当电子设备按照上述方法确定出用户的握持姿态为不良的握持姿态时,则可以触发系统报警、提示等功能。
在一种可能的实施例中，电子设备确定在第一时刻所述电子设备被用户握持时的第一握持姿态，并根据所述第一握持姿态，控制所述电子设备的显示屏显示应用的第一界面。电子设备确定在第二时刻所述电子设备被用户握持时的第二握持姿态，并根据所述第二握持姿态，控制所述电子设备的显示屏显示所述应用的第二界面；其中，第一握持姿态与第二握持姿态不同，第二界面与第一界面不同。示例性地，假设手机按照上述方法识别用户的握持姿态在距离当前时刻之前的设定时长内（例如15分钟）为静止竖屏状态下的双手握持，且手机当前运行的应用为视频播放类应用，如图11中的A所示，则手机可以根据该握持姿态识别结果，控制显示屏的显示界面切换为大屏显示，如图11中的B所示。另外，手机优先为视频类播放应用分配可用的网络资源，以避免视频播放过程中发生卡顿。
又一示例性地,假设手机当前在运行音乐类应用,假设手机检测到用户的握持姿态为紧握手机的两侧,则手机自动完成切歌,即切歌至下一首音乐。
又一示例性地，假设电子设备为车辆，车辆方向盘上设置有传感器单元，车辆的处理器可以从车辆方向盘获取数据，并按照上述方法确定出驾驶员对方向盘的握持姿态，进一步地，车辆可以从用户佩戴的手环或手机等设备获取用户的心率、血压等实时健康数据，结合上述健康数据和方向盘的握持姿态评估车辆驾驶员的情绪、压力、是否处于清醒状态等，从而对驾驶员进行提示。如图12所示，车辆的车载处理器根据从方向盘1202获取的传感器数据，确定用户处于双手握持姿态，另外车辆的车载处理器从手环1203获取的心率数据确定用户的心率偏低，因此，车载处理器通过显示屏1201显示告警信息“请尽快驶入服务区休息，禁止疲劳驾驶”，以及通过外放音箱对驾驶员进行语音告警。
在一种可能的实施方式中，在用户使用电子设备的过程中，例如每天的{18:00-24:00}，电子设备可以利用在步骤802所得到的N个状态序列，对握持模式集合的模式所对应的参考状态序列进行更新。
更新方式一:对模式集合模式所对应的参考状态序列进行更新
具体来说，在步骤803中，电子设备可以计算得到N个状态序列对应的相似度最高的第一参考状态序列，即S1、S2、S3、S4、…、Si、…、SN-1、SN所对应的第一参考状态序列，电子设备可以从N个状态序列中选择出相似度大于设定阈值（例如90%）且出现次数最多的第一状态序列，用该第一状态序列替换第一参考状态序列。
假设，如表3所示，在N个状态序列中，与模式M2对应的参考状态序列{00001 01011 01100 01010 00}相似度达到90%的S1出现p次，但与模式M2对应的参考状态序列{00001 01011 01100 01010 00}相似度达到100%的S2出现p-10次。
表3
出现的模式 状态序列 出现次数 最大相似度
S1 00000 00011 01100 00011 01 p 90%
S2 00001 01011 01100 01010 00 p-10 100%
因此，电子设备可以将表1中握持姿态模式集合的模式M2对应的参考状态序列进行更新，更新之后模式M2对应的参考状态序列为S1对应的{00000 00011 01100 00011 01}。示例性地，握持姿态模式集如表4所示。
表4
终端握持模式标识 终端握持姿态 传感器单元的参考状态序列
M1 双手握持终端的上下侧面 00000 00000 00000 00000 11
M2 双手握持终端的左右侧面 00000 00011 01100 00011 01
M3 右手单手握持终端的上侧面
M4 右手单手握持终端的下侧面
M5 左手单手握持终端的上侧面
M6 左手单手握持终端的下侧面
可见，通过上述握持模式集的更新方式，在满足相似度要求的前提下，模式序列可根据用户的操作习惯进行调整，使得用户下一次使用时能够更精确地匹配出用户的握持姿态。
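更新方式一的替换逻辑可以草绘为如下示例，其中相似度度量与阈值（0.9对应上文举例的90%）均为示意性取法：

```python
from collections import Counter

def update_reference(sequences, ref, sim_threshold=0.9):
    """从N个状态序列中选出与参考序列ref相似度达到阈值且出现次数最多的序列，
    用其替换原参考序列；若没有满足条件的序列，则保持原参考序列不变。"""
    def sim(s):
        return sum(a == b for a, b in zip(s, ref)) / len(ref)
    candidates = [s for s in sequences if sim(s) >= sim_threshold]
    if not candidates:
        return ref
    new_ref, _ = Counter(candidates).most_common(1)[0]
    return new_ref
```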
在一种可能的实施例中,当握持模式集合的模式对应的参考状态序列标识有关键传感器组标识,则电子设备可以从历史的状态序列的匹配结果中确定非关键传感器组的状态的变化规律,若这些历史数据中的非关键传感器组的状态相似度大于第三阈值(也就是说,在一定次数或一定时间段内非关键传感器组的状态基本保持不变或变化不大),则可以用这些非关键传感器组的状态替换M2的参考状态序列的状态,并更新关键传感器组标识。
示例性地，如表4a所示，通过统计发现，第1-4、20位在一定次数或一定时间段内基本保持不变或变化不大，因此，可用第1-4、20位的状态替换M2的参考状态序列中对应位置的状态，并更新关键传感器组标识，表1b中的M2更新之后，如表4b所示。
表4a
Figure PCTCN2020122954-appb-000009
表4b
Figure PCTCN2020122954-appb-000010
可见,通过该方式,可以根据同一终端用户的使用习惯,更精确地匹配模式,达到越用越准的效果。
更新方式二:增加模式集合的模式
具体来说，在步骤803中，电子设备可以计算得到N个状态序列对应的相似度最高的第一参考状态序列，即S1、S2、S3、S4、…、Si、…、SN-1、SN所对应的第一参考状态序列，电子设备可以从N个状态序列中选择出相似度小于第四阈值(例如70%)且出现次数最多的第二状态序列。然后电子设备根据该第二状态序列确定与之对应的终端握持姿态，从而确定出终端握持姿态和该第二状态序列之间的映射关系，并在握持姿态模式集新增一个模式，该新增的模式包括上述终端握持姿态和该第二状态序列之间的映射关系。
假设，如表5所示，在N个状态序列中，与模式M2对应的参考状态序列{00001 01011 01100 01010 00}相似度为68%的S3出现p次，但与模式M2对应的参考状态序列{00001 01011 01100 01010 00}相似度达到100%的S2出现p-10次。
表5
出现的模式 第一状态序列 出现次数 最大相似度
S3 00000 00011 00010 01000 00 p 68%
S2 00001 01011 01100 01010 00 p-10 100%
因此，电子设备可以进一步地根据该S3确定对应的终端握持姿态，假设S3对应的终端握持姿态为左手单手握持终端的左下侧面，则电子设备可以在表1中握持姿态模式集合新增模式7，更新之后的握持姿态模式集如表6所示。
表6
Figure PCTCN2020122954-appb-000011
可见，通过该方式，电子设备可以及时地将出现次数较多、但原本并不属于预设模式集的模式加入到握持模式集合，使得模式集能够存储更多该用户的不同握持姿态，提高下一次匹配出用户握持姿态的准确率。
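更新方式二的新增逻辑可以草绘如下，其中新模式的标识按已有模式数量顺延生成，属于示例性假设：

```python
from collections import Counter

def maybe_add_mode(sequences, references, sim_threshold=0.7):
    """从N个状态序列中找出与所有已有参考序列的最大相似度均小于阈值、
    且出现次数最多的状态序列，作为新模式加入参考状态序列集合。"""
    def max_sim(s):
        return max(sum(a == b for a, b in zip(s, r)) / len(r)
                   for r in references.values())
    low = [s for s in sequences if max_sim(s) < sim_threshold]
    if not low:
        return dict(references)
    new_seq, _ = Counter(low).most_common(1)[0]
    updated = dict(references)
    updated["M%d" % (len(updated) + 1)] = new_seq
    return updated
```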
更新方式三:删除模式集合的模式
具体来说,在用户使用电子设备的过程中,例如每天的{18:00-24:00},电子设备可以统计握持模式集合的每个模式被匹配成功的次数,将握持模式集合匹配成功次数小于第五阈值(例如10次)的模式进行删除。
假设，在设定时间段内，电子设备统计得到握持模式集合的每个模式被匹配成功的次数如表7所示，电子设备可以将匹配次数小于10次的模式M1删除，或者将排序在第J+1之后的模式删除，使得握持模式集M中始终保持J种常用握持模式。在可能的情形下，若多个握持模式并列排位第J位，则这些模式暂时都保留，直到新一轮排序再确定是否删除。
表7
握持模式 参考状态序列 匹配成功次数
M2 00001 01011 01100 01010 00 100
MJ 00000 10011 00000 00011 11 88
M1 00000 00000 00000 00000 11 0
需要说明的是，模式的删除或者模式的增加可以是实时执行的，也可以周期性地执行，定期对使用频率较低的模式进行删除，有利于释放存储空间。同时，按照上述方法通过排序获得匹配成功次数较高的模式后，在进行步骤803的匹配时，可以优先对匹配成功次数高的模式进行匹配，一定程度上可提高匹配效率。
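更新方式三的删除逻辑可以草绘如下，其中匹配成功次数阈值10为上文举例的第五阈值：

```python
def prune_modes(match_counts, min_count=10):
    """match_counts为{模式标识: 匹配成功次数}，
    删除匹配成功次数小于阈值的模式，返回保留下来的模式。"""
    return {m: c for m, c in match_counts.items() if c >= min_count}
```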
在本申请的另一些实施例中,本申请实施例公开了一种电子设备,如图13所示,该电子设备可以包括:触摸屏1301,其中,该触摸屏1301包括触控面板1306和显示屏1307;一个或多个处理器1302;存储器1303;一个或多个应用程序(未示出);以及一个或多个计算机程序1304。上述各器件可以通过一个或多个通信总线1305连接。其中该一个或多个计算机程序1304被存储在上述存储器1303中并被配置为被该一个或多个处理器1302执行,该一个或多个计算机程序1304包括指令,上述指令可以用于执行如图6、图8相应实施例中的各个步骤。
本申请实施例还提供一种计算机可读存储介质,该计算机可读存储介质中存储有计算机指令,当该计算机指令在电子设备上运行时,使得电子设备执行上述相关方法步骤实现上述实施例中的方法。
本申请实施例还提供了一种计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述相关步骤,以实现上述实施例中的方法。
另外,本申请的实施例还提供一种装置,这个装置具体可以是芯片,组件或模块,该装置可包括相连的处理器和存储器;其中,存储器用于存储计算机执行指令,当装置运行时,处理器可执行存储器存储的计算机执行指令,以使芯片执行上述各方法实施例中的方法。
其中,本申请实施例提供的电子设备、计算机存储介质、计算机程序产品或芯片均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
通过以上实施方式的描述,所属领域的技术人员可以了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中，应该理解到，所揭露的装置和方法，可以通过其他的方式实现。例如，以上所描述的装置实施例仅仅是示意性的，例如，模块或单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个单元或组件可以结合或者可以集成到另一个装置，或一些特征可以丢弃，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口，装置或单元的间接耦合或通信连接，可以是电性，机械或其他的形式。
作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以权利要求的保护范围为准。

Claims (20)

  1. 一种握持姿态检测方法,应用于电子设备,其特征在于,包括:
    所述电子设备获取在N个采样时刻下所述电子设备上的M个传感器单元的特征信息,M和N为正整数;
    所述电子设备根据所述特征信息,确定在所述N个采样时刻下所述M个传感器单元对应的N个状态序列;
    所述电子设备将所述N个状态序列与预设的参考状态序列集合中的K个参考状态序列进行匹配,从所述K个参考状态序列中确定相似度最大的第一参考状态序列,K为正整数;
    所述电子设备将所述第一参考状态序列所对应的参考握持姿态作为所述电子设备的握持姿态。
  2. 根据权利要求1所述的方法,其特征在于,所述电子设备上的M个传感器单元被划分成L个传感器组,所述特征信息包括所述M个传感器单元的传感器数据和传感器单元的标识,其中L为正整数;
    所述电子设备所述根据所述特征信息,确定在所述N个采样时刻下所述M个传感器单元对应的N个状态序列,包括:
    所述电子设备根据所述特征信息中的传感器单元的标识,确定每个传感器组中的传感器单元的传感器数据;
    针对所述N个采样时刻的任意一个采样时刻:所述电子设备将所述L个传感器组中的传感器单元的传感器数据与预设阈值进行比较,根据比较结果确定所述L个传感器组的状态;生成在所述采样时刻M个传感器单元对应的状态序列,所述状态序列包括所述L个传感器组的状态。
  3. 根据权利要求2所述的方法,其特征在于,所述电子设备将所述L个传感器组中的传感器单元的传感器数据与预设阈值进行比较,根据比较结果确定所述L个传感器组的状态,包括:
    针对所述L个传感器组中的第一传感器组,所述第一传感器组为所述L个传感器组中的任意一个:
    当所述第一传感器组中有检测值的传感器单元的占比大于第一阈值时,所述电子设备确定所述第一传感器组的状态为有效状态,否则为无效状态,所述占比为有检测值的传感器单元的总数U与所述第一传感器组的传感器单元总数V之间的比值;
    和/或,当所述第一传感器组中的传感器单元的检测值大于第二阈值时,所述电子设备确定所述第一传感器组的状态为有效状态,否则为无效状态。
  4. 根据权利要求1至3任一项所述的方法,其特征在于,当所述N大于1时,所述电子设备将所述N个状态序列与预设的参考状态序列集合中的K个参考状态序列进行匹配,从所述K个参考状态序列中确定相似度最大的第一参考状态序列,包括:
    针对所述N个状态序列中任意一个状态序列:所述电子设备计算所述状态序列与K个参考状态序列之间的K个相似度,并从K个相似度中确定相似度最大的一个参考状态序列;
    所述电子设备从N个状态序列对应的N个参考状态序列中,确定出现次数最多的参考状态序列作为所述第一参考状态序列。
  5. 根据权利要求4所述的方法，其特征在于，所述电子设备计算所述N个状态序列与K个参考状态序列之间的相似度，包括：
    所述电子设备计算所述N个状态序列中关键传感器组对应的状态与K个参考状态序列中关键传感器组对应的状态之间的相似度,所述关键传感器组为所述L个传感器组中的传感器组。
  6. 根据权利要求1至5任一项所述的方法,其特征在于,所述电子设备将所述第一参考状态序列所对应的参考握持姿态作为所述电子设备的握持姿态之后,还包括:
    所述电子设备从所述N个状态序列中,确定与所述第一参考状态序列之间相似度大于第三阈值且出现次数最多的第一状态序列;
    所述电子设备将所述预设的参考状态序列集合中的所述第一参考状态序列替换为所述第一状态序列。
  7. 根据权利要求1至5任一项所述的方法,其特征在于,所述电子设备将所述第一参考状态序列所对应的参考握持姿态作为所述电子设备的握持姿态之后,还包括:
    所述电子设备从所述N个状态序列中,确定与所述第一参考状态序列之间相似度小于第四阈值且出现次数最多的第二状态序列;
    所述电子设备根据所述第二状态序列,确定与所述第二状态序列对应的第一握持姿态;
    所述电子设备在所述预设的参考状态序列集合增加与所述第一握持姿态对应的所述第二状态序列。
  8. 根据权利要求1至5任一项所述的方法,其特征在于,所述电子设备将所述第一参考状态序列所对应的参考握持姿态作为所述电子设备的握持姿态之后,还包括:
    所述电子设备计算在设定时段内所述参考状态序列集合中各个参考状态序列对应的参考握持姿态出现的概率;
    所述电子设备根据所述概率,从所述预设的参考状态序列集合中将概率小于第五阈值的参考握持姿态对应的参考状态序列删除。
  9. 一种电子设备,其特征在于,所述电子设备包括M个传感器单元、处理器和存储器;
    所述存储器存储有程序指令;
    所述处理器用于运行所述存储器存储的所述程序指令,使得所述电子设备执行:
    获取在N个采样时刻下所述电子设备上的M个传感器单元的特征信息,M和N为正整数;
    根据所述特征信息,确定在所述N个采样时刻下所述M个传感器单元对应的N个状态序列;
    将所述N个状态序列与预设的参考状态序列集合中的K个参考状态序列进行匹配,从所述K个参考状态序列中确定相似度最大的第一参考状态序列,K为正整数;
    将所述第一参考状态序列所对应的参考握持姿态作为所述电子设备的握持姿态。
  10. 根据权利要求9所述的电子设备,其特征在于,所述电子设备上的M个传感器单元被划分成L个传感器组,所述特征信息包括所述M个传感器单元的传感器数据和传感器单元的标识,其中L为正整数;
    所述处理器用于运行所述存储器存储的所述程序指令,使得所述电子设备具体执行:
    根据所述特征信息中的传感器单元的标识,确定每个传感器组中的传感器单元的传感器数据;
    针对所述N个采样时刻的任意一个采样时刻:将所述L个传感器组中的传感器单元的传感器数据与预设阈值进行比较,根据比较结果确定所述L个传感器组的状态;生成在所述采样时刻M个传感器单元对应的状态序列,所述状态序列包括L个传感器组的状态。
  11. 根据权利要求10所述的电子设备,其特征在于,所述处理器用于运行所述存储器存储的所述程序指令,使得所述电子设备具体执行:
    针对所述L个传感器组中的第一传感器组,所述第一传感器组为所述L个传感器组中的任意一个:
    当所述第一传感器组中有检测值的传感器单元的占比大于第一阈值时,确定所述第一传感器组的状态为有效状态,否则为无效状态,所述占比为有检测值的传感器单元的总数U与所述第一传感器组的传感器单元总数V之间的比值;
    和/或,当所述第一传感器组中的传感器单元的检测值大于第二阈值时,确定所述第一传感器组的状态为有效状态,否则为无效状态。
  12. 根据权利要求9至11任一项所述的电子设备,其特征在于,当所述N大于1时,所述处理器用于运行所述存储器存储的所述程序指令,使得所述电子设备具体执行:
    针对所述N个状态序列中任意一个状态序列:计算所述状态序列与K个参考状态序列之间的K个相似度,并从K个相似度中确定相似度最大的一个参考状态序列;
    从N个状态序列对应的N个参考状态序列中,确定出现次数最多的参考状态序列作为所述第一参考状态序列。
  13. 根据权利要求12所述的电子设备,其特征在于,所述处理器用于运行所述存储器存储的所述程序指令,使得所述电子设备具体执行:
    计算所述N个状态序列中关键传感器组对应的状态与K个参考状态序列中关键传感器组对应的状态之间的相似度,所述关键传感器组为所述L个传感器组中的传感器组。
  14. 根据权利要求9至13任一项所述的电子设备,其特征在于,所述处理器用于运行所述存储器存储的所述程序指令,使得所述电子设备将所述第一参考状态序列所对应的参考握持姿态作为所述电子设备的握持姿态之后,还执行:
    从所述N个状态序列中,确定与所述第一参考状态序列之间相似度大于第三阈值且出现次数最多的第一状态序列;
    将所述预设的参考状态序列集合中的所述第一参考状态序列替换为所述第一状态序列。
  15. 根据权利要求9至13任一项所述的电子设备,其特征在于,所述处理器用于运行所述存储器存储的所述程序指令,使得所述电子设备还执行:
    从所述N个状态序列中,确定与所述第一参考状态序列之间相似度小于第四阈值且出现次数最多的第二状态序列;
    根据所述第二状态序列,确定与所述第二状态序列对应的第一握持姿态;
    在所述预设的参考状态序列集合增加与所述第一握持姿态对应的所述第二状态序列。
  16. 根据权利要求9至13任一项所述的电子设备,其特征在于,所述处理器用于运行所述存储器存储的所述程序指令,使得所述电子设备还执行:
    计算在设定时段内所述参考状态序列集合中各个参考状态序列对应的参考握持姿态出现的概率;
    根据所述概率，从所述预设的参考状态序列集合中将概率小于第五阈值的参考握持姿态对应的参考状态序列删除。
  17. 一种显示方法,应用于设有传感器单元的电子设备,其特征在于,所述方法包括:
    所述电子设备确定在第一时刻所述电子设备被用户握持时的第一握持姿态,并根据所述第一握持姿态,控制所述电子设备的显示屏显示应用的第一界面;
    所述电子设备确定在第二时刻所述电子设备被用户握持时的第二握持姿态,并根据所述第二握持姿态,控制将所述电子设备的显示屏显示所述应用的第二界面;其中,所述第一握持姿态与所述第二握持姿态不同,所述第二界面与所述第一界面不同。
  18. 根据权利要求17所述的方法,其特征在于,所述电子设备确定在第一时刻所述电子设备被用户握持时的第一握持姿态,包括:
    所述电子设备获取在所述第一时刻之前N个采样时刻下所述电子设备上的M个传感器单元的第一特征信息,M和N为正整数;
    所述电子设备根据所述第一特征信息,确定在所述N个采样时刻下所述M个传感器单元对应的N个状态序列;
    所述电子设备将所述N个状态序列与预设的参考状态序列集合中的K个参考状态序列进行匹配,从所述K个参考状态序列中确定相似度最大的第一参考状态序列,K为正整数;
    所述电子设备将所述第一参考状态序列所对应的参考握持姿态作为所述电子设备的第一握持姿态;
    所述电子设备确定在第二时刻所述电子设备被用户握持时的第二握持姿态,包括:
    所述电子设备获取在所述第一时刻之后,所述第二时刻之前的N个采样时刻下所述电子设备上的M个传感器单元的第二特征信息;
    所述电子设备根据所述第二特征信息,确定在所述N个采样时刻下所述M个传感器单元对应的N个状态序列;
    所述电子设备将所述N个状态序列与预设的参考状态序列集合中的K个参考状态序列进行匹配,从所述K个参考状态序列中确定相似度最大的第二参考状态序列,K为正整数;
    所述电子设备将所述第二参考状态序列所对应的参考握持姿态作为所述电子设备的第二握持姿态。
  19. 根据权利要求17或18所述的方法,其特征在于,所述传感器单元包括触摸传感器、压力传感器、陀螺仪传感器、重力传感器中的至少一种;
    所述特征信息包括所述传感器数据和所述传感器单元的标识。
  20. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质包括程序指令,当所述程序指令在电子设备上运行时,使得所述电子设备执行如权利要求1至8任一项所述的方法,或17至19任一项所述的方法。
PCT/CN2020/122954 2020-01-31 2020-10-22 一种握持姿态检测方法及电子设备 WO2021151320A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010085464.2 2020-01-31
CN202010085464.2A CN113206913B (zh) 2020-01-31 2020-01-31 一种握持姿态检测方法及电子设备

Publications (1)

Publication Number Publication Date
WO2021151320A1 true WO2021151320A1 (zh) 2021-08-05

Family

ID=77024949

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/122954 WO2021151320A1 (zh) 2020-01-31 2020-10-22 一种握持姿态检测方法及电子设备

Country Status (2)

Country Link
CN (1) CN113206913B (zh)
WO (1) WO2021151320A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113815707B (zh) * 2021-09-27 2023-04-07 同济大学 一种驾驶员方向盘握持姿态监测方法及系统
CN114038443B (zh) * 2021-11-23 2023-02-14 杭州逗酷软件科技有限公司 亮度调节方法及相关装置
WO2024020899A1 (zh) * 2022-07-27 2024-02-01 北京小米移动软件有限公司 握持姿态的识别方法、装置、设备、存储介质及芯片

Citations (7)

Publication number Priority date Publication date Assignee Title
CN104731514A (zh) * 2015-04-09 2015-06-24 努比亚技术有限公司 触摸操作区域单握触摸操作的识别方法及装置
CN104793824A (zh) * 2015-04-23 2015-07-22 惠州Tcl移动通信有限公司 一种移动终端的唤醒和解锁方法及移动终端
US9268407B1 (en) * 2012-10-10 2016-02-23 Amazon Technologies, Inc. Interface elements for managing gesture control
CN105630158A (zh) * 2015-12-16 2016-06-01 广东欧珀移动通信有限公司 传感器数据处理方法、装置及终端设备
CN108259670A (zh) * 2018-01-22 2018-07-06 广东欧珀移动通信有限公司 电子装置、跌落处理方法及相关产品
CN109561210A (zh) * 2018-11-26 2019-04-02 努比亚技术有限公司 一种交互调控方法、设备及计算机可读存储介质
CN110007816A (zh) * 2019-02-26 2019-07-12 努比亚技术有限公司 一种显示区域确定方法、终端及计算机可读存储介质

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
CN101556197A (zh) * 2009-04-16 2009-10-14 舒少龙 用于确定车辆座椅占用状况的传感器
KR101644370B1 (ko) * 2014-10-23 2016-08-01 현대모비스 주식회사 물체 검출 장치 및 그 동작 방법
CN107562353A (zh) * 2017-07-17 2018-01-09 努比亚技术有限公司 一种显示界面控制方法、终端及计算机可读存储介质

Patent Citations (8)

Publication number Priority date Publication date Assignee Title
US9268407B1 (en) * 2012-10-10 2016-02-23 Amazon Technologies, Inc. Interface elements for managing gesture control
US20160252968A1 (en) * 2012-10-10 2016-09-01 Amazon Technologies, Inc. Interface elements for managing gesture control
CN104731514A (zh) * 2015-04-09 2015-06-24 努比亚技术有限公司 触摸操作区域单握触摸操作的识别方法及装置
CN104793824A (zh) * 2015-04-23 2015-07-22 惠州Tcl移动通信有限公司 一种移动终端的唤醒和解锁方法及移动终端
CN105630158A (zh) * 2015-12-16 2016-06-01 广东欧珀移动通信有限公司 传感器数据处理方法、装置及终端设备
CN108259670A (zh) * 2018-01-22 2018-07-06 广东欧珀移动通信有限公司 电子装置、跌落处理方法及相关产品
CN109561210A (zh) * 2018-11-26 2019-04-02 努比亚技术有限公司 一种交互调控方法、设备及计算机可读存储介质
CN110007816A (zh) * 2019-02-26 2019-07-12 努比亚技术有限公司 一种显示区域确定方法、终端及计算机可读存储介质

Also Published As

Publication number Publication date
CN113206913B (zh) 2022-05-10
CN113206913A (zh) 2021-08-03

Similar Documents

Publication Publication Date Title
WO2020181988A1 (zh) 一种语音控制方法及电子设备
WO2021151320A1 (zh) 一种握持姿态检测方法及电子设备
WO2021164313A1 (zh) 界面布局方法、装置及系统
WO2020155876A1 (zh) 控制屏幕显示的方法及电子设备
WO2021052016A1 (zh) 一种人体姿态检测方法及电子设备
CN109313519A (zh) 包括力传感器的电子设备
CN111258700B (zh) 图标管理方法及智能终端
WO2021000943A1 (zh) 一种指纹开关的管理方法及装置
WO2021037223A1 (zh) 一种触控方法与电子设备
WO2022100221A1 (zh) 检索处理方法、装置及存储介质
WO2023124729A1 (zh) 查询数据的方法、装置、设备及存储介质
CN113742366B (zh) 数据处理方法、装置、计算机设备及存储介质
WO2021213084A1 (zh) 应用通知管理方法和电子设备
JP2024503629A (ja) ウィジェット表示方法及び電子デバイス
WO2022194190A1 (zh) 调整触摸手势的识别参数的数值范围的方法和装置
CN111524528B (zh) 防录音检测的语音唤醒方法及装置
WO2024093103A9 (zh) 笔迹处理方法、终端设备及芯片系统
WO2023045806A1 (zh) 触控屏中的位置信息计算方法和电子设备
CN111381996A (zh) 内存异常处理方法及装置
WO2022228138A1 (zh) 一种处理服务卡片的方法和电子设备
WO2022228043A1 (zh) 显示方法、电子设备、存储介质和程序产品
WO2022117002A1 (zh) 电子设备间的无线充电方法、存储介质及其电子设备
CN115390738A (zh) 卷轴屏开合方法及相关产品
CN111510553B (zh) 一种运动轨迹展示方法、装置和可读存储介质
WO2023124129A1 (zh) 显示二维码的方法和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20917120

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20917120

Country of ref document: EP

Kind code of ref document: A1