WO2022095915A1 - Electronic device, method and storage medium - Google Patents

Electronic device, method and storage medium

Info

Publication number
WO2022095915A1
Authority
WO
WIPO (PCT)
Prior art keywords
user
handheld device
camera
distance data
processing circuit
Prior art date
Application number
PCT/CN2021/128556
Other languages
French (fr)
Chinese (zh)
Inventor
Yan Dongsheng (闫冬生)
Original Assignee
Sony Semiconductor Solutions Corporation
Yan Dongsheng (闫冬生)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation and Yan Dongsheng
Priority to CN202180073970.2A (published as CN116391163A)
Publication of WO2022095915A1

Links

Images

Classifications

    • H04M1/0202 Portable telephone sets, e.g. cordless phones, mobile phones or bar type handsets
    • H04M1/026 Details of the structure or mounting of specific components
    • H04M1/0264 Details of the structure or mounting of specific components for a camera module assembly
    • H04M1/72454 User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to context-related or environment-related conditions
    • H04M1/725 Cordless telephones
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/04812 Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
    • G06F3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/0488 Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06T7/50 Image analysis: depth or shape recovery
    • G06T7/90 Image analysis: determination of colour characteristics
    • G06V40/10 Recognition of human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands

Definitions

  • The present disclosure generally relates to handheld user devices such as smartphones and, more particularly, to a method of determining the hand with which a user is operating such a handheld device.
  • handheld user devices such as smartphones have become increasingly popular.
  • Such handheld devices are usually equipped with a touch screen which, in addition to visual presentation, provides interaction with the user by means of a UI containing virtual operating components (such as virtual keys) displayed on the screen, thereby reducing reliance on physical keys.
  • However, the screen sizes of handheld devices keep growing, which in effect makes it harder for users to operate a device while holding it with one hand.
  • Although the UI design can be optimized so that the user's hand can easily complete every operation on the screen, this requires determining whether the user is currently operating the device with the left hand or the right hand, so that the UI can be presented in a form suited to that hand.
  • Patent Document 1 (CN105468269A) discloses using proximity sensors 201-209 and 201'-209' mounted on a mobile phone to detect which hand of the user is holding the phone, as shown in FIG. 1A.
  • Patent Document 2 (JP2012023554A) discloses using a sensor such as a gyroscope or an accelerometer to detect and judge which hand of the user is holding the mobile phone, as shown in FIG. 1B.
  • However, gyroscopes and accelerometers are affected by the motion state and by gravity (the vertical direction), so this judgment method performs poorly when the user is walking or lying down, limiting its application scenarios.
  • Patent Document 3 (CN108958603A) discloses using an ordinary RGB camera to capture the user's face and judging the operating hand from the differing visible areas of the left and right halves of the face. However, RGB sensors are limited by lighting conditions and the accuracy is low, because the area ratio is ambiguous under different head poses.
  • According to one aspect, an electronic device for a handheld device includes processing circuitry configured to: obtain distance data associated with one or more physical features of a user of the handheld device; determine the direction of the handheld device relative to the user based on the distance data; judge, based on the determined direction, the hand with which the user is currently operating the handheld device; and present, according to the judgment result, a user interface (UI) suited to operation by the user's hand.
  • According to another aspect, a handheld device includes: a depth camera for obtaining a depth image of a user of the handheld device; and a processing circuit configured to: obtain, based on the depth image, distance data associated with one or more physical features of the user; determine the direction of the handheld device relative to the user based on the distance data; judge, based on the determined direction, the hand with which the user is currently operating the handheld device; and present, according to the judgment result, a user interface (UI) suited to operation by the user's hand.
  • According to another aspect, a method for a handheld device includes: obtaining distance data associated with one or more physical features of a user of the handheld device; determining, based on the distance data, the direction of the handheld device relative to the user; judging, based on the determined direction, the hand with which the user is currently operating the handheld device; and presenting, according to the judgment result, a user interface (UI) suited to operation by the user's hand.
  • a non-transitory computer-readable storage medium storing executable instructions that, when executed, implement the above-described method.
  • FIGS. 1A-1B show schematic diagrams of prior-art ways of judging the operating hand;
  • FIG. 2 shows the hardware configuration of a smartphone as an example of a handheld device;
  • FIG. 3A shows a block diagram of an electronic device according to the present disclosure;
  • FIG. 3B shows a flowchart of a method according to the present disclosure;
  • FIGS. 4A-4B illustrate an example of determining the orientation of a handheld device relative to a user;
  • FIGS. 5A-5B illustrate another example of determining the orientation of a handheld device relative to a user;
  • FIGS. 6A-6B illustrate another example of determining the orientation of a handheld device relative to a user.
  • FIG. 2 is a block diagram showing a hardware configuration of a smartphone 1600 as an example of the handheld device of the present disclosure. It should be understood that although the present disclosure is described taking a smartphone as an example, the handheld devices to which the technical content of the present disclosure can be applied are not limited to smartphones, and may be implemented as various types of mobile terminals, such as tablet computers, personal computers (PCs), palmtop computers, smart assistants, portable game terminals, portable digital cameras, and the like.
  • Smartphone 1600 includes a processor 1601, memory 1602, storage device 1603, external connection interface 1604, RGB camera 1605, depth camera 1606, sensors 1607, microphone 1608, input device 1609, display device 1610, speaker 1611, wireless communication subsystem 1612, bus 1617, battery 1618, and the like; these components are connected to each other through the bus 1617.
  • the battery 1618 provides power to the various components of the smartphone 1600 via a feeder line (not shown in FIG. 2).
  • The processor 1601 may be implemented as, for example, a CPU or a system on a chip (SoC), and generally controls the functions of the smartphone 1600.
  • The processor 1601 may include various implementations of digital circuitry, analog circuitry, or mixed-signal (combined analog and digital) circuitry that perform functions in a computing system, such as circuits like integrated circuits (ICs) and application-specific integrated circuits (ASICs), portions or circuits of individual processor cores, entire processor cores, individual processors, programmable hardware devices such as field-programmable gate arrays (FPGAs), and/or systems including multiple processors.
  • The memory 1602 is used to store data and programs executed by the processor 1601, and may be, for example, volatile and/or non-volatile memory, including but not limited to random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), read-only memory (ROM), flash memory, and the like.
  • The storage device 1603 may include storage media such as semiconductor memories and hard disks in addition to the memory 1602.
  • The external connection interface 1604 is an interface for connecting external devices, such as memory cards and Universal Serial Bus (USB) devices, to the smartphone 1600.
  • Input device 1609 includes, for example, a keyboard, keypad, keys, or switches for receiving operations or information input from a user.
  • The display device 1610 includes a screen, such as a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display, and displays output images of the smartphone 1600, such as a UI including virtual operating components.
  • the display device 1610 may be implemented as a touch screen that includes a touch sensor configured to detect a touch on the screen of the display device 1610, whereby the touch screen acts as both a display device and an input device.
  • the microphone 1608 converts the sound input to the smartphone 1600 into an audio signal. Using voice recognition technology, instructions from the user can be input into the smartphone 1600 in the form of speech through the microphone 1608, whereby the microphone 1608 can also act as an input device.
  • the speaker 1611 converts the audio signal output from the smartphone 1600 into sound.
  • the wireless communication subsystem 1612 is used to perform wireless communication.
  • the wireless communication subsystem 1612 may support any cellular communication scheme (such as 4G LTE or 5G NR, etc.), or other types of wireless communication schemes, such as short-range wireless communication schemes, near field communication schemes, and wireless local area network (LAN) schemes.
  • Wireless communication subsystem 1612 may include, for example, a BB processor and RF circuitry, antenna switches, antennas, and the like.
  • the smartphone 1600 also includes an RGB camera 1605 for capturing images of the outside world.
  • the RGB camera 1605 includes optical components and image sensors such as charge coupled device (CCD) sensors and complementary metal oxide semiconductor (CMOS) sensors. The light passing through the optics is captured and photoelectrically converted by an image sensor, and processed by, for example, processor 1601 to generate an image.
  • Smartphone 1600 may include more than one RGB camera 1605, such as multiple cameras on the front or rear.
  • The RGB camera 1605 may be composed of a wide-angle camera module, an ultra-wide-angle camera module, a telephoto camera module, and the like, to provide excellent imaging.
  • the smartphone 1600 may also include a depth camera 1606 for obtaining depth information.
  • Depth cameras are sometimes referred to as 3D cameras; as the name suggests, they detect the distance to objects in the captured scene, and they are increasingly used in applications such as object recognition, behavior recognition, and scene modeling. Compared with a traditional camera, a depth camera adds a depth-measurement function, making it more convenient and accurate to perceive the surrounding environment and its changes.
  • Depending on the technology utilized, the depth camera 1606 may include, but is not limited to, a Time of Flight (TOF) camera, a structured light camera, a binocular stereo vision camera, and the like. These depth cameras are briefly introduced below.
  • TOF cameras use active detection methods to obtain distances by measuring the time of flight of light.
  • Specifically, a light transmitter (such as an LED or laser diode) continuously emits modulated light pulses, generally invisible infrared light, toward the target; after the light pulses are reflected by an object, a light receiver (for example, a specially designed CMOS sensor) receives the reflected light. Because of the speed of light, directly measuring the time of flight is impractical, so it is generally realized by detecting the phase shift of light waves modulated in a certain way.
  • TOF technology can generally be divided into two types according to the modulation method: pulsed modulation and continuous-wave modulation.
  • Because the speed of light and the wavelength of the modulated light are known, the arithmetic unit can quickly and accurately calculate the depth distance to the object from the phase shift between the emitted and received pulsed light.
  • A TOF camera obtains the depth information of the entire image at once, i.e., two-dimensional depth point cloud information: the value of each point on the image represents the distance between the camera and the object (the depth value), rather than a light-intensity value as in an RGB camera.
  • The main advantages of TOF cameras are: 1) long detection distance, up to several tens of meters given sufficient laser energy; and 2) relatively little interference from ambient light.
  • However, TOF technology also has some obvious problems: it places high demands on the equipment, especially the time-measurement module; it consumes substantial computational resources, since detecting the phase shift requires multiple samplings and integrations; and, constrained by resource consumption and filtering, neither the frame rate nor the resolution can be made very high.
  • Structured light cameras also use active detection methods.
  • the principle is to project light with certain structural features (such as encoded images or pseudo-random speckle spots) onto the object to be photographed through a light emitter (such as a near-infrared laser).
  • the emitted structured light pattern is then captured by a special light receiver.
  • Because different depth regions of the photographed object deform this structured light differently, the receiver captures different image phase information, and an arithmetic unit converts the change in the structure into depth information according to the principle of triangulation.
  • Compared with TOF cameras, structured light cameras require less computation, consume less power, and are more accurate at close range, so they have great advantages in face recognition and gesture recognition; however, they are easily disturbed by ambient light, perform poorly outdoors, and their accuracy deteriorates as the detection distance increases.
  • The binocular stereo vision camera adopts a passive detection method. Binocular stereo vision, an important form of machine vision, obtains 3D geometric information about an object by using imaging devices (such as traditional RGB cameras) to capture two images of the object from different positions and calculating the positional deviation between corresponding image points based on the principle of parallax, as sketched below. Binocular stereo vision cameras have low hardware requirements (ordinary RGB cameras suffice) and are suitable for indoor and outdoor use as long as the light is suitable. However, their shortcomings are also obvious: they are very sensitive to ambient lighting, unsuitable for monotonous scenes lacking texture, computationally complex, and the baseline limits the measurement range.
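  • A minimal sketch of the parallax principle just described, computing depth from the disparity between corresponding points in a rectified stereo pair; the focal length, baseline, and disparity values are illustrative assumptions, not taken from the disclosure:

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Binocular depth from parallax: z = f * B / d, where f is the focal
    length in pixels, B is the baseline between the two cameras in meters,
    and d is the positional deviation (disparity) of corresponding points."""
    if disparity_px <= 0:
        raise ValueError("corresponding points must have positive disparity")
    return focal_px * baseline_m / disparity_px

# Example: f = 700 px, B = 0.05 m, disparity = 35 px  ->  z = 1.0 m
print(stereo_depth(700.0, 0.05, 35.0))
```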
  • Smartphone 1600 may also include other sensors 1607, such as light sensors, gravity sensors, proximity sensors, fingerprint sensors, and the like.
  • For example, the smartphone 1600 is usually equipped with a gyro sensor, an acceleration sensor, and the like, in order to detect its motion state or attitude.
  • the handheld device can adaptively present the UI according to which hand the user is operating the device with.
  • An electronic device for adaptive UI presentation and a method for performing the same according to the present disclosure will be described below in conjunction with FIGS. 3A and 3B .
  • FIG. 3A shows a block diagram of electronic device 1000 .
  • Electronic device 1000 includes processing circuit 1001 and potentially other circuits.
  • the processing circuit 1001 may, for example, be implemented as a processor such as the processor 1601 described above.
  • the processing circuit 1001 includes an acquisition unit 1002, a determination unit 1003, and a presentation unit 1004, and may be configured to perform the method shown in FIG. 3B.
  • The acquisition unit 1002 of the processing circuit 1001 is configured to acquire distance data associated with one or more physical features of the user of the handheld device (i.e., to perform step S1001 in FIG. 3B).
  • the physical features of the user may include various facial features or other physical features.
  • the physical features of the user include features such as forehead, nose tip, mouth, chin, and the like.
  • the user's physical features include a pair of left-right symmetrical features, such as eyes, shoulders, and the like.
  • Which body features the acquisition unit 1002 acquires distance data for may depend on the specific determination method, as described in detail below.
  • the distance data associated with the physical features of the user may be obtained from depth images of the user captured by the depth camera.
  • The depth image takes the distance (depth) from the light receiver of the depth camera (e.g., a TOF sensor) to the user's body as its pixel values; that is, the depth image contains the distance information of the captured body features.
  • A depth image can be represented as a two-dimensional point cloud: when the two directions of the depth camera's sensor plane are taken as the X and Y directions (e.g., X for the horizontal direction and Y for the vertical direction) and the normal of the sensor plane is taken as the Z direction, each pixel of the depth image can be expressed as (x_i, y_i, z_i), where x_i, y_i, and z_i are the distances in the three directions.
  • The processing circuit 1001 may perform image recognition processing on the depth image obtained by the depth camera to recognize the required body features. Many such image recognition processes exist in the prior art, such as various classification methods, artificial intelligence, and so on, and they are not detailed here. The acquisition unit 1002 may then take the corresponding pixel values from the depth image as the distance data associated with the body features, as sketched below.
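  • A minimal sketch of this lookup, assuming a pinhole model in which the depth image stores the Z distance per pixel and the X and Y components are recovered from camera intrinsics; the function name and intrinsics are hypothetical, and a real device would take them from its camera driver:

```python
import numpy as np

def feature_distance_data(depth_image: np.ndarray, pixel,
                          fx: float, fy: float, cx: float, cy: float):
    """Turn a recognized body feature's pixel location into distance data
    (x, y, z) in the depth camera's coordinate frame.

    depth_image: H x W array of per-pixel depths z (sensor-normal direction).
    pixel:       (u, v) column/row of the recognized feature.
    fx, fy, cx, cy: pinhole intrinsics of the depth camera (assumed known).
    """
    u, v = pixel
    z = float(depth_image[v, u])   # depth along the sensor normal (Z)
    x = (u - cx) * z / fx          # horizontal offset (X direction)
    y = (v - cy) * z / fy          # vertical offset (Y direction)
    return x, y, z
```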
  • the handheld device may utilize an RGB camera and a depth camera to simultaneously obtain an image of the user, and the RGB image obtained by the RGB camera and the depth image obtained by the depth camera may be aligned with each other.
  • Where an RGB camera and a depth camera are integrated, such a camera can obtain both the user's RGB values and the distances as pixel values.
  • the processing circuit 1001 can perform image recognition processing on the RGB image to recognize the required body features. Then, based on the correspondence between the pixel points of the depth image and the RGB image, the obtaining unit 1002 may obtain the corresponding pixel value from the depth image as the distance data associated with the body feature.
  • The determination unit 1003 may be configured to judge the hand with which the user is currently operating the handheld device based on the distance data acquired by the acquisition unit 1002 (i.e., to perform step S1002 in FIG. 3B). Specifically, based on distance data associated with one or more physical features of the user, the determination unit 1003 may first determine the orientation of the handheld device relative to the user. Several examples of determining the orientation of a handheld device relative to a user are presented here.
  • the determination unit 1003 may calculate the azimuth of the body feature relative to the handheld device based on the distance data associated with the body feature.
  • FIGS. 4A and 4B show front and top views, respectively, of the orientation of the nose tip relative to the handheld device when the user holds the handheld device with the right and left hands. It should be understood that although FIGS. 4A and 4B use the tip of the nose as an example of a body feature, the present disclosure is not limited thereto, and the body feature may also be the forehead, the mouth, the chin, and the like.
  • The azimuth of the user's nose tip can be calculated as θ = arctan(x / z), where x is the distance of the nose tip in the X direction (e.g., the horizontal direction) and z is the distance of the nose tip in the Z direction (e.g., the normal direction of the sensor plane, that is, the depth direction).
  • The predetermined threshold here is not limited to 0 but may be set appropriately in consideration of tolerances. For example, if the calculated azimuth angle θ is greater than 5°, 10°, or another threshold, it is determined that the handheld device is located to the left of the user, and if the calculated azimuth angle θ is less than -5°, -10°, or another threshold, it is determined that the handheld device is located to the right of the user. A sketch of this test follows.
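  • A minimal sketch of this azimuth test, assuming a single recognized feature such as the nose tip and assuming (as a sign convention not fixed by the disclosure) that positive X points toward the user's left:

```python
import math

def device_side_from_azimuth(x: float, z: float, thresh_deg: float = 5.0):
    """Side of the user on which the device lies, from one body feature.

    x: feature offset in the sensor X (horizontal) direction.
    z: feature depth in the sensor-normal (Z) direction.
    The azimuth is theta = arctan(x / z); the threshold absorbs tolerances.
    """
    theta = math.degrees(math.atan2(x, z))
    if theta > thresh_deg:
        return "left"    # device to the user's left (under the assumed sign)
    if theta < -thresh_deg:
        return "right"   # device to the user's right
    return None          # within tolerance: ambiguous, use another cue
```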
  • Moreover, the judging unit 1003 may consider not just one body feature but two or more body features, such as the eyes, the shoulders, and the like.
  • FIGS. 5A and 5B illustrate an example of calculating the azimuth angles for the eyes.
  • When the handheld device is located to the user's right, the azimuth angle θR of the right eye should be smaller than the azimuth angle θL of the left eye; and when the handheld device is located to the user's left, the azimuth angle θR of the right eye should be greater than the azimuth angle θL of the left eye. Therefore, by comparing the azimuth angles of the two eyes, it can be determined whether the handheld device is located to the left or right of the user.
  • In other words, the direction in which the handheld device is located relative to the user always corresponds to the body feature (e.g., the eye) having the smaller azimuth angle.
  • When the handheld device is located to the right of the user, this corresponds to the right eye having the smaller azimuth angle; when the handheld device is located to the left of the user, this corresponds to the left eye having the smaller azimuth angle. This means that the operating hand has an ipsilateral (same-side) relationship to the body feature with the smaller azimuth angle.
  • Alternatively, the determination unit 1003 may calculate the distances between the body features and the handheld device based on distance data associated with a pair of body features.
  • FIGS. 6A and 6B illustrate an example of calculating the distances for the eyes.
  • The physical features considered here are not limited to the eyes, but may be any pair of left-right symmetrical physical features, such as the shoulders.
  • The distance of either eye can be calculated as d = √(x² + z²), where x is the distance of the eye in the X direction (e.g., the horizontal direction) and z is the distance of the eye in the Z direction (e.g., the normal direction of the sensor plane, that is, the depth direction).
  • When the handheld device is located to the user's right, the distance dR of the right eye should be smaller than the distance dL of the left eye, and when the handheld device is located to the user's left, the distance dR of the right eye should be greater than the distance dL of the left eye. Therefore, by comparing the distances of the two eyes, it can be determined whether the handheld device is located to the left or right of the user. Likewise, the direction in which the handheld device is located relative to the user always corresponds to the eye with the smaller distance; that is, the operating hand has an ipsilateral relationship to the body feature with the smaller distance. A sketch of this comparison follows.
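  • A minimal sketch of the pair-distance comparison, assuming each eye's distance data is available as (x, z) components; per the ipsilateral rule above, the nearer feature maps to the operating hand:

```python
import math

def operating_hand_from_eye_distances(right_eye, left_eye) -> str:
    """Each eye is (x, z) distance data; its distance is d = sqrt(x^2 + z^2).
    The operating hand is on the same side as the eye with the smaller d."""
    d_right = math.hypot(*right_eye)
    d_left = math.hypot(*left_eye)
    return "right" if d_right < d_left else "left"

# Example: the right eye is nearer the camera -> right hand judged operating.
print(operating_hand_from_eye_distances((0.02, 0.28), (0.07, 0.30)))
```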
  • As yet another example, a machine learning approach can be employed: a model, such as a neural network model, can be constructed using depth images or distance data as the input training set and the orientation of the handheld device relative to the user as the output training set. In use, the judging unit 1003 may input the corresponding depth image, or the distance data obtained by the acquisition unit 1002, into the model to obtain the model's predicted output.
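  • As one possible realization of this machine learning approach (the disclosure does not fix a model type), the sketch below trains a logistic-regression classifier on distance data; the training rows, labels, and library choice are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training set: each row is distance data (x_R, z_R, x_L, z_L) for the
# two eyes; label 1 = device to the user's right, 0 = device to the left.
X_train = np.array([[ 0.02, 0.28,  0.07, 0.30],
                    [-0.07, 0.30, -0.02, 0.28]])
y_train = np.array([1, 0])

model = LogisticRegression().fit(X_train, y_train)

def predicted_side(distance_data) -> str:
    """Feed new distance data to the trained model, as the judging unit would."""
    label = model.predict(np.asarray(distance_data).reshape(1, -1))[0]
    return "right" if label else "left"
```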
  • Having determined the orientation, the determination unit 1003 can judge which hand the user is using to hold the handheld device. Typically, when the user holds the handheld device with the left hand, the device is located to the front left of the user, and when the user holds it with the right hand, the device is located to the front right of the user. Thus, the determination unit 1003 can judge the user's operating hand from the direction of the handheld device relative to the user.
  • However, the judging unit 1003 may sometimes obtain an erroneous judgment result.
  • the processing circuit 1001 may also correct the judgment result of the judgment unit 1003 using sensing data from additional sensors other than the depth camera.
  • For example, the processing circuit 1001 can use the sensing data of the acceleration sensor to detect the motion trajectory of the handheld device. If the handheld device moves from left to right, crossing in front of the user to the user's right, the processing circuit 1001 can, in combination with the detected motion trajectory, correct the judgment result of the judging unit 1003 to the left hand, and vice versa. A naive sketch of such trajectory detection follows.
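  • A naive sketch of such trajectory detection, assuming gravity-compensated lateral acceleration samples from the acceleration sensor; the displacement threshold is a placeholder, and a practical implementation would need drift handling:

```python
def detect_midline_crossing(lateral_accel, dt, thresh_m=0.1):
    """Doubly integrate lateral acceleration (m/s^2) to estimate left-right
    displacement and report whether the device crossed the user's midline."""
    v = x = 0.0
    xs = []
    for a in lateral_accel:
        v += a * dt
        x += v * dt
        xs.append(x)
    if min(xs) < -thresh_m and xs[-1] > thresh_m:
        return "left-to-right"
    if max(xs) > thresh_m and xs[-1] < -thresh_m:
        return "right-to-left"
    return None
```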
  • the processing circuit 1001 can use the sensing data of the gyro sensor to detect the posture of the handheld device, and integrate ergonomics and face position to comprehensively judge the user's hand.
  • Furthermore, the processing circuit 1001 may control the display of the judgment result of the judging unit 1003 and prompt the user for confirmation. For example, "Detected that the current operating hand is the left hand. Is this correct?" may be displayed on the screen, and the user may confirm by tapping "Yes" or "No".
  • According to the result of the judgment, the presentation unit 1004 of the processing circuit 1001 may present a UI suited to operation by the user's hand (i.e., perform step S1003 in FIG. 3B).
  • For example, when the left hand is judged to be the operating hand, the presentation unit 1004 may place the UI components requiring user operation close to the left side of the screen so as to be within reach of the fingers of the user's left hand.
  • Likewise, when the right hand is judged to be the operating hand, the presentation unit 1004 may place the UI components requiring user operation close to the right side of the screen so as to be within reach of the fingers of the user's right hand. A sketch of such hand-aware layout follows.
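  • A minimal sketch of such hand-aware presentation; the layout-hint keys are hypothetical names for illustration, not identifiers from the disclosure:

```python
def layout_hints(operating_hand: str) -> dict:
    """Map the judged operating hand to layout hints that keep interactive
    UI components within reach of that hand's thumb."""
    side = "left" if operating_hand == "left" else "right"
    return {
        "key_panel_alignment": side,             # virtual keys hug the near edge
        "primary_button_corner": "bottom-" + side,
        "gesture_drawer_edge": side,
    }
```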
  • In this way, the hand-judgment method of the present disclosure can infer which hand the user is holding and operating the handheld device with using only the depth camera commonly provided on such devices, without requiring any dedicated sensor.
  • the hand judgment method of the present disclosure is less affected by lighting conditions, and can ensure a wide range of application scenarios and high accuracy.
  • The method described above can be performed on many occasions to make it convenient for the user to operate the UI.
  • For example, the technique of the present disclosure can be performed when the user performs facial recognition or presses the unlock key: the depth camera is activated to capture a depth image, the operating hand is judged based on the captured depth image, and a UI suited to the user's operation is presented directly on the unlocked screen.
  • As another example, hand judgment according to the present disclosure may be initiated when a camera or another sensor detects a reversing change in the orientation of the handheld device relative to the user, e.g., a move from the user's left to right or from right to left. The handheld device can present on the screen a query asking whether the operating hand has changed; if the response received from the user confirms the change, the changed UI is presented; otherwise, if the user responds that the hand has not changed, the UI need not change either.
  • An electronic device for a handheld device, comprising: a processing circuit configured to: obtain distance data associated with one or more physical features of a user of the handheld device; determine the direction of the handheld device relative to the user based on the distance data; judge, based on the determined direction, the hand of the user currently operating the handheld device; and present, according to the judgment result, a user interface (UI) suited to operation by the user's hand.
  • processing circuit is further configured to: identify the one or more body features based on a depth image about the user obtained by a depth camera, and obtain the distance data.
  • processing circuit is further configured to: identify the one or more body features based on an RGB image of the user obtained by an RGB camera; and obtain distance data associated with the identified one or more body features from the depth image obtained by the depth camera.
  • processing circuit is further configured to: based on the distance data, calculate an azimuth angle of the one or more body features of the user relative to the handheld device, and determine the orientation of the handheld device relative to the user based on the azimuth angle.
  • processing circuit is further configured to: based on the distance data, calculate and compare the distances between a pair of left-right symmetrical body features of the user and the handheld device, and determine the orientation of the handheld device relative to the user based on the distances.
  • processing circuit is further configured to: when the direction of the handheld device relative to the user reverses, present to the user a query as to whether the operating hand has changed; receive a response to the query from the user; and determine whether to change the user interface (UI) according to the received response.
  • the depth camera is one of the following: a time-of-flight (TOF) camera, a structured light camera, and a binocular stereo vision camera.
  • a handheld device, comprising: a depth camera for obtaining a depth image of a user of the handheld device; and a processing circuit configured to: obtain, based on the depth image, distance data associated with one or more body features of the user; determine the direction of the handheld device relative to the user based on the distance data; judge the hand of the user currently operating the handheld device based on the determined direction; and present, according to the judgment result, a user interface (UI) suited to operation by the user's hand.
  • processing circuit is further configured to: based on the depth image, identify the one or more body features, and obtain the distance data.
  • the handheld device further comprising an RGB camera for capturing an RGB image
  • the processing circuit is further configured to: identify the one or more body features based on the RGB image of the user captured by the RGB camera; and obtain distance data associated with the identified one or more body features from the depth image obtained by the depth camera.
  • the handheld device, wherein the processing circuit is further configured to: based on the distance data, calculate and compare the distances between a pair of left-right symmetrical body features of the user and the handheld device, and determine the orientation of the handheld device relative to the user based on the distances.
  • the depth camera is one of the following: a time-of-flight (TOF) camera, a structured light camera, and a binocular stereo camera.
  • a method for a handheld device, comprising: obtaining distance data associated with one or more physical features of a user of the handheld device; determining, based on the distance data, the direction of the handheld device relative to the user; judging, based on the determined direction, the hand of the user currently operating the handheld device; and presenting, according to the judgment result, a user interface (UI) suited to operation by the user's hand.
  • a non-transitory computer-readable storage medium storing executable instructions which, when executed, implement the method according to 15).
  • a plurality of functions included in one unit in the above embodiments may be implemented by separate devices.
  • multiple functions implemented by multiple units in the above embodiments may be implemented by separate devices, respectively.
  • one of the above functions may be implemented by multiple units. Needless to say, such a configuration is included in the technical scope of the present disclosure.
  • the steps described in the flowcharts include not only processing performed in time series in the stated order, but also processing performed in parallel or individually rather than necessarily in time series. Furthermore, even in the steps processed in time series, needless to say, the order can be appropriately changed.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Environmental & Geological Engineering (AREA)
  • Multimedia (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The present disclosure relates to an electronic device, a method and a storage medium. An electronic device for a hand-held device, comprising a processing circuit, the processing circuit being configured to: acquire distance data associated with one or more physical features of a user of the hand-held device; determine a direction of the hand-held device relative to the user on the basis of the distance data, so as to determine the hand of the user currently operating the hand-held device; and present, according to a determination result, a user interface (UI) suitable for the operation by the hand of the user.

Description

Electronic device, method and storage medium
This application claims priority to Chinese Patent Application No. 202011216016.8, filed on Nov. 4, 2020, entitled "Electronic Device, Method, and Storage Medium," the disclosure of which is hereby incorporated herein in its entirety.
Technical Field
The present disclosure relates generally to handheld user devices such as smartphones and, more particularly, to a method of determining the hand with which a user is operating such a handheld device.
Background
In recent years, handheld user devices such as smartphones have become increasingly popular. Such handheld devices are usually equipped with a touch screen which, in addition to visual presentation, provides interaction with the user by means of a UI containing virtual operating components (such as virtual keys) displayed on the screen, thereby reducing reliance on physical keys. However, to provide a better operating and visual experience, the screen sizes of handheld devices keep growing, which in effect makes it harder for users to operate a device while holding it with one hand. Although the UI design can be optimized so that the user's hand can easily complete every operation on the screen, this requires determining whether the user is currently operating the device with the left hand or the right hand, so that the UI can be presented in a form suited to that hand.
Various methods for judging the user's operating hand have been proposed. For example, Patent Document 1 (CN105468269A) discloses using proximity sensors 201-209 and 201'-209' mounted on a mobile phone to detect which hand of the user is holding the phone, as shown in FIG. 1A. Patent Document 2 (JP2012023554A) discloses using a sensor such as a gyroscope or an accelerometer to detect and judge which hand of the user is holding the mobile phone, as shown in FIG. 1B. However, gyroscopes and accelerometers are affected by the motion state and by gravity (the vertical direction), so this judgment method performs poorly when the user is walking or lying down, limiting its application scenarios. In addition, Patent Document 3 (CN108958603A) discloses using an ordinary RGB camera to capture the user's face and judging the operating hand from the differing visible areas of the left and right halves of the face. However, RGB sensors are limited by lighting conditions and the accuracy is low, because the area ratio is ambiguous under different head poses.
Therefore, there is a need to judge which hand the user is operating the device with, both with high accuracy and across a wide range of application scenarios.
Summary of the Invention
This section presents a brief summary of the present disclosure in order to provide a basic understanding of some of its aspects. It should be understood, however, that this summary is not an exhaustive overview of the present disclosure. It is not intended to identify key or critical parts of the disclosure, nor to limit its scope. Its sole purpose is to present some concepts related to the disclosure in a simplified form as a prelude to the more detailed description given later.
According to one aspect of the present disclosure, there is provided an electronic device for a handheld device, including a processing circuit configured to: obtain distance data associated with one or more physical features of a user of the handheld device; determine the direction of the handheld device relative to the user based on the distance data; judge, based on the determined direction, the hand with which the user is currently operating the handheld device; and present, according to the judgment result, a user interface (UI) suited to operation by the user's hand.
According to another aspect of the present disclosure, there is provided a handheld device, including: a depth camera for obtaining a depth image of a user of the handheld device; and a processing circuit configured to: obtain, based on the depth image, distance data associated with one or more physical features of the user; determine the direction of the handheld device relative to the user based on the distance data; judge, based on the determined direction, the hand with which the user is currently operating the handheld device; and present, according to the judgment result, a user interface (UI) suited to operation by the user's hand.
According to another aspect of the present disclosure, there is provided a method for a handheld device, including: obtaining distance data associated with one or more physical features of a user of the handheld device; determining, based on the distance data, the direction of the handheld device relative to the user; judging, based on the determined direction, the hand with which the user is currently operating the handheld device; and presenting, according to the judgment result, a user interface (UI) suited to operation by the user's hand.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing executable instructions that, when executed, implement the method described above.
By applying one or more aspects of the present disclosure, the hand with which a user operates a handheld device can be judged conveniently and accurately, and a UI suited to the user's operation can be presented.
Description of the Drawings
The present disclosure may be better understood by reference to the following detailed description taken in conjunction with the accompanying drawings, in which the same or similar reference numerals are used throughout to denote the same or similar elements. The accompanying drawings, together with the following detailed description, are incorporated in and form a part of this specification, and serve to further illustrate embodiments of the disclosure and to explain its principles and advantages. In the drawings:
FIGS. 1A-1B show schematic diagrams of prior-art ways of judging the operating hand;
FIG. 2 shows the hardware configuration of a smartphone as an example of a handheld device;
FIG. 3A shows a block diagram of an electronic device according to the present disclosure;
FIG. 3B shows a flowchart of a method according to the present disclosure;
FIGS. 4A-4B illustrate an example of determining the orientation of a handheld device relative to a user;
FIGS. 5A-5B illustrate another example of determining the orientation of a handheld device relative to a user;
FIGS. 6A-6B illustrate another example of determining the orientation of a handheld device relative to a user.
Detailed Description
Various exemplary embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. For clarity and conciseness, not all implementations of the embodiments are described in this specification. It should be noted, however, that many implementation-specific settings may be made according to particular needs when implementing embodiments of the present disclosure.
In addition, to avoid obscuring the present disclosure with unnecessary detail, only the processing steps and/or device structures closely related to the technical solutions of the present disclosure are shown in the drawings. The following description of exemplary embodiments is illustrative only and is not intended to limit the present disclosure or its applications in any way.
FIG. 2 is a block diagram showing a hardware configuration of a smartphone 1600 as an example of the handheld device of the present disclosure. It should be understood that although the present disclosure is described taking a smartphone as an example, the handheld devices to which the technical content of the present disclosure can be applied are not limited to smartphones, and may be implemented as various types of mobile terminals, such as tablet computers, personal computers (PCs), palmtop computers, smart assistants, portable game terminals, portable digital cameras, and the like.
Smartphone 1600 includes a processor 1601, memory 1602, storage device 1603, external connection interface 1604, RGB camera 1605, depth camera 1606, sensors 1607, microphone 1608, input device 1609, display device 1610, speaker 1611, wireless communication subsystem 1612, bus 1617, battery 1618, and the like; these components are connected to each other through the bus 1617. The battery 1618 provides power to the various components of the smartphone 1600 via feeder lines (not shown in FIG. 2).
The processor 1601 may be implemented as, for example, a CPU or a system on a chip (SoC), and generally controls the functions of the smartphone 1600. The processor 1601 may include various implementations of digital circuitry, analog circuitry, or mixed-signal (combined analog and digital) circuitry that perform functions in a computing system, such as circuits like integrated circuits (ICs) and application-specific integrated circuits (ASICs), portions or circuits of individual processor cores, entire processor cores, individual processors, programmable hardware devices such as field-programmable gate arrays (FPGAs), and/or systems including multiple processors.
The memory 1602 is used to store data and programs executed by the processor 1601, and may be, for example, volatile and/or non-volatile memory, including but not limited to random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), read-only memory (ROM), flash memory, and the like. The storage device 1603 may include storage media such as semiconductor memories and hard disks in addition to the memory 1602. The external connection interface 1604 is an interface for connecting external devices, such as memory cards and Universal Serial Bus (USB) devices, to the smartphone 1600.
The input device 1609 includes, for example, a keyboard, keypad, keys, or switches for receiving operations or information input from the user. The display device 1610 includes a screen, such as a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display, and displays output images of the smartphone 1600, such as a UI including virtual operating components. Typically, the display device 1610 may be implemented as a touch screen that includes a touch sensor configured to detect touches on the screen of the display device 1610, whereby the touch screen serves both as a display device and as an input device.
The microphone 1608 converts sound input to the smartphone 1600 into an audio signal. Using voice recognition technology, instructions from the user can be input into the smartphone 1600 in the form of speech through the microphone 1608, whereby the microphone 1608 can also serve as an input device. The speaker 1611 converts audio signals output from the smartphone 1600 into sound.
The wireless communication subsystem 1612 is used to perform wireless communication. The wireless communication subsystem 1612 may support any cellular communication scheme (such as 4G LTE or 5G NR), or another type of wireless communication scheme, such as a short-range wireless communication scheme, a near field communication scheme, or a wireless local area network (LAN) scheme. The wireless communication subsystem 1612 may include, for example, a BB processor and RF circuitry, antenna switches, antennas, and the like.
The smartphone 1600 also includes an RGB camera 1605 for capturing images of the outside world. The RGB camera 1605 includes optical components and an image sensor (such as a charge-coupled device (CCD) sensor or a complementary metal-oxide-semiconductor (CMOS) sensor). Light passing through the optics is captured and photoelectrically converted by the image sensor, and processed by, for example, the processor 1601 to generate an image. The smartphone 1600 may include more than one RGB camera 1605, such as multiple front or rear cameras. The RGB camera 1605 may be composed of a wide-angle camera module, an ultra-wide-angle camera module, a telephoto camera module, and the like, to provide excellent imaging.
In addition to the traditional RGB camera 1605, the smartphone 1600 may also include a depth camera 1606 for obtaining depth information. Depth cameras are sometimes referred to as 3D cameras; as the name suggests, they detect the distance to objects in the captured scene, and they are increasingly used in applications such as object recognition, behavior recognition, and scene modeling. Compared with a traditional camera, a depth camera adds a depth-measurement function, making it more convenient and accurate to perceive the surrounding environment and its changes. Depending on the technology utilized, the depth camera 1606 may include, but is not limited to, a Time of Flight (TOF) camera, a structured light camera, a binocular stereo vision camera, and the like. These depth cameras are briefly introduced below.
A TOF camera uses active sensing to obtain distance by measuring the time of flight of light. Specifically, a light emitter (for example an LED or laser diode) continuously emits modulated light pulses, generally invisible infrared light, toward the target; after the pulses are reflected by an object, a light receiver (for example, a purpose-built CMOS sensor) receives the reflected light. Because of the speed of light, measuring the time of flight directly is impractical; it is generally realized by detecting the phase shift of a light wave that has been modulated in a defined way. Depending on the modulation method, TOF technology can generally be divided into two types: pulsed modulation and continuous-wave modulation. Because the speed of light and the wavelength of the modulated light are known, an arithmetic unit can quickly and accurately compute the depth distance to the object from the phase shift between the emitted and the received light. A TOF camera obtains depth information for the entire image at once, that is, a two-dimensional depth point cloud, in which the value of each point represents the distance between the camera and the object (the depth value), rather than a light intensity value as in an RGB camera. The main advantages of TOF cameras are: 1) a long detection range, up to tens of meters given sufficient laser energy; and 2) low sensitivity to ambient light. However, TOF technology also has some evident drawbacks: it places high demands on the hardware, especially the time-measurement module; it consumes substantial computing resources, since detecting the phase shift requires repeated sampling and integration; and, constrained by resource consumption and filtering, neither the frame rate nor the resolution can be made very high.
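By way of illustration only, the sketch below converts a continuous-wave phase shift into a distance according to the relationship described above; the modulation frequency and phase value are hypothetical, and a real TOF sensor derives the phase from multiple sampled integrations rather than receiving it directly.

```python
import math

C = 299_792_458.0  # speed of light in m/s

def cw_tof_distance(phase_shift_rad: float, f_mod_hz: float) -> float:
    """Distance from a continuous-wave TOF phase shift.

    The round trip adds a phase of 2*pi*f_mod*(2d/c), so
    d = c * phase / (4 * pi * f_mod). The result is unambiguous
    only within c / (2 * f_mod).
    """
    return C * phase_shift_rad / (4.0 * math.pi * f_mod_hz)

# Hypothetical example: 20 MHz modulation, quarter-cycle phase shift.
d = cw_tof_distance(math.pi / 2, 20e6)
print(f"estimated distance: {d:.3f} m")             # ~1.874 m
print(f"ambiguity range: {C / (2 * 20e6):.3f} m")   # ~7.495 m
```

The printed ambiguity range illustrates why practical sensors combine several modulation frequencies: a single frequency cannot distinguish targets one wavelength of modulation apart.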
A structured light camera also uses active sensing. Its principle is to project light with certain structural features (such as a coded pattern or pseudo-random speckle) onto the object being photographed through a light emitter (such as a near-infrared laser), and then to capture the reflected structured light pattern with a dedicated light receiver. Regions of the object at different depths distort the projected pattern differently, so different image phase information is captured, and an arithmetic unit converts these structural changes into depth information according to the triangulation principle. Compared with a TOF camera, a structured light camera requires less computation, consumes less power, and is more accurate at close range, which makes it highly advantageous for face recognition and gesture recognition; however, it is easily disturbed by ambient light, performs poorly outdoors, and its accuracy degrades as the detection distance increases.
A binocular stereo vision camera uses passive sensing. Binocular stereo vision is an important form of machine vision: it uses imaging devices (for example, conventional RGB cameras) to acquire two images of the measured object from different positions, computes the positional disparity between corresponding points in the images based on the parallax principle, and thereby obtains the three-dimensional geometry of the object. Binocular stereo vision has low hardware requirements (ordinary RGB cameras suffice) and works both indoors and outdoors as long as the lighting is suitable. Its drawbacks, however, are also quite evident: it is very sensitive to ambient illumination, unsuitable for monotonous, texture-poor scenes, computationally expensive, and its baseline limits the measurement range.
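To make the parallax principle concrete, the following is a minimal sketch of the standard rectified-stereo relationship Z = f * B / d; the focal length, baseline, and disparity values are hypothetical, and a real pipeline must first rectify the images and find point correspondences.

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from rectified stereo: Z = f * B / d.

    focal_px: focal length in pixels; baseline_m: camera separation in
    meters; disparity_px: horizontal offset of the matched point between
    the left and right views.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 700 px focal length, 6 cm baseline, 35 px disparity.
print(stereo_depth(700.0, 0.06, 35.0))  # 1.2 m
```

The inverse dependence on disparity also shows the baseline limitation mentioned above: for distant objects the disparity shrinks toward the matching noise floor, so depth resolution collapses.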
The smartphone 1600 may also include other sensors 1607, such as an ambient light sensor, a gravity sensor, a proximity sensor, a fingerprint sensor, and the like. In addition, the smartphone 1600 is usually equipped with a gyroscope sensor, an acceleration sensor, and the like, in order to detect its motion state or attitude.
It should be understood that the above merely schematically describes a smartphone as an example of a handheld device and its representative components; it does not mean that all of these components are necessary for a handheld device according to the present disclosure.
According to embodiments of the present disclosure, a handheld device can adaptively present its UI according to which hand the user is operating the device with. An electronic device for adaptive UI presentation according to the present disclosure, and the method it performs, will be described below in conjunction with FIGS. 3A and 3B.
FIG. 3A shows a block diagram of an electronic device 1000. The electronic device 1000 includes a processing circuit 1001 and potentially other circuits. The processing circuit 1001 may, for example, be implemented as a processor such as the processor 1601 described above. As shown in FIG. 3A, the processing circuit 1001 includes an acquisition unit 1002, a determination unit 1003, and a presentation unit 1004, and may be configured to perform the method shown in FIG. 3B.
The acquisition unit 1002 of the processing circuit 1001 is configured to acquire distance data associated with one or more physical features of the user of the handheld device (that is, to perform step S1001 in FIG. 3B).
According to embodiments of the present disclosure, the user's physical features may include various facial features or other body features. In one example, the user's physical features include the forehead, nose tip, mouth, chin, and the like. In another example, the user's physical features include a pair of left-right symmetrical features, such as the eyes or shoulders. Which physical features the acquisition unit 1002 acquires distance data for may depend on the specific determination method, as described in detail below.
The distance data associated with the user's physical features may be obtained from a depth image of the user captured by the depth camera. A depth image takes as its pixel values the distance (depth) from the light receiver of the depth camera (for example, a TOF sensor) to the user's body. That is, the depth image contains distance information for the captured physical features of the user. Typically, a depth image can be represented as a two-dimensional point cloud: when the two directions of the depth camera's sensor plane are taken as the X and Y directions (for example, horizontal as X and vertical as Y) and the normal direction of the sensor plane (the depth direction) is taken as the Z direction to establish a coordinate system, the pixel values of the depth image can be expressed as (x_i, y_i, z_i), where x_i, y_i, and z_i are the distances along the three axes.
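As a minimal sketch of this coordinate convention, the code below back-projects one pixel of a depth map into (X, Y, Z) camera coordinates using a pinhole model; the intrinsic parameters and the synthetic depth map are assumptions made for illustration, not values given by the disclosure.

```python
import numpy as np

# Hypothetical pinhole intrinsics (pixels): focal lengths and principal point.
FX, FY, CX, CY = 500.0, 500.0, 320.0, 240.0

def pixel_to_point(depth_m: np.ndarray, u: int, v: int) -> tuple[float, float, float]:
    """Back-project pixel (u, v) of a depth map into camera coordinates.

    Returns (x, y, z): the lateral, vertical, and depth distances of the
    surface point, i.e. the (x_i, y_i, z_i) values described above.
    """
    z = float(depth_m[v, u])
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return x, y, z

depth = np.full((480, 640), 0.45)           # synthetic 0.45 m flat scene
print(pixel_to_point(depth, u=400, v=240))  # point right of the optical axis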
In one example, the processing circuit 1001 may perform image recognition processing on the depth image obtained by the depth camera to identify the required physical features. Many such image recognition techniques already exist in the prior art, such as various classification methods and artificial intelligence; they are not described in detail here. The acquisition unit 1002 may then obtain the corresponding pixel values from the depth image as the distance data associated with the physical features.
In another example, the handheld device may use an RGB camera and a depth camera to capture images of the user simultaneously, and the RGB image obtained by the RGB camera and the depth image obtained by the depth camera may be aligned with each other. There are even cases where the RGB camera and the depth camera are integrated; such a camera can obtain the user's RGB values and distances simultaneously as pixel values. The processing circuit 1001 may perform image recognition processing on the RGB image to identify the required physical features. Then, based on the correspondence between the pixels of the depth image and those of the RGB image, the acquisition unit 1002 may obtain the corresponding pixel values from the depth image as the distance data associated with the physical features.
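A hedged sketch of this aligned lookup follows; `detect_nose_tip` is a hypothetical placeholder standing in for any off-the-shelf landmark detector, and the synthetic images merely illustrate the pixel-correspondence idea.

```python
import numpy as np

def detect_nose_tip(rgb: np.ndarray) -> tuple[int, int]:
    """Hypothetical landmark detector; a real system would call an
    off-the-shelf face-landmark model here. Returns (u, v) pixel coords."""
    h, w, _ = rgb.shape
    return w // 2, h // 2  # placeholder: assume the nose sits at the center

def feature_distance(rgb: np.ndarray, depth_aligned: np.ndarray) -> float:
    """Sample the pixel-aligned depth image at the RGB-detected feature."""
    u, v = detect_nose_tip(rgb)
    return float(depth_aligned[v, u])

rgb = np.zeros((480, 640, 3), dtype=np.uint8)        # synthetic RGB frame
depth = np.full((480, 640), 0.38, dtype=np.float32)  # aligned depth, meters
print(feature_distance(rgb, depth))  # 0.38
```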
The determination unit 1003 may be configured to determine, based on the distance data acquired by the acquisition unit 1002, which hand the user is currently using to operate the handheld device (that is, to perform step S1002 in FIG. 3B). Specifically, based on the distance data associated with one or more physical features of the user, the determination unit 1003 may first determine the orientation of the handheld device relative to the user. Several examples of determining the orientation of the handheld device relative to the user are presented here.
In one example, the determination unit 1003 may calculate the azimuth of a physical feature relative to the handheld device based on the distance data associated with that feature. FIGS. 4A and 4B show a front view and a top view, respectively, of the direction of the nose tip relative to the handheld device when the user holds the device with the right hand and with the left hand. It should be understood that although FIGS. 4A and 4B use the nose tip as an example of a physical feature, the present disclosure is not limited thereto; the physical feature may also be the forehead, mouth, chin, and so on.
As shown in the figures, if a coordinate system is established with the depth camera's sensor as the origin, the azimuth of the user's nose tip can be calculated as

θ = arctan(x / z)

where x is the distance of the nose tip in the X direction (for example, the horizontal direction) and z is the distance of the nose tip in the Z direction (for example, the normal direction of the sensor plane, that is, the depth direction). As shown in FIG. 4A, when the handheld device is located to the user's right, x is positive and θ should therefore be greater than 0; as shown in FIG. 4B, when the handheld device is located to the user's left, x is negative and θ should therefore be less than 0. Thus, by comparing the calculated azimuth with a predetermined threshold (for example, 0), the orientation of the handheld device relative to the user can be determined.
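As a hedged sketch (not the disclosure's literal implementation), the function below computes the azimuth with atan2, which equals arctan(x/z) for positive depth, and applies a symmetric tolerance threshold of the kind discussed in the next paragraph:

```python
import math

def device_side_from_feature(x: float, z: float, threshold_deg: float = 5.0) -> str:
    """Classify which side of the user the device is on from one feature.

    x: lateral offset of the feature (e.g. nose tip) from the sensor axis;
    z: depth of the feature along the sensor normal. Positive azimuth
    means the device sits to the user's right.
    """
    theta = math.degrees(math.atan2(x, z))
    if theta > threshold_deg:
        return "right"          # device to the user's right
    if theta < -threshold_deg:
        return "left"           # device to the user's left
    return "undetermined"       # within tolerance: fall back to other cues

print(device_side_from_feature(x=0.08, z=0.35))   # right (theta ~ 12.9 deg)
print(device_side_from_feature(x=-0.10, z=0.40))  # left  (theta ~ -14.0 deg)
```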
It should be understood that the predetermined threshold here is not limited to 0 but may be set appropriately with tolerance taken into account. For example, if the calculated azimuth θ is greater than 5°, 10°, or another threshold, the handheld device is determined to be located to the user's right, and if the calculated azimuth θ is less than -5°, -10°, or another threshold, the handheld device is determined to be located to the user's left.
In another example, the determination unit 1003 may consider not just one physical feature but two or even more, such as the eyes or shoulders. FIGS. 5A and 5B show an example of calculating azimuths for the eyes. As shown in FIG. 5A, when the handheld device is located to the user's right, the azimuth θR of the right eye should be smaller than the azimuth θL of the left eye; and when the handheld device is located to the user's left, the azimuth θR of the right eye should be greater than the azimuth θL of the left eye. Therefore, by comparing the azimuths of the two eyes, it can be determined whether the handheld device is located to the user's left or right.
In fact, as can be seen from FIGS. 5A and 5B, the side on which the handheld device is located relative to the user always corresponds to the physical feature (for example, the eye) with the smaller azimuth. For example, in FIG. 5A the handheld device is located to the user's right, corresponding to the right eye, which has the smaller azimuth, while in FIG. 5B the handheld device is located to the user's left, corresponding to the left eye, which has the smaller azimuth. This means that the operating hand has an ipsilateral (same-side) relationship with the physical feature having the smaller azimuth.
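Under the same assumed coordinate conventions, the ipsilateral rule for a feature pair reduces to a single comparison; the eye coordinates below are hypothetical values expressed in the depth camera's frame:

```python
import math

def device_side_from_eyes(right_eye: tuple[float, float],
                          left_eye: tuple[float, float]) -> str:
    """Ipsilateral rule: the device lies on the side of the eye whose
    (absolute) azimuth relative to the sensor is smaller.

    Each eye is given as (x, z): lateral offset and depth in meters.
    """
    az_r = abs(math.degrees(math.atan2(*right_eye)))
    az_l = abs(math.degrees(math.atan2(*left_eye)))
    return "right" if az_r < az_l else "left"

# Hypothetical (x, z) pairs: device held toward the user's right.
print(device_side_from_eyes(right_eye=(0.02, 0.35), left_eye=(0.09, 0.36)))
# -> "right"
```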
In yet another example, the determination unit 1003 may calculate the distance between each of a pair of physical features and the handheld device based on the distance data associated with that pair. FIGS. 6A and 6B show an example of calculating the distances for the eyes. It should be understood that the physical features considered here are not limited to the eyes but may be any pair of left-right symmetrical features, such as the shoulders. With the coordinate system established as above, the distance of either eye can be calculated as

d_R = √(x_R² + z_R²)
d_L = √(x_L² + z_L²)

where x is the distance of the eye in the X direction (for example, the horizontal direction) and z is the distance of the eye in the Z direction (for example, the normal direction of the sensor plane, that is, the depth direction).
As shown in FIG. 6A, when the handheld device is located to the user's right, the distance dR of the right eye should be smaller than the distance dL of the left eye, and when the handheld device is located to the user's left, the distance dR of the right eye should be greater than the distance dL of the left eye. Therefore, by comparing the distances of the two eyes, it can be determined whether the handheld device is located to the user's left or right. Likewise, it can be observed that the side on which the handheld device is located relative to the user always corresponds to the eye with the smaller distance. This means that the operating hand has an ipsilateral relationship with the physical feature having the smaller distance.
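The same ipsilateral rule applied to Euclidean distances is equally compact; as before, the (x, z) coordinates are hypothetical stand-ins for values read from the depth image:

```python
import math

def device_side_from_eye_distances(right_eye: tuple[float, float],
                                   left_eye: tuple[float, float]) -> str:
    """The device lies on the side of the eye closer to the sensor,
    using d = sqrt(x**2 + z**2) in the sensor's X-Z coordinates."""
    d_r = math.hypot(*right_eye)
    d_l = math.hypot(*left_eye)
    return "right" if d_r < d_l else "left"

print(device_side_from_eye_distances(right_eye=(0.02, 0.34),
                                     left_eye=(0.08, 0.36)))  # -> "right"
```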
Although several exemplary methods of determining the orientation of the handheld device relative to the user based on depth data associated with physical features are described above, the present disclosure is not limited thereto. Any method may be used as long as it achieves the same purpose. For example, a machine learning approach may be adopted: using depth images or distance data as the input training set and the orientation of the handheld device relative to the user as the output training set, a model such as a neural network model can be constructed; in use, the determination unit 1003 may feed the corresponding depth image, or the distance data acquired by the acquisition unit 1002, into the model to obtain its predicted output.
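Purely as one illustrative realization of such a learned determination (the disclosure does not prescribe any particular model), a small scikit-learn classifier can be trained on per-feature distance data; the training set below is synthetically generated from the geometric rule rather than collected from real users:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic training set: rows are [x_right_eye, z_right_eye, x_left_eye,
# z_left_eye]; labels are 0 = device on the left, 1 = device on the right.
X = rng.uniform([-0.15, 0.25, -0.15, 0.25], [0.15, 0.55, 0.15, 0.55], (500, 4))
y = (np.hypot(X[:, 0], X[:, 1]) < np.hypot(X[:, 2], X[:, 3])).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

sample = np.array([[0.02, 0.34, 0.08, 0.36]])  # right eye closer to sensor
print(model.predict(sample))  # expected: [1] -> device on the user's right
```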
After determining the orientation of the handheld device relative to the user, the determination unit 1003 can determine which hand the user is using to hold the device. Typically, when the user holds the handheld device in the left hand, the device is located to the user's front left, and when the user holds it in the right hand, the device is located to the user's front right. The determination unit 1003 can therefore infer the user's operating hand from the orientation of the handheld device relative to the user.
There may be cases where the user holds the handheld device in the left hand but the device is located to the user's right, or vice versa. In such cases, the determination unit 1003 may reach an incorrect result. According to embodiments of the present disclosure, the processing circuit 1001 may also use sensing data from additional sensors other than the depth camera to correct the determination result of the determination unit 1003.
In one example, the processing circuit 1001 may use the sensing data of the acceleration sensor to detect the motion trajectory of the handheld device. Suppose the handheld device moves from left to right, crossing in front of the user to end up on the user's right; the processing circuit 1001 may then combine the detected trajectory to correct the determination result of the determination unit 1003 to the left hand as the operating hand, and vice versa.
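A heavily simplified sketch of such a correction is given below; real accelerometer processing requires gravity removal, filtering, and drift compensation, all of which are glossed over here, and the sample data and thresholds are fabricated for illustration:

```python
def lateral_displacement(accel_x: list[float], dt: float) -> float:
    """Naive double integration of lateral acceleration (m/s^2) to estimate
    net left(-)/right(+) displacement. Ignores gravity, bias, and drift."""
    velocity, position = 0.0, 0.0
    for a in accel_x:
        velocity += a * dt
        position += velocity * dt
    return position

def corrected_hand(depth_based_hand: str, accel_x: list[float], dt: float) -> str:
    """If the device swept rightward across the user, the grip is likely
    the left hand even though the device now sits on the user's right
    (and vice versa); the 0.3 m threshold is hypothetical."""
    disp = lateral_displacement(accel_x, dt)
    if disp > 0.3:
        return "left"
    if disp < -0.3:
        return "right"
    return depth_based_hand

samples = [2.5] * 20 + [-2.5] * 20  # fabricated sweep: accelerate, then brake
print(corrected_hand("right", samples, dt=0.02))  # net ~0.4 m right -> "left"
```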
In another example, the processing circuit 1001 may use the sensing data of the gyroscope sensor to detect the attitude of the handheld device, and combine ergonomics and the position of the user's face to comprehensively determine the user's operating hand.
In yet another example, the processing circuit 1001 may control the display of the determination result of the determination unit 1003 and prompt the user to confirm it. For example, "Detected that the current operating hand is the left hand?" may be displayed on the screen, and the user may confirm by tapping "Yes" or "No".
Returning to FIGS. 3A and 3B, according to the determined operating hand, the presentation unit 1004 of the processing circuit 1001 may present a UI suited to operation by that hand (that is, perform step S1003 in FIG. 3B). For example, when it is determined that the user is holding the handheld device with the left hand, the presentation unit 1004 may present the UI components requiring user operation close to the left side of the screen so that they are within reach of the fingers of the user's left hand. Conversely, when it is determined that the user is holding the handheld device with the right hand, the presentation unit 1004 may present the UI components requiring user operation close to the right side of the screen so that they are within reach of the fingers of the user's right hand.
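As one hedged illustration of how such adaptive placement could be realized (the widget model, screen dimensions, and margins below are invented for illustration and are not specified by the disclosure):

```python
from dataclasses import dataclass

SCREEN_W, SCREEN_H = 1080, 2340  # hypothetical portrait screen, in pixels
MARGIN = 48

@dataclass
class Widget:
    name: str
    w: int
    h: int
    x: int = 0
    y: int = 0

def layout_for_hand(widgets: list[Widget], hand: str) -> list[Widget]:
    """Stack interactive widgets along the edge nearest the operating
    hand's thumb, bottom-up, so they stay within easy reach."""
    y = SCREEN_H - MARGIN
    for wdg in widgets:
        y -= wdg.h + MARGIN
        wdg.x = MARGIN if hand == "left" else SCREEN_W - MARGIN - wdg.w
        wdg.y = y
    return widgets

buttons = [Widget("confirm", 300, 120), Widget("back", 300, 120)]
for wdg in layout_for_hand(buttons, hand="left"):
    print(wdg)
```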
The operating-hand determination method of the present disclosure can infer which hand the user is holding and operating the handheld device with using only the depth camera commonly provided on such devices, without requiring any dedicated sensor. In addition, the method is only slightly affected by lighting conditions, which ensures a wide range of application scenarios and high accuracy.
The method process described above can be performed on many occasions to make it convenient for the user to operate the UI.
For example, the technique of the present disclosure may be performed when the user unlocks the handheld device: the depth camera may be activated to capture a depth image while the user performs facial recognition or presses the unlock key, and the operating hand is determined based on the captured depth image, so that a UI suited to the user's operation is presented directly on the unlocked screen.
Alternatively, when a change in the attitude or usage scenario of the handheld device is detected through, for example, a gyroscope sensor or an acceleration sensor (for instance, the user changes from standing to lying down, the user picks up the phone, or the user switches between left and right hands), the change triggers execution of the method process of the present disclosure.
Furthermore, when a reversal in the orientation of the handheld device relative to the user is detected through the camera or another sensor (for example, the device moves from the user's left to the right, or from the right to the left), the operating-hand determination of the present disclosure can be initiated. The handheld device may present on the screen a query asking whether the operating hand has changed; if the response received from the user confirms the change, the changed UI is then presented, whereas if the user responds that the operating hand has not changed, the UI need not change either.
According to the embodiments of the present disclosure, various implementations of the concepts of the present disclosure are conceivable, including but not limited to:
1) An electronic device for a handheld device, comprising: a processing circuit configured to: acquire distance data associated with one or more physical features of a user of the handheld device; determine an orientation of the handheld device relative to the user based on the distance data; determine, based on the determined orientation, the hand with which the user is currently operating the handheld device; and present, according to a result of the determination, a user interface (UI) suited to operation by the user's operating hand.

2) The electronic device according to 1), wherein the processing circuit is further configured to: identify the one or more physical features based on a depth image of the user obtained by a depth camera, and acquire the distance data.

3) The electronic device according to 1), wherein the processing circuit is further configured to: identify the one or more physical features based on an RGB image of the user obtained by an RGB camera; and acquire distance data associated with the identified one or more physical features from a depth image obtained by a depth camera.

4) The electronic device according to 1), wherein the processing circuit is further configured to: calculate, based on the distance data, an azimuth of one or more physical features of the user relative to the handheld device, and determine the orientation of the handheld device relative to the user based on the azimuth.

5) The electronic device according to 1), wherein the processing circuit is further configured to: calculate and compare, based on the distance data, the distances between a pair of left-right symmetrical physical features of the user and the handheld device, and determine the orientation of the handheld device relative to the user based on the distances.

6) The electronic device according to 1), wherein the physical features include at least one of: nose tip, chin, forehead, mouth, eyes, shoulders.

7) The electronic device according to 1), wherein the processing circuit is further configured to: when a reversal occurs in the orientation of the handheld device relative to the user, present to the user a query as to whether the operating hand has changed; receive a response to the query from the user; and determine whether to change the user interface (UI) according to the received response.

8) The electronic device according to 1), wherein the depth camera is one of: a time-of-flight (TOF) camera, a structured light camera, a binocular stereo vision camera.

9) A handheld device, comprising: a depth camera for obtaining a depth image of a user of the handheld device; and a processing circuit configured to: acquire, based on the depth image, distance data associated with one or more physical features of the user; determine an orientation of the handheld device relative to the user based on the distance data; determine, based on the determined orientation, the hand with which the user is currently operating the handheld device; and present, according to a result of the determination, a user interface (UI) suited to operation by the user's operating hand.

10) The handheld device according to 9), wherein the processing circuit is further configured to: identify the one or more physical features based on the depth image, and acquire the distance data.

11) The handheld device according to 9), further comprising an RGB camera for capturing an RGB image, wherein the processing circuit is further configured to: identify the one or more physical features based on the RGB image of the user captured by the RGB camera; and acquire distance data associated with the identified one or more physical features from the depth image obtained by the depth camera.

12) The handheld device according to 9), wherein the processing circuit is further configured to: calculate, based on the distance data, an azimuth of one or more physical features of the user relative to the handheld device, and determine the orientation of the handheld device relative to the user based on the azimuth.

13) The handheld device according to 9), wherein the processing circuit is further configured to: calculate and compare, based on the distance data, the distances between a pair of left-right symmetrical physical features of the user and the handheld device, and determine the orientation of the handheld device relative to the user based on the distances.

14) The handheld device according to 9), wherein the depth camera is one of: a time-of-flight (TOF) camera, a structured light camera, a binocular stereo vision camera.

15) A method for a handheld device, comprising: acquiring distance data associated with one or more physical features of a user of the handheld device; determining an orientation of the handheld device relative to the user based on the distance data; determining, based on the determined orientation, the hand with which the user is currently operating the handheld device; and presenting, according to a result of the determination, a user interface (UI) suited to operation by the user's operating hand.

16) A non-transitory computer-readable storage medium storing executable instructions which, when executed, implement the method according to 15).
Exemplary embodiments of the present disclosure have been described above with reference to the accompanying drawings, but the present disclosure is of course not limited to the above examples. Those skilled in the art may make various changes and modifications within the scope of the appended claims, and it should be understood that such changes and modifications naturally fall within the technical scope of the present disclosure.
For example, a plurality of functions included in one unit in the above embodiments may be implemented by separate devices. Alternatively, a plurality of functions implemented by a plurality of units in the above embodiments may each be implemented by separate devices. In addition, one of the above functions may be implemented by a plurality of units. Needless to say, such configurations are included within the technical scope of the present disclosure.
In this specification, the steps described in the flowcharts include not only processing performed in time series in the stated order, but also processing performed in parallel or individually rather than necessarily in time series. Furthermore, even for steps processed in time series, needless to say, the order may be changed appropriately.
Although the present disclosure and its advantages have been described in detail, it should be understood that various changes, substitutions, and alterations can be made without departing from the spirit and scope of the present disclosure as defined by the appended claims. Moreover, the terms "comprising", "including", or any other variation thereof in the embodiments of the present disclosure are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that comprises a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element qualified by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes the element.

Claims (16)

1. An electronic device for a handheld device, comprising:
    a processing circuit configured to:
    acquire distance data associated with one or more physical features of a user of the handheld device;
    determine an orientation of the handheld device relative to the user based on the distance data;
    determine, based on the determined orientation, the hand with which the user is currently operating the handheld device; and
    present, according to a result of the determination, a user interface (UI) suited to operation by the user's operating hand.

2. The electronic device according to claim 1, wherein the processing circuit is further configured to:
    identify the one or more physical features based on a depth image of the user obtained by a depth camera, and acquire the distance data.

3. The electronic device according to claim 1, wherein the processing circuit is further configured to:
    identify the one or more physical features based on an RGB image of the user obtained by an RGB camera; and
    acquire distance data associated with the identified one or more physical features from a depth image obtained by a depth camera.

4. The electronic device according to claim 1, wherein the processing circuit is further configured to:
    calculate, based on the distance data, an azimuth of one or more physical features of the user relative to the handheld device, and determine the orientation of the handheld device relative to the user based on the azimuth.

5. The electronic device according to claim 1, wherein the processing circuit is further configured to:
    calculate and compare, based on the distance data, the distances between a pair of left-right symmetrical physical features of the user and the handheld device, and determine the orientation of the handheld device relative to the user based on the distances.

6. The electronic device according to claim 1, wherein the physical features include at least one of: nose tip, chin, forehead, mouth, eyes, shoulders.

7. The electronic device according to claim 1, wherein the processing circuit is further configured to:
    when a reversal occurs in the orientation of the handheld device relative to the user, present to the user a query as to whether the operating hand has changed;
    receive a response to the query from the user; and
    determine whether to change the user interface (UI) according to the received response.

8. The electronic device according to claim 1, wherein the depth camera is one of: a time-of-flight (TOF) camera, a structured light camera, a binocular stereo vision camera.

9. A handheld device, comprising:
    a depth camera for obtaining a depth image of a user of the handheld device; and
    a processing circuit configured to:
    acquire, based on the depth image, distance data associated with one or more physical features of the user;
    determine an orientation of the handheld device relative to the user based on the distance data;
    determine, based on the determined orientation, the hand with which the user is currently operating the handheld device; and
    present, according to a result of the determination, a user interface (UI) suited to operation by the user's operating hand.

10. The handheld device according to claim 9, wherein the processing circuit is further configured to:
    identify the one or more physical features based on the depth image, and acquire the distance data.

11. The handheld device according to claim 9, further comprising an RGB camera for capturing an RGB image,
    wherein the processing circuit is further configured to:
    identify the one or more physical features based on the RGB image of the user captured by the RGB camera; and
    acquire distance data associated with the identified one or more physical features from the depth image obtained by the depth camera.

12. The handheld device according to claim 9, wherein the processing circuit is further configured to:
    calculate, based on the distance data, an azimuth of one or more physical features of the user relative to the handheld device, and determine the orientation of the handheld device relative to the user based on the azimuth.

13. The handheld device according to claim 9, wherein the processing circuit is further configured to:
    calculate and compare, based on the distance data, the distances between a pair of left-right symmetrical physical features of the user and the handheld device, and determine the orientation of the handheld device relative to the user based on the distances.

14. The handheld device according to claim 9, wherein the depth camera is one of: a time-of-flight (TOF) camera, a structured light camera, a binocular stereo vision camera.

15. A method for a handheld device, comprising:
    acquiring distance data associated with one or more physical features of a user of the handheld device;
    determining an orientation of the handheld device relative to the user based on the distance data;
    determining, based on the determined orientation, the hand with which the user is currently operating the handheld device; and
    presenting, according to a result of the determination, a user interface (UI) suited to operation by the user's operating hand.

16. A non-transitory computer-readable storage medium storing executable instructions that, when executed, implement the method according to claim 15.
PCT/CN2021/128556 2020-11-04 2021-11-04 Electronic device, method and storage medium WO2022095915A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202180073970.2A CN116391163A (en) 2020-11-04 2021-11-04 Electronic device, method, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011216016.8 2020-11-04
CN202011216016.8A CN114449069A (en) 2020-11-04 2020-11-04 Electronic device, method, and storage medium

Publications (1)

Publication Number Publication Date
WO2022095915A1 (en) 2022-05-12

Family

ID=81362133

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/128556 WO2022095915A1 (en) 2020-11-04 2021-11-04 Electronic device, method and storage medium

Country Status (2)

Country Link
CN (2) CN114449069A (en)
WO (1) WO2022095915A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102262438A (en) * 2010-05-18 2011-11-30 微软公司 Gestures and gesture recognition for manipulating a user-interface
CN105302448A (en) * 2014-06-18 2016-02-03 中兴通讯股份有限公司 Method and apparatus for adjusting interface of mobile terminal and terminal
US20160224123A1 (en) * 2015-02-02 2016-08-04 Augumenta Ltd Method and system to control electronic devices through gestures
CN108572730A (en) * 2017-03-10 2018-09-25 埃尔特菲斯项目公司 System and method for using depth perception camera and computer implemented interactive application to interact
CN109656359A (en) * 2018-11-26 2019-04-19 深圳奥比中光科技有限公司 3D body feeling interaction adaptation method, system, terminal device and readable storage medium storing program for executing
WO2019244645A1 (en) * 2018-06-20 2019-12-26 ソニー株式会社 Program, recognition device, and recognition method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111125659A (en) * 2018-10-31 2020-05-08 北京小米移动软件有限公司 Input component, unlocking method, electronic device and machine-readable storage medium

Also Published As

Publication number Publication date
CN116391163A (en) 2023-07-04
CN114449069A (en) 2022-05-06


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 21888610; country of ref document: EP; kind code of ref document: A1)
NENP: non-entry into the national phase (ref country code: DE)
122 EP: PCT application non-entry in European phase (ref document number: 21888610; country of ref document: EP; kind code of ref document: A1)