WO2020237658A1 - Control method for electronic device, electronic device, and computer-readable storage medium - Google Patents

Control method for electronic device, electronic device, and computer-readable storage medium

Info

Publication number
WO2020237658A1
Authority
WO
WIPO (PCT)
Prior art keywords
human eye
area
visible light
pixel position
zero
Prior art date
Application number
PCT/CN2019/089615
Other languages
English (en)
French (fr)
Inventor
王路
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to CN201980095503.2A (CN113711229B)
Priority to PCT/CN2019/089615 (WO2020237658A1)
Priority to EP19931080.6A (EP3975034A4)
Publication of WO2020237658A1
Priority to US17/532,448 (US11836956B2)

Classifications

    • G06V10/143 Sensing or illuminating at different wavelengths
    • G06T7/13 Edge detection
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06V10/145 Illumination specially adapted for pattern recognition, e.g. using gratings
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification
    • H04N23/56 Cameras or camera modules comprising electronic image sensors provided with illuminating means
    • H04N23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • G06T2207/10048 Infrared image

Definitions

  • This application relates to the field of consumer electronic products, and in particular to a control method for an electronic device, an electronic device, and a computer-readable storage medium.
  • A structured light projector and a structured light camera can be installed on an electronic device: the structured light projector projects a laser pattern into the target space, and the structured light camera collects the laser pattern to obtain depth information.
  • The embodiments of the present application provide a method for controlling an electronic device, an electronic device, and a computer-readable storage medium.
  • The embodiment of the present application provides a control method for an electronic device. The electronic device includes an infrared transmitter, an infrared sensor, and a visible light sensor. The control method includes: obtaining the original pixel position of the zero-level area on the infrared sensor; obtaining the human eye pixel position of the human eye area on the visible light sensor; determining whether the human eye enters the zero-level area according to the original pixel position and the human eye pixel position; and triggering the protection mechanism of the infrared transmitter when the human eye enters the zero-level area.
  • The embodiment of the present application provides an electronic device. The electronic device includes an infrared transmitter, an infrared sensor, a visible light sensor, and a processor. The processor is configured to: obtain the original pixel position of the zero-level area on the infrared sensor; obtain the human eye pixel position of the human eye area on the visible light sensor; determine whether the human eye enters the zero-level area according to the original pixel position and the human eye pixel position; and trigger the protection mechanism of the infrared transmitter when the human eye enters the zero-level area.
  • The embodiments of the present application provide a non-volatile computer-readable storage medium containing computer-readable instructions which, when executed by a processor, cause the processor to execute the control method of the electronic device of the above embodiments.
  • FIG. 1 is a schematic flowchart of a control method of an electronic device according to some embodiments of the present application;
  • FIG. 2 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
  • FIG. 3 is a schematic diagram of an application scenario of an electronic device control method according to some embodiments of the present application;
  • FIG. 4 is a schematic diagram of an application scenario of an electronic device control method according to some embodiments of the present application;
  • FIG. 5 is a schematic diagram of an application scenario of an electronic device control method according to some embodiments of the present application;
  • FIG. 6 is a schematic flowchart of a control method of an electronic device according to some embodiments of the present application;
  • FIG. 7 is a schematic diagram of an application scenario of an electronic device control method according to some embodiments of the present application;
  • FIG. 8 is a schematic flowchart of a control method of an electronic device according to some embodiments of the present application;
  • FIG. 9 is a schematic flowchart of a control method of an electronic device according to some embodiments of the present application;
  • FIG. 10 is a schematic diagram of an application scenario of an electronic device control method according to some embodiments of the present application;
  • FIG. 11 is a schematic flowchart of a control method of an electronic device according to some embodiments of the present application;
  • FIG. 12 is a schematic flowchart of a control method of an electronic device according to some embodiments of the present application;
  • FIG. 13 is a schematic flowchart of a control method of an electronic device according to some embodiments of the present application;
  • FIG. 14 is a schematic flowchart of a control method of an electronic device according to some embodiments of the present application;
  • FIG. 15 is a schematic flowchart of a control method of an electronic device according to some embodiments of the present application;
  • FIG. 16 is a schematic flowchart of a control method of an electronic device according to some embodiments of the present application;
  • FIG. 17 is a schematic diagram of the infrared emitter of the electronic device of the embodiment of the present application reducing the current to project laser light;
  • FIG. 18 is a schematic diagram of the infrared emitter of the electronic device of the embodiment of the present application reducing the pulse width to project laser light;
  • FIG. 19 is a schematic diagram of the infrared emitter of the electronic device of the embodiment of the present application reducing the frame rate to project laser light;
  • FIG. 20 is a schematic flowchart of a control method of an electronic device according to some embodiments of the present application;
  • FIG. 21 is a schematic diagram of the interaction between a computer-readable storage medium and an electronic device according to some embodiments of the present application.
  • Referring to FIG. 1 to FIG. 3, an embodiment of the present application provides a method for controlling an electronic device 10. The electronic device 10 includes an infrared transmitter 11, an infrared sensor 12, and a visible light sensor 13. The control method includes:
  • 011: Obtain the original pixel position 21 of the zero-level area 20 on the infrared sensor 12;
  • 012: Obtain the human eye pixel position 31 of the human eye area 30 on the visible light sensor 13;
  • 013: Determine whether the human eye enters the zero-level area 20 according to the original pixel position 21 and the human eye pixel position 31;
  • 014: Trigger the protection mechanism of the infrared transmitter 11 when the human eye enters the zero-level area 20.
  • An embodiment of the present application also provides an electronic device 10.
  • The electronic device 10 includes an infrared transmitter 11, an infrared sensor 12, a visible light sensor 13, and a processor 14.
  • The control method of the electronic device 10 of the embodiments of the present application can be implemented by the electronic device 10 of the embodiments of the present application. For example, the processor 14 may be used to execute the methods in 011, 012, 013, and 014.
  • That is, the processor 14 can be used to: obtain the original pixel position 21 of the zero-level area 20 on the infrared sensor 12; obtain the human eye pixel position 31 of the human eye area 30 on the visible light sensor 13; determine whether the human eye enters the zero-level area 20 according to the original pixel position 21 and the human eye pixel position 31; and trigger the protection mechanism of the infrared transmitter 11 when the human eye enters the zero-level area 20.
  • With the rapid development of electronic technology, more and more mobile phones carry 3D structured light, which is mainly used for front-facing face unlocking and background blurring.
  • 3D structured light uses a laser source to project a fixed speckle pattern onto objects in space, captures the speckle with an infrared camera having a specific transmission band, and then determines the depth and distance of the object from the position information of each speckle point.
  • The laser source is composed of a Vertical Cavity Surface Emitting Laser (VCSEL), a collimating lens, and Diffractive Optical Elements (DOE).
  • The laser emitted by the VCSEL has a wavelength of 940 nm; it is first collimated by the collimating lens and then shaped by DOE diffraction and projected uniformly into space to form a speckle field.
  • The DOE diffraction pattern is realized by multi-zone replication. During use, damage to the DOE by an external force, or water mist entering the laser source and adhering to the DOE surface in a humid environment, can significantly increase the energy in the DOE zero-level region and thereby harm human eyes.
  • With reference to FIG. 4 and FIG. 5, FIG. 4 is the speckle pattern projected by the laser source without zero-level enhancement; in normal use, the speckle will not damage the human eye.
  • FIG. 5 is the speckle pattern projected by the laser source with zero-level enhancement; if the speckle in the zero-level area is projected onto the human eye, it will cause damage, whereas the speckle in the non-zero-level area will not.
  • The control method of the electronic device 10 and the electronic device 10 of the embodiments of the present application determine whether the human eye enters the zero-level area 20 based on the original pixel position 21 of the zero-level area 20 on the infrared sensor 12 and the human eye pixel position 31 of the human eye area 30 on the visible light sensor 13, so that the protection mechanism of the infrared transmitter 11 is triggered when the human eye enters the zero-level area 20; this can prevent an excessively strong zero-level beam from damaging the human eye.
  • The electronic device 10 may be a mobile phone, a tablet computer, a notebook computer, a smart bracelet, a smart watch, or the like.
  • The embodiments of the present application are described by taking the electronic device 10 as a mobile phone as an example; it can be understood that the specific form of the electronic device 10 may be otherwise, which is not limited here.
  • The electronic device 10 may further include a display screen 15 and a housing 16.
  • The infrared transmitter 11, the infrared sensor 12, the visible light sensor 13, and the display screen 15 can all be installed on the housing 16.
  • The housing 16 includes a front surface 161 and a back surface 162, and the front surface 161 is opposite to the back surface 162.
  • The front surface 161 is used for installing the display screen 15, and the display screen 15 can be used to display information such as images and text, or to receive the user's touch operations.
  • The infrared transmitter 11, infrared sensor 12, and visible light sensor 13 can be installed on the front surface 161 so that users can take selfies or make video calls; they can also be installed on the back surface 162 to facilitate photographing scenery and other people.
  • In one embodiment, the infrared transmitter 11 and the infrared sensor 12 may constitute a structured light module: the infrared transmitter 11 projects a laser pattern into the target space, and the infrared sensor 12 collects the laser pattern to obtain depth information.
  • In another embodiment, the infrared transmitter 11 and the infrared sensor 12 may constitute a Time of Flight (TOF) module: the infrared transmitter 11 emits laser light into the target space, and the infrared sensor 12 receives the reflected laser light and obtains depth information from the time difference between emission and reception.
  • The visible light sensor 13 can constitute a visible light module, which may be a telephoto camera, a wide-angle camera, or a periscope camera.
  • The visible light sensor 13 can be located between the infrared emitter 11 and the infrared sensor 12, so that there is a longer distance between the infrared emitter 11 and the infrared sensor 12; this increases the baseline length of the structured light module or TOF module and thereby improves the accuracy of the obtained depth information.
  • Current structured light modules are all used as front-facing modules and additionally need a distance sensor to judge distance in order to protect eye safety. If a structured light module is used as a rear-facing module, it has the following shortcomings: 1. a distance sensor needs to be added to judge distance for eye safety (projecting the laser at low power at short distances and at normal power at long distances), which increases cost; 2. it occupies space on the back of the phone, which is unfavorable for product design; 3. close-range modeling of small objects is limited by the distance sensor; 4. non-eye, skin-like objects coming close also trigger eye protection, which makes for a poor interactive experience.
  • The embodiment of the present application uses the human eye pixel position 31 of the human eye area 30 on the visible light sensor 13 and the original pixel position 21 of the zero-level area 20 on the infrared sensor 12 to determine whether the human eye enters the zero-level area 20, as sketched below. If the human eye does not enter the zero-level area 20, the infrared transmitter 11 works normally; if the human eye enters the zero-level area 20, the protection mechanism of the infrared transmitter 11 is triggered and the working mode of the infrared transmitter 11 is adjusted, to prevent an excessively strong zero-level beam from damaging the human eye.
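  • The decision flow of steps 011-014 can be summarized in code. The following Python sketch is illustrative only, under the assumption that both areas are represented as axis-aligned pixel rectangles and that the `IRTransmitter` object is a hypothetical stand-in for the real laser driver; it is not the patented implementation.

```python
from dataclasses import dataclass

@dataclass
class IRTransmitter:
    # Hypothetical stand-in for the real laser-driver interface.
    current_ma: float = 100.0
    protected: bool = False

def rects_overlap(a, b):
    """True if (x, y, w, h) rectangles a and b overlap at all."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def control_step(zero_rect, eye_rect, tx: IRTransmitter):
    """One pass of steps 011-014. `zero_rect` is the zero-level area in
    visible-sensor pixels (converted from the IR sensor, step 0631);
    `eye_rect` is the detected human eye area, or None if no eye was found."""
    if eye_rect is not None and rects_overlap(zero_rect, eye_rect):
        tx.protected = True      # 014: trigger the protection mechanism
        tx.current_ma *= 0.5     # e.g. halve the drive current
    return tx
```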
  • The structured light module or TOF module of the embodiments of the present application can be used either front-facing or rear-facing. When used rear-facing, no distance sensor needs to be added, which saves cost; it also saves space on the back of the electronic device 10; and it is also possible to perform close-range modeling of non-eye and skin-like objects.
  • Referring to FIG. 2 and FIG. 3, the human eye pixel position 31 of the human eye area 30 on the visible light sensor 13 refers to the coordinate position of the human eye area 30 on the visible light sensor 13.
  • Similarly, the original pixel position 21 of the zero-level area 20 on the infrared sensor 12 refers to the coordinate position of the zero-level area 20 on the infrared sensor 12.
  • This coordinate position is generally fixed. The processor 14 can acquire and store it in advance, before the user uses the electronic device 10, and update it at a predetermined period, to prevent the position of the zero-level area 20 from shifting during long-term use (which may be caused by a positional shift of the DOE or by damage to the DOE).
  • Obtaining the original pixel position 21 of the zero-level area 20 on the infrared sensor 12 in advance can save the time the electronic device 10 needs to determine whether the human eye enters the zero-level area 20.
  • Of course, the processor 14 can also obtain the original pixel position 21 of the zero-level area 20 on the infrared sensor 12 and the human eye pixel position 31 of the human eye area 30 on the visible light sensor 13 simultaneously; or the processor 14 may first obtain the human eye pixel position 31 of the human eye area 30 on the visible light sensor 13 and then obtain the original pixel position 21 of the zero-level area 20 on the infrared sensor 12.
  • The process by which the processor 14 obtains the original pixel position 21 of the zero-level area 20 on the infrared sensor 12 may be as follows: first determine the position of the zero-level area 20 of the infrared transmitter 11, for example by evenly arranging multiple light detection elements, such as photodiodes (PD), at the light exit of the infrared transmitter 11 to detect the intensity of the laser light projected at each position of the light exit; the position with the strongest laser intensity is the position of the zero-level region 20. Since the relative positions of the infrared sensor 12 and the visible light sensor 13 are fixed, the original pixel position 21 of the zero-level area 20 on the infrared sensor 12 can be converted from the position of the zero-level area 20 of the infrared transmitter 11.
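  • As a rough illustration of this calibration step, the sketch below selects the photodiode with the strongest reading and maps its known exit-port location into IR-sensor pixel coordinates through a fixed, pre-calibrated transform. The PD layout and the homography are assumptions introduced for illustration.

```python
import numpy as np

def locate_zero_level(pd_readings, pd_positions, exit_to_ir):
    """pd_readings: laser intensity measured by each photodiode at the exit.
    pd_positions: (x, y) location of each PD at the exit port (same order).
    exit_to_ir: 3x3 homography from exit-port coordinates to IR-sensor
    pixels, fixed at calibration time because the sensors cannot move
    relative to the transmitter (an assumed calibration artifact)."""
    strongest = int(np.argmax(pd_readings))      # strongest laser intensity
    x, y = pd_positions[strongest]               # zero-level spot at the exit
    px = exit_to_ir @ np.array([x, y, 1.0])      # convert to IR-sensor pixels
    return px[:2] / px[2]
```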
  • Referring to FIG. 6, in some embodiments, the control method of the electronic device 10 further includes:
  • 0631: Convert the original pixel position 21 into the target pixel position 22 of the zero-level area 20 on the visible light sensor 13;
  • 0632: Compare the coordinates of the target pixel position 22 and the human eye pixel position 31 to determine whether the human eye enters the zero-level area 20.
  • Referring to FIG. 2, in some embodiments, the processor 14 may be used to execute the methods in 0631 and 0632.
  • That is, the processor 14 can be used to: convert the original pixel position 21 into the target pixel position 22 of the zero-level area 20 on the visible light sensor 13; and compare the coordinates of the target pixel position 22 and the human eye pixel position 31 to determine whether the human eye enters the zero-level area 20.
  • 061, 062, and 064 in FIG. 6 can refer to the description of 011, 012, and 014 in the specification of this application, which will not be repeated here.
  • Referring to FIG. 2 and FIG. 3, since the distance between the infrared sensor 12 and the visible light sensor 13 is fixed once the electronic device 10 is assembled, this distance can be used to convert the original pixel position 21 on the infrared sensor 12 into the target pixel position 22 on the visible light sensor 13.
  • After obtaining the original pixel position 21 of the zero-level area 20 on the infrared sensor 12, the processor 14 converts the original pixel position 21 into the target pixel position 22 of the zero-level area 20 on the visible light sensor 13, so that, in the coordinate system of the visible light sensor 13, the human eye pixel position 31 of the human eye area 30 can be compared with the target pixel position 22 of the zero-level area 20 to determine whether the human eye enters the zero-level area 20. Similarly to pre-acquiring the original pixel position 21 of the zero-level area 20 on the infrared sensor 12, the processor 14 may also pre-acquire and store the target pixel position 22 of the zero-level area 20 on the visible light sensor 13 and update this coordinate position at a predetermined period, to save the time the electronic device 10 needs to determine whether the human eye enters the zero-level area 20 while ensuring the accuracy of the obtained target pixel position 22.
  • Of course, in other embodiments, the processor 14 may instead convert the human eye pixel position 31 of the human eye area 30 on the visible light sensor 13 into the target pixel position 31 of the human eye area 30 on the infrared sensor 12, so that, in the coordinate system of the infrared sensor 12, the target pixel position 31 of the human eye area 30 can be compared with the original pixel position 21 of the zero-level area 20 to determine whether the human eye enters the zero-level area 20.
  • Referring to FIG. 3 and FIG. 7, determining that the human eye enters the zero-level area 20 covers both the case where the coordinate range of the human eye pixel position 31 falls entirely within the coordinate range of the target pixel position 22 (as shown in FIG. 3) and the case where it falls only partially within it (as shown in FIG. 7). In one example, the processor 14 may determine whether the human eye enters the zero-level area 20 according to the proportion of the human eye pixel position 31 that overlaps the target pixel position 22: if the proportion is greater than a predetermined proportion, for example 60%, the human eye is judged to have entered the zero-level area 20. This judgment method ensures eye safety to a greater extent.
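  • A proportion test of this kind could be implemented as in the sketch below; the rectangle representation is an assumption, while the 60% threshold is the example value named above.

```python
def eye_in_zero_area(zero_rect, eye_rect, ratio=0.60):
    """Each rect is (x, y, w, h) in visible-sensor pixel coordinates.
    Returns True when the overlap with the zero-level area covers more
    than `ratio` of the human eye pixel position."""
    zx, zy, zw, zh = zero_rect
    ex, ey, ew, eh = eye_rect
    ix = max(0, min(zx + zw, ex + ew) - max(zx, ex))   # overlap width
    iy = max(0, min(zy + zh, ey + eh) - max(zy, ey))   # overlap height
    return (ix * iy) / float(ew * eh) > ratio
```

A fully enclosed eye (FIG. 3) gives a proportion of 1 and a partial overlap (FIG. 7) gives a fraction, so both cases are handled by the single threshold.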
  • Referring to FIG. 8, in some embodiments, the control method of the electronic device 10 further includes:
  • 0821: Obtain a visible light image through the visible light sensor 13;
  • 0824: When the human eye area 30 is included in the visible light image, obtain the human eye pixel position 31 of the human eye area 30 on the visible light sensor 13.
  • Referring to FIG. 2, in some embodiments, the visible light sensor 13 can be used to execute the method in 0821, and the processor 14 can be used to execute the method in 0824.
  • That is, the visible light sensor 13 can be used to obtain a visible light image, and the processor 14 can be used to obtain the human eye pixel position 31 of the human eye area 30 on the visible light sensor 13 when the human eye area 30 is included in the visible light image.
  • For the content and implementation details of 081, 0831, 0832, and 084 in FIG. 8, refer to the description of 011, 0631, 0632, and 014 in this specification; they are not repeated here.
  • Specifically, in one example, obtaining a visible light image through the visible light sensor 13 may be performed when the infrared transmitter 11 is turned on. That is, when the electronic device 10 turns on the infrared transmitter 11 under the user's control or as needed, the visible light sensor 13 is turned on synchronously to collect visible light images. This helps the human eye pixel position 31 of the human eye area 30 on the visible light sensor 13 to be obtained more quickly, which saves the time the electronic device 10 needs to determine whether the human eye enters the zero-level area 20. If the human eye does not enter the zero-level area 20, the time for the electronic device 10 to obtain depth information is also saved.
  • In another example, the infrared transmitter 11 is turned on only when the human eye has not entered the zero-level area 20. In other words, the visible light sensor 13 is turned on first to collect visible light images and determine whether the human eye enters the zero-level area 20, and the infrared transmitter 11 is turned on only when the human eye has not entered the zero-level area 20, to fully protect the human eye.
  • Referring to FIG. 9, in some embodiments, the electronic device 10 includes a human eye model library, and the control method of the electronic device 10 further includes:
  • 0922: Read the visible light image and a human eye template image;
  • 0923: Calculate the correlation between the visible light image and the human eye template image to determine whether the human eye area 30 is included in the visible light image.
  • Referring to FIG. 2, in some embodiments, the electronic device 10 includes a human eye model library in which human eye template images are stored. The processor 14 can be used to execute the methods in 0922 and 0923.
  • That is, the processor 14 may be used to: read the visible light image and a human eye template image; and calculate the correlation between the visible light image and the human eye template image to determine whether the human eye area 30 is included in the visible light image.
  • 091, 0931, 0921, 0924, 0932, and 094 in FIG. 9 can refer to the description of 011, 0631, 0821, 0824, 0632 and 014 in the specification of this application, which will not be repeated here. .
  • Specifically, taking FIG. 10 as an example, the visible light image acquired by the visible light sensor 13 is P0.
  • The human eye model library may store multiple human eye template images, denoted P1, P2, P3, and so on; the multiple human eye template images may include left-eye images and right-eye images.
  • First, the processor 14 reads the visible light image P0 and the human eye template image P1, and then calculates the correlation between P0 and P1 to determine whether the human eye area 30 is included in the visible light image.
  • If so, it is determined that the human eye area 30 is included in the visible light image; if not, the processor 14 reads the human eye template image P2 and calculates the correlation between P0 and P2. If that also fails, the processor 14 reads the human eye template image P3 and calculates the correlation between P0 and P3, and so on.
  • Only when the correlation between P0 and every human eye template image indicates that no human eye is present does the processor 14 finally determine that the human eye area 30 is not included in the visible light image. Conversely, if the processor 14 determines from the correlation between P0 and any one human eye template image that the human eye area 30 is included, it finally determines that the visible light image includes the human eye area 30 and stops reading the remaining human eye template images and calculating their correlations with P0.
  • After calculating the correlation between the visible light image and a human eye template image, the processor 14 may determine whether the human eye area 30 is included in the visible light image according to the correlation value: the more similar the visible light image is to the human eye template image, the higher the correlation value. Therefore, the processor 14 may determine that the human eye area 30 is included in the visible light image when the correlation value is greater than (or equal to) a predetermined value, and that it is not included when the correlation value is less than (or equal to) the predetermined value.
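  • Such a correlation check is commonly implemented with normalized cross-correlation; the following sketch uses OpenCV's matchTemplate as one possible realization, with an illustrative threshold of 0.8 standing in for the predetermined value.

```python
import cv2

def contains_eye(visible_img, templates, threshold=0.8):
    """Try each grayscale eye template (e.g. P1, P2, P3) in turn; stop at
    the first one whose best normalized correlation with the visible
    light image exceeds the threshold."""
    gray = cv2.cvtColor(visible_img, cv2.COLOR_BGR2GRAY)
    for tpl in templates:
        result = cv2.matchTemplate(gray, tpl, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > threshold:          # correlation value high enough
            return True, max_loc         # top-left corner of the match
    return False, None
```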
  • Referring to FIG. 11, in some embodiments, the control method of the electronic device 10 further includes:
  • 011232: Obtain the grayscale parameters of the visible light image and the grayscale parameters of the human eye template image respectively;
  • 011234: Traverse the visible light image according to the grayscale parameters of the human eye template image to obtain a matching area in the visible light image, where the grayscale parameters of the matching area match the grayscale parameters of the human eye template image;
  • 011236: Determine whether the human eye area 30 is included in the visible light image according to the matching area.
  • Referring to FIG. 2, in some embodiments, the processor 14 may be used to execute the methods in 011232, 011234, and 011236.
  • That is, the processor 14 can be used to: obtain the grayscale parameters of the visible light image and of the human eye template image respectively; traverse the visible light image according to the grayscale parameters of the human eye template image to obtain a matching area in the visible light image, where the grayscale parameters of the matching area match those of the human eye template image; and determine whether the human eye area 30 is included in the visible light image according to the matching area.
  • 0111, 01131, 01121, 01122, 01132, 01124, and 0114 in Figure 11 can refer to the description of 011, 0631, 0821, 0922, 0632, 0824 and 014 in the specification of this application. This will not be repeated here.
  • Specifically, still taking FIG. 10 as an example, the processor 14 may divide the visible light image P0 into multiple sub-images and then obtain the grayscale parameters of each sub-image; similarly, the processor 14 may obtain the grayscale parameters of the human eye template images P1, P2, P3, and so on.
  • The grayscale parameters may include the gray value and the image mean, where the image mean is the gray value of the image after normalization.
  • First, the processor 14 obtains the grayscale parameters of the multiple sub-images of the visible light image P0 and the grayscale parameters of the human eye template image P1, and then searches the sub-images of P0 according to the grayscale parameters of P1 for a matching area whose grayscale parameters match those of P1. The matching area may include one or more sub-images: when multiple matching sub-images are adjacent, they constitute one matching area; when they are separated, there are multiple matching areas and each sub-image serves as one matching area.
  • When the processor 14 cannot obtain a matching area in the visible light image P0 according to the grayscale parameters of the human eye template image P1, it continues with the grayscale parameters of the human eye template image P2 and searches the sub-images of P0 for a matching area that matches P2 on the grayscale parameters.
  • When the processor 14 cannot obtain a matching area according to the grayscale parameters of P2 either, it continues with the grayscale parameters of the human eye template image P3, and so on.
  • Only when the processor 14 cannot obtain a matching area in the visible light image P0 according to the grayscale parameters of any of the human eye template images does it determine that there is no matching area in P0; in this case, the processor 14 determines that the human eye area 30 is not included in the visible light image P0.
  • If the processor 14 can obtain a matching area in the visible light image P0 according to the grayscale parameters of any human eye template image, it determines from the matching area that P0 includes the human eye area 30; in this case, the matching area can be regarded as the human eye area 30.
  • It should be pointed out that the processor 14 determining whether the visible light image P0 includes the human eye area 30 according to the matching area includes: when there is no matching area in P0, determining that P0 does not include the human eye area 30; and when there is a matching area in P0, determining that P0 includes the human eye area 30.
  • Alternatively, when there is a matching area in P0, the processor 14 may add other restrictive conditions to further determine from the matching area whether P0 includes the human eye area 30, such as the number of matching areas: when photographing a single person, if the number of matching areas is greater than 2, the matching-area judgment may be problematic, and in that case P0 may not include the human eye area 30.
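  • A grayscale-parameter traversal in the spirit of this description might compare the normalized mean gray value (the image mean) of each sub-image against that of the template, as in the sketch below; the tile size and tolerance are assumptions chosen for illustration.

```python
import numpy as np

def find_matching_regions(visible_gray, template_gray, tile=32, tol=0.05):
    """Split the visible light image into tile x tile sub-images and keep
    those whose image mean (mean gray value normalized to [0, 1]) matches
    the template's to within `tol`."""
    target = template_gray.mean() / 255.0
    h, w = visible_gray.shape
    matches = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            sub = visible_gray[y:y + tile, x:x + tile]
            if abs(sub.mean() / 255.0 - target) < tol:
                matches.append((x, y, tile, tile))
    return matches   # adjacent tiles can then be merged into one area
```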
  • Referring to FIG. 12, in some embodiments, the control method of the electronic device 10 further includes:
  • 012242: Mark the matching area in the visible light image;
  • 012244: Obtain the edge pixel positions of the matching area on the visible light sensor 13;
  • 012322: Compare the coordinates of the target pixel position 22 and the edge pixel positions to determine whether the human eye enters the zero-level area 20.
  • Referring to FIG. 2, in some embodiments, the processor 14 may be used to execute the methods in 012242, 012244, and 012322.
  • That is, the processor 14 can be used to: mark the matching area in the visible light image; obtain the edge pixel positions of the matching area on the visible light sensor 13; and compare the coordinates of the target pixel position 22 and the edge pixel positions to determine whether the human eye enters the zero-level area 20.
  • Specifically, the matching area in the visible light image may be marked with a rectangular frame, a ring frame, or a frame of any irregular shape; marking the matching area makes its edges easier to identify.
  • In the example of FIG. 10, the matching area is the human eye area 30, and the human eye (pixels) includes its edge (pixels), so obtaining the edge pixel positions of the matching area on the visible light sensor 13 is obtaining the edge pixel positions of the human eye area 30 on the visible light sensor 13. In other examples, the area of the matching area may be larger than that of the human eye area 30, that is, the matching area covers the human eye area 30, which is not limited here.
  • Comparing the coordinates of the target pixel position 22 with the edge pixel positions to determine whether the human eye enters the zero-level area 20 reduces the computation load of the processor 14 compared with comparing the target pixel position 22 with the full human eye pixel position 31, and judging from the edge pixel positions still ensures eye safety.
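  • Comparing only the edge pixels of the marked matching area, rather than every pixel it contains, could look like the following sketch (a rectangular marking frame is assumed):

```python
def edge_pixels(rect):
    """Yield the pixel coordinates on the border of an (x, y, w, h) rect."""
    x, y, w, h = rect
    for i in range(w):
        yield (x + i, y)            # top edge
        yield (x + i, y + h - 1)    # bottom edge
    for j in range(h):
        yield (x, y + j)            # left edge
        yield (x + w - 1, y + j)    # right edge

def eye_edge_in_zero_area(match_rect, zero_rect):
    """True if any edge pixel of the matching area falls inside the
    zero-level area (both in visible-sensor pixel coordinates)."""
    zx, zy, zw, zh = zero_rect
    return any(zx <= px < zx + zw and zy <= py < zy + zh
               for px, py in edge_pixels(match_rect))
```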
  • Referring to FIG. 3 and FIG. 13, in some embodiments, the human eye includes a pupil, and the control method of the electronic device 10 further includes:
  • 01321: Obtain a visible light image through the visible light sensor 13;
  • 01325: When the visible light image includes a human face area, determine the preliminary pixel position of the human eye area 30 on the visible light sensor 13 according to the human face area;
  • 01326: Determine the human eye contour according to the preliminary pixel position;
  • 01327: Determine the pupil pixel position 32 according to the human eye contour;
  • 013324: Compare the coordinates of the target pixel position 22 and the pupil pixel position 32 to determine whether the pupil enters the zero-level area 20;
  • 01341: Trigger the protection mechanism of the infrared transmitter 11 when the pupil enters the zero-level area 20.
  • Referring to FIG. 2, in some embodiments, the human eye includes a pupil. The visible light sensor 13 can be used to execute the method in 01321, and the processor 14 can be used to execute the methods in 01325, 01326, 01327, 013324, and 01341.
  • That is, the visible light sensor 13 can be used to obtain a visible light image, and the processor 14 can be used to: when the visible light image includes a human face area, determine the preliminary pixel position of the human eye area 30 on the visible light sensor 13 according to the human face area; determine the human eye contour according to the preliminary pixel position; determine the pupil pixel position 32 according to the human eye contour; compare the coordinates of the target pixel position 22 and the pupil pixel position 32 to determine whether the pupil enters the zero-level area 20; and trigger the protection mechanism of the infrared emitter 11 when the pupil enters the zero-level area 20.
  • For the content and implementation details of 0131 and 01331 in FIG. 13, refer to the description of 011 and 0631 in this specification; they are not repeated here.
  • Specifically, referring to FIG. 3, the processor 14 can recognize whether the visible light image includes a human face area through face detection technology.
  • When the visible light image includes a human face area, the preliminary pixel position of the human eye area 30 on the visible light sensor 13 can be determined from the distribution pattern of the human eye area 30 within the face area. This distribution pattern can be obtained from a large number of experimental statistics; for example, the human eye area 30 is generally located in the upper-left and upper-right parts of the face area, with a generally fixed aspect ratio. The human eye contour is then determined more precisely, the pupil pixel position 32 is determined from the human eye contour, and the pupil pixel position 32 is compared in coordinates with the target pixel position 22 to determine whether the pupil enters the zero-level area 20.
  • Since the pupil is the channel through which light enters the eye, using this method to determine whether the human eye enters the zero-level area 20 is more accurate and rigorous.
  • Referring to FIG. 3 and FIG. 14, in some embodiments, the control method of the electronic device 10 further includes:
  • 014261: Use a classifier to locate the left eye area and/or the right eye area within the preliminary pixel position;
  • 014262: Binarize the face area;
  • 014263: Perform edge detection on the left eye area and/or right eye area to determine the human eye contour;
  • 014271: Determine whether the human eye contour is a predetermined shape;
  • 014272: When the human eye contour is a predetermined shape, convert the human eye contour into a point set;
  • 014273: Fit the point set using the least squares method to determine the pupil pixel position 32.
  • Referring to FIG. 2, in some embodiments, the processor 14 may be used to execute the methods in 014261, 014262, 014263, 014271, 014272, and 014273.
  • That is, the processor 14 can be used to: use a classifier to locate the left eye area and/or the right eye area within the preliminary pixel position; binarize the face area; perform edge detection on the left eye area and/or right eye area to determine the human eye contour; determine whether the human eye contour is a predetermined shape; when the human eye contour is a predetermined shape, convert the human eye contour into a point set; and fit the point set using the least squares method to determine the pupil pixel position 32.
  • For the content and implementation details of 0141, 01431, 01421, 01425, 014324, and 01441 in FIG. 14, refer to the description of 011, 0631, 01321, 01325, 013324, and 01341 in this specification; they are not repeated here.
  • Specifically, the processor 14 may use an eye classifier or eye training model to locate the left eye area and/or the right eye area. It can be understood that when the visible light image captures only the left eye, the left eye area is located; when it captures only the right eye, the right eye area is located; and when it captures both eyes, both the left eye area and the right eye area are located. Binarizing the face area makes the human eye contour more prominent, which benefits the subsequent edge detection used to determine the contour. After performing edge detection on the left eye area and/or right eye area to determine the human eye contour, the processor 14 further determines whether the contour is a predetermined shape, such as an arc. It can be understood that when the contour is not the predetermined shape, for example square or triangular, the contour is actually not a human eye, and the contour detection needs to be performed again.
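  • As one concrete, hypothetical realization of this pipeline, OpenCV's stock Haar eye cascade can play the role of the eye classifier, followed by Otsu binarization and Canny edge detection to expose the eye contour; all thresholds below are illustrative, not prescribed by this specification.

```python
import cv2

# Stock eye classifier shipped with OpenCV, standing in for the
# eye classifier / eye training model described above.
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def eye_contours(face_gray):
    """Locate left/right eye regions in a grayscale face area, binarize
    each region, then edge-detect to expose the eye contour."""
    contours = []
    for (x, y, w, h) in eye_cascade.detectMultiScale(face_gray, 1.1, 5):
        roi = face_gray[y:y + h, x:x + w]
        # Binarization makes the eye contour stand out before edge detection.
        _, binary = cv2.threshold(roi, 0, 255,
                                  cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        edges = cv2.Canny(binary, 50, 150)
        contours.append((x, y, edges))
    return contours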
  • When the human eye contour is the predetermined shape, the processor 14 converts the contour into a point set and curve-fits the point set using the least squares method to obtain the arc center, which is the pupil pixel position 32.
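  • The least-squares fit can be carried out algebraically (the Kasa method): writing the circle as x^2 + y^2 + Dx + Ey + F = 0 turns the fit into a linear system whose solution gives the arc center, taken here as the pupil pixel position. A minimal sketch:

```python
import numpy as np

def fit_circle_center(points):
    """Least-squares circle fit (Kasa method) to the contour point set.
    points: iterable of (x, y) edge points; returns the arc center."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # x^2 + y^2 + D*x + E*y + F = 0  =>  linear in (D, E, F)
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    return (-D / 2.0, -E / 2.0)   # circle center = pupil pixel position
```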
  • Referring to FIG. 15 and FIG. 16, in some embodiments, the control method of the electronic device 10 further includes:
  • 01542: When the human eye enters the zero-level area 20, reduce at least one of the current, pulse width, or frame rate of the infrared transmitter 11; or
  • 01643: When the human eye enters the zero-level area 20, turn off the infrared transmitter 11.
  • Referring to FIG. 2, in some embodiments, the processor 14 may be used to execute the method in 01542 or 01643.
  • That is, the processor 14 can be used to: reduce at least one of the current, pulse width, or frame rate of the infrared transmitter 11 when the human eye enters the zero-level area 20; or turn off the infrared transmitter 11 when the human eye enters the zero-level area 20.
  • For the content and implementation details of 0151, 0152, and 0153 in FIG. 15, and of 0161, 0162, and 0163 in FIG. 16, refer to the description of 011, 012, and 013 in this specification; they are not repeated here.
  • Specifically, when the human eye enters the zero-level area 20, the processor 14 may reduce at least one of the current, pulse width, or frame rate of the infrared transmitter 11 to protect eye safety. For example, the processor 14 can reduce the current with which the infrared emitter 11 emits laser light (as shown in FIG. 17), so that the current drops to 1/2, 1/3, 1/4, etc. of the current used when the human eye has not entered the zero-level area 20; or the processor 14 can reduce the pulse width of the laser emitted by the infrared transmitter 11 (as shown in FIG. 18) to 1/2, 1/3, 1/4, etc. of the pulse width used when the human eye has not entered the zero-level area 20; or the processor 14 can reduce the frame rate of the laser emitted by the infrared emitter 11 (as shown in FIG. 19) to 1/2, 1/3, 1/4, etc. of the frame rate used when the human eye has not entered the zero-level area 20; or the processor 14 can reduce the current, pulse width, and frame rate together, which is not enumerated exhaustively here.
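  • A protection handler along these lines might simply scale the drive parameters, as in the sketch below; the attribute names are hypothetical stand-ins for the real driver interface, and 1/2 is one of the fractions named above.

```python
def trigger_protection(tx, factor=0.5):
    """Scale the drive parameters down to `factor` (e.g. 1/2, 1/3, 1/4)
    of their normal values. The attributes on `tx` are hypothetical
    stand-ins for the real laser-driver API."""
    tx.current_ma *= factor       # FIG. 17: reduced drive current
    tx.pulse_width_us *= factor   # FIG. 18: reduced pulse width
    tx.frame_rate_hz *= factor    # FIG. 19: reduced frame rate
    # Alternatively, the transmitter can simply be turned off:
    # tx.power_off()
```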
  • Of course, when the human eye enters the zero-level area 20, the processor 14 can also directly turn off the infrared transmitter 11 to protect eye safety.
  • Referring to FIG. 20, in some embodiments, the control method of the electronic device 10 further includes:
  • 02044: Prompt the user when the human eye enters the zero-level area 20.
  • Referring to FIG. 2, in some embodiments, the processor 14 may be used to execute the method in 02044; that is, the processor 14 may be used to prompt the user when the human eye enters the zero-level area 20.
  • For the content and implementation details of 0201, 0202, and 0203 in FIG. 20, refer to the description of 011, 012, and 013 in this specification; they are not repeated here.
  • Specifically, when the human eye enters the zero-level area 20, the processor 14 may prompt the user to turn off the infrared transmitter 11, or prompt the user to move their head so that the human eye is outside the zero-level area 20. More specifically, the processor 14 may also prompt the direction of head movement, such as to the left or to the right, according to the position of the human eye within the zero-level area 20, so that the human eye moves out of the zero-level area 20.
  • In addition, when the human eye enters the zero-level area 20, the processor 14 may reduce at least one of the current, pulse width, or frame rate of the infrared transmitter 11 while prompting the user; or the processor 14 may turn off the infrared transmitter 11 while prompting the user.
  • Referring to FIG. 21, an embodiment of the present application also provides a non-volatile computer-readable storage medium 50 containing computer-readable instructions. When the computer-readable instructions are executed by the processor 14, the processor 14 executes the control method of the electronic device 10 of any one of the foregoing embodiments.
  • For example, when the computer-readable instructions are executed by the processor 14, the processor 14 executes the following control method of the electronic device 10:
  • 011: Obtain the original pixel position 21 of the zero-level area 20 on the infrared sensor 12;
  • 012: Obtain the human eye pixel position 31 of the human eye area 30 on the visible light sensor 13;
  • 013: Determine whether the human eye enters the zero-level area 20 according to the original pixel position 21 and the human eye pixel position 31;
  • 014: Trigger the protection mechanism of the infrared transmitter 11 when the human eye enters the zero-level area 20.
  • For another example, when the computer-readable instructions are executed by the processor 14, the processor 14 executes the following control method of the electronic device 10:
  • 0631: Convert the original pixel position 21 into the target pixel position 22 of the zero-level area 20 on the visible light sensor 13;
  • 0632: Compare the coordinates of the target pixel position 22 and the human eye pixel position 31 to determine whether the human eye enters the zero-level area 20.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Ophthalmology & Optometry (AREA)
  • Signal Processing (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Studio Devices (AREA)
  • User Interface Of Digital Computer (AREA)
  • Position Input By Displaying (AREA)
  • Image Analysis (AREA)

Abstract

A control method for an electronic device (10), an electronic device (10), and a computer-readable storage medium (50). The electronic device (10) includes an infrared transmitter (11), an infrared sensor (12), and a visible light sensor (13). The control method includes: (011) obtaining the original pixel position (21) of the zero-level area (20) on the infrared sensor (12); (012) obtaining the human eye pixel position (31) of the human eye area (30) on the visible light sensor (13); (013) determining, according to the original pixel position (21) and the human eye pixel position (31), whether the human eye enters the zero-level area (20); and (014) triggering the protection mechanism of the infrared transmitter (11) when the human eye enters the zero-level area (20).

Description

电子设备的控制方法、电子设备和计算机可读存储介质 技术领域
本申请涉及消费性电子产品领域,特别涉及一种电子设备的控制方法、电子设备和计算机可读存储介质。
背景技术
随着电子技术的快速发展,诸如智能手机、平板电脑等电子设备已经越来越普及。电子设备上可以安装结构光投射器和结构光摄像头,结构光投射器用于向目标空间中投射激光图案,结构光摄像头用于采集激光图案以获取深度信息。
发明内容
本申请实施方式提供一种电子设备的控制方法、电子设备和计算机可读存储介质。
本申请实施方式提供一种电子设备的控制方法,所述电子设备包括红外发射器、红外传感器和可见光传感器,所述控制方法包括:获取零级区域在所述红外传感器上的原始像素位置;获取人眼区域在所述可见光传感器上的人眼像素位置;根据所述原始像素位置和所述人眼像素位置判断人眼是否进入所述零级区域;在人眼进入所述零级区域时,触发所述红外发射器的保护机制。
本申请实施方式提供一种电子设备,所述电子设备包括红外发射器、红外传感器、可见光传感器和处理器,所述处理器用于:获取零级区域在所述红外传感器上的原始像素位置;获取人眼区域在所述可见光传感器上的人眼像素位置;根据所述原始像素位置和所述人眼像素位置判断人眼是否进入所述零级区域;在人眼进入所述零级区域时,触发所述红外发射器的保护机制。
本申请实施方式提供一种包含计算机可读指令的非易失性计算机可读存储介质,所述计算机可读指令被处理器执行时,使得所述处理器执行上述实施方式的电子设备的控制方法。
本申请实施方式的附加方面和优点将在下面的描述中部分给出,部分将从下面的描述中变得明显,或通过本申请的实践了解到。
附图说明
本申请的上述和/或附加的方面和优点可以从结合下面附图对实施方式的描述中将变得明显和容易理解,其中:
图1是本申请某些实施方式的电子设备的控制方法的流程示意图;
图2是本申请实施方式的电子设备的结构示意图;
图3是本申请某些实施方式的电子设备的控制方法的应用场景示意图;
图4是本申请某些实施方式的电子设备的控制方法的应用场景示意图;
图5是本申请某些实施方式的电子设备的控制方法的应用场景示意图;
图6是本申请某些实施方式的电子设备的控制方法的流程示意图;
图7是本申请某些实施方式的电子设备的控制方法的应用场景示意图;
图8是本申请某些实施方式的电子设备的控制方法的流程示意图;
图9是本申请某些实施方式的电子设备的控制方法的流程示意图;
图10是本申请某些实施方式的电子设备的控制方法的应用场景示意图;
图11是本申请某些实施方式的电子设备的控制方法的流程示意图;
图12是本申请某些实施方式的电子设备的控制方法的流程示意图;
图13是本申请某些实施方式的电子设备的控制方法的流程示意图;
图14是本申请某些实施方式的电子设备的控制方法的流程示意图;
图15是本申请某些实施方式的电子设备的控制方法的流程示意图;
图16是本申请某些实施方式的电子设备的控制方法的流程示意图;
图17是本申请实施方式的电子设备的红外发射器降低电流以投射激光的示意图;
图18是本申请实施方式的电子设备的红外发射器降低脉宽以投射激光的示意图;
图19是本申请实施方式的电子设备的红外发射器降低帧率以投射激光的示意图;
图20是本申请某些实施方式的电子设备的控制方法的流程示意图;
图21是本申请某些实施方式的计算机可读存储介质与电子设备的交互示意图。
具体实施方式
下面详细描述本申请的实施方式,所述实施方式的示例在附图中示出,其中,相同或类似的标号自始至终表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施方式是示例性的,仅用于解释本申请的实施方式,而不能理解为对本申请的实施方式的限制。
请一并参阅图1至图3,本申请实施方式提供一种电子设备10的控制方法。电子设备10包括红外发射器11、红外传感器12和可见光传感器13。控制方法包括:
011:获取零级区域20在红外传感器12上的原始像素位置21;
012:获取人眼区域30在可见光传感器13上的人眼像素位置31;
013:根据原始像素位置21和人眼像素位置31判断人眼是否进入零级区域20;
014:在人眼进入零级区域20时,触发红外发射器11的保护机制。
请参阅图2,本申请实施方式还提供一种电子设备10。电子设备10包括红外发射器11、红外传感器12、可见光传感器13和处理器14。本申请实施方式的电子设备10的控制方法可由本申请实施方式的电子设备10实现。例如,处理器14可用于执行011、012、013和014中的方法。
也即是说,处理器14可以用于:获取零级区域20在红外传感器12上的原始像素位置21;获取人眼区域30在可见光传感器13上的人眼像素位置31;根据原始像素位置21和人眼像素位置31判断人眼是否进入零级区域20;在人眼进入零级区域20时,触发红外发射器11的保护机制。
可以理解,随着电子技术的快速发展,搭载3D结构光的手机越来越多,3D结构光主要用于前置人脸解锁和背景虚化。3D结构光利用激光源向空间物体投射固定图案的散斑,同时利用带有特定透过波段的红外相机拍摄散斑,再通过各个散斑点的位置信息判断物体的深度和距离。
激光源由垂直腔面发射激光器(Vertical Cavity Surface Emitting Lase,VCSEL)、准直透镜和衍射光学元件(Diffractive Optical Elements,DOE)构成,VCSEL发射的激光波长为940nm,先经过准直透镜的准直,再通过DOE的衍射将光束整形并均匀的投射到空间,形成散斑场。DOE衍射图案是通过多区复制实现的,在使用过程中,DOE在外力作用下受损或者在潮湿环境水雾进入激光源并附着在DOE的表面等情况会导致DOE零级区域的能量显著增强,对人眼造成伤害。
请结合图4和图5,图4是激光源在无零级加强时投射的散斑图,正常使用时,散斑不会对人眼造成伤害。图5是激光源在零级加强时投射的散斑图,若零级区域的散斑投射到人眼,会对人眼造成伤害,而非零级区域的散斑投射到人眼,不会对人眼造成伤害。
本申请实施方式的电子设备10的控制方法和电子设备10,根据零级区域20在红外传感器12上的原始像素位置21和人眼区域30在可见光传感器13上的人眼像素位置31来判断人眼是否进入零级区域20,从而在人眼进入零级区域20时,触发红外发射器11的保护机制,可以避免由 于零级光束过强而对人眼造成伤害。
请参阅图2,电子设备10可以是手机、平板电脑、笔记本电脑、智能手环、智能手表等。本申请实施方式以电子设备10是手机为例进行说明,可以理解,电子设备10的具体形式可以是其他,在此不作限制。
电子设备10还可包括显示屏15和壳体16。红外发射器11、红外传感器12、可见光传感器13和显示屏15均可以安装在壳体16上。壳体16包括正面161和背面162,正面161与背面162相背。正面161用于安装显示屏15,显示屏15可用于显示图像、文字等信息,或接收用户的触控操作。红外发射器11、红外传感器12和可见光传感器13可以安装在正面161,以便用户进行自拍或进行视频通话等;红外发射器11、红外传感器12和可见光传感器13也可以安装在背面162,以便于拍摄景物及他人等。
在一个实施例中,红外发射器11和红外传感器12可构成结构光(Structured Light)模组。红外发射器11用于向目标空间中投射激光图案,红外传感器12用于采集激光图案以获取深度信息。在另一个实施例中,红外发射器11和红外传感器12可构成飞行时间(Time of flight,TOF)模组。红外发射器11用于向目标空间中发射激光,红外传感器12用于接收被反射的激光,并根据发射和接收的时间差获取深度信息。
可见光传感器13可构成可见光模组。可见光模组可以是长焦相机、广角相机或潜望式相机等。可见光传感器13可位于红外发射器11与红外传感器12之间,以使红外发射器11与红外传感器12之间具有较远的距离,提高结构光模组或TOF模组的基线(base line)长度,从而提高获取深度信息的准确性。
目前的结构光模组均是用作为前置,另外,还需要设置一个距离传感器来判断距离,以保护人眼安全。若是将结构光模组用作后置,存在如下缺点:1.需要增设一个距离传感器来判断距离以保护人眼安全(在距离近时采用小功率投射激光,在距离远时采用正常功率投射激光),导致成本增加;2.会占用手机背部的空间,不利于产品设计;3.近距离微小物体建模受到距离传感器的限制;4.非眼睛和皮肤类物体靠近,也会触发人眼保护,交互体验不好。
本申请实施方式利用人眼区域30在可见光传感器13上的人眼像素位置31和零级区域20的零级区域20在红外传感器12上的原始像素位置21,来判断人眼是否进入零级区域20。若人眼未进入零级区域20,则红外发射器11正常工作;若人眼进入零级区域20,则触发红外发射器11的保护机制,调整红外发射器11的工作模式,以避免由于零级光束过强而对人眼造成伤害。本申请实施方式的结构光模组或TOF模组既可以用于前置,也可以用于后置,且用在后置时,不需要增设距离传感器,节省了成本;同时给电子设备10背部节省了空间;也可以进行非眼睛和皮肤物体的近距离建模。
其中,请参阅图2及图3,人眼区域30在可见光传感器13上的人眼像素位置31指的是人眼区域30在可见光传感器13上的坐标位置。同样地,零级区域20在红外传感器12上的原始像素位置21指的是零级区域20在红外传感器12上的坐标位置,该坐标位置一般是固定不变的,处理器14可以在用户使用电子设备10之前预先获取并存储,并采用预定周期更新该坐标位置,以免在长期使用过程中,零级区域20的位置发生偏移(可能由DOE的位置偏移导致,或是DOE受损导致)。预先获取零级区域20在红外传感器12上的原始像素位置21,可以节省电子设备10判断人眼是否进入零级区域20所需要的时间。当然,处理器14也可以同时获取零级区域20在红外传感器12上的原始像素位置21、以及获取人眼区域30在可见光传感器13上的人眼像素位置31;或者,处理器14先获取人眼区域30在可见光传感器13上的人眼像素位置31,再获取零级区域20在红外传感器12上的原始像素位置21。
处理器14获取零级区域20在红外传感器12上的原始像素位置21的过程可以如下:先确定红外发射器11的零级区域20的位置,具体可通过在红外发射器11的出光口均匀设置多个光检测元件,例如光电二极管(Photo-Diode,PD),以检测红外发射器11的出光口的各个位置投射的激光强度,激光强度最强的位置即是零级区域20的位置;由于红外传感器12与可见光传感器13的相对位置是固定不变的,因此,可根据红外发射器11的零级区域20的位置转换得到零级区域20在红外传感器12上的原始像素位置21。
请参阅图6,在某些实施方式中,电子设备10的控制方法还包括:
0631:将原始像素位置21转换为零级区域20在可见光传感器13上的目标像素位置22;
0632:将目标像素位置22与人眼像素位置31进行坐标比对以判断人眼是否进入零级区域20。
请参阅图2,在某些实施方式中,处理器14可用于执行0631和0632中的方法。
也即是说,处理器14可以用于:将原始像素位置21转换为零级区域20在可见光传感器13上的目标像素位置22;将目标像素位置22与人眼像素位置31进行坐标比对以判断人眼是否进入零级区域20。
其中,图6中的061、062和064的内容及具体实施细节,可以参照本申请说明书中对011、012和014的描述,在此不再赘述。
请参阅图2和图3,由于在电子设备10组装好之后,红外传感器12与可见光传感器13之间的距离是固定不变的,因此,红外传感器12与可见光传感器13之间的距离可将在红外传感器12上的原始像素位置21转换为在可见光传感器13上的目标像素位置22。
处理器14在获取零级区域20在红外传感器12上的原始像素位置21之后,将原始像素位置21转换为零级区域20在可见光传感器13上的目标像素位置22,以便于同在可见光传感器13的坐标下,进行人眼区域30的人眼像素位置31与零级区域20的目标像素位置22的坐标比对,从而判断人眼是否进入零级区域20。与预先获取零级区域20在红外传感器12上的原始像素位置21类似地,处理器14也可以预先获取并存储零级区域20在可见光传感器13上的目标像素位置22,并采用预定周期更新该坐标位置,以节省电子设备10判断人眼是否进入零级区域20所需要的时间,同时保证获得的目标像素位置22的准确性。
当然,在其他实施方式中,处理器14也可以将人眼区域30在可见光传感器13上的人眼像素位置31转换为人眼区域30在红外传感器12上的目标像素位置31,以同在红外传感器12的坐标下,进行人眼区域30的目标像素位置31与零级区域20的原始像素位置21的坐标比对,从而判断人眼是否进入零级区域20。
请参阅图3和7,判断人眼进入零级区域20内包括人眼像素位置31的坐标范围完全落入目标像素位置22的坐标范围内(如图3所示),同时也包括人眼像素位置31的坐标范围部分落入目标像素位置22的坐标范围内(如图7所示)。在一个例子中,处理器14可根据人眼像素位置31与目标像素位置22的重叠部分占人眼像素位置31的比例来判断人眼是否进入零级区域20,若该比例大于预定比例,例如60%,则判断为人眼进入零级区域20内,该判断方式可以更大程度上确保人眼安全。
请参阅图8,在某些实施方式中,电子设备10的控制方法还包括:
0821:通过可见光传感器13获取可见光图像;
0824:在可见光图像中包括人眼区域30时,获取人眼区域30在可见光传感器13上的人眼像素位置31。
请参阅图2,在某些实施方式中,可见光传感器13可用于执行0821中的方法,处理器14可用于执行0824中的方法。
也即是说,可见光传感器13可以用于获取可见光图像。处理器14可以用于在可见光图像中包括人眼区域30时,获取人眼区域30在可见光传感器13上的人眼像素位置31。
其中,图8中的081、0831、0832和084的内容及具体实施细节,可以参照本申请说明书中对011、0631、0632和014的描述,在此不再赘述。
具体地,在一个例子中,通过可见光传感器13获取可见光图像可以是在红外发射器11开启时执行。也即是说,在电子设备10根据用户的控制或根据需要开启红外发射器11时,可见光传感器13同步开启以采集可见光图像,有利于更快速地获取人眼区域30在可见光传感器13上的人眼像素位置31,从而可以节省电子设备10判断人眼是否进入零级区域20所需要的时间。若是人眼未进入零级区域20,也可节省电子设备10获取深度信息的时间。
在另一个例子中,在人眼未进入零级区域20时,红外发射器11开启。也即是说,可见光传感器13先开启以采集可见光图像,以判断人眼是否进入零级区域20,在人眼未进入零级区域20时,红外发射器11才开启,以充分保护人眼。
请参阅图9,在某些实施方式中,电子设备10包括人眼模型库,电子设备10的控制方法还包括:
0922:读取可见光图像和人眼模板图像;
0923:计算可见光图像与人眼模板图像的相关性以判断可见光图像中是否包括人眼区域30。
请参阅图2,在某些实施方式中,电子设备10包括人眼模型库,人眼模型库中存储有人眼模板图像。处理器14可用于执行0922和0923中的方法。
也即是说,处理器14可以用于:读取可见光图像和人眼模板图像;计算可见光图像与人眼模板图像的相关性以判断可见光图像中是否包括人眼区域30。
其中,图9中的091、0931、0921、0924、0932和094的内容及具体实施细节,可以参照本申请说明书中对011、0631、0821、0824、0632和014的描述,在此不再赘述。
具体地,以图10为例,可见光传感器13获取的可见光图像为P0。人眼模型库中可存储有多个人眼模板图像,分别为P1、P2、P3等等,其中,多个人眼模板图像中可包括左眼图像和右眼图像。首先,处理器14读取可见光图像P0和人眼模板图像P1,然后计算可见光图像P0与人眼模板图像P1的相关性以判断可见光图像中是否包括人眼区域30。若是,则判断为可见光图像中包括人眼区域30;若否,则处理器14继续读取人眼模板图像P2,然后计算可见光图像P0与人眼模板图像P2的相关性以判断可见光图像中是否包括人眼区域30。若是,则判断为可见光图像中包括人眼区域30;若否,则处理器14继续读取人眼模板图像P3,然后计算可见光图像P0与人眼模板图像P3的相关性以判断可见光图像中是否包括人眼区域30。依次类推,直至处理器14根据可见光图像P0与每一人眼模板图像的相关性均判断可见光图像中不包括人眼区域30时,才最终判断为可见光图像中不包括人眼区域30。若处理器14根据可见光图像P0与任一人眼模板图像的相关性判断可见光图像中包括人眼区域30,则最终判断为可见光图像中包括人眼区域30,并停止读取剩下的人眼模板图像、以及计算可见光图像P0与剩下的人眼模板图像的相关性以判断可见光图像中是否包括人眼区域30。
在计算可见光图像与人眼模板图像的相关性之后,处理器14可根据可见光图像与人眼模板图像的相关性的数值来判断可见光图像中是否包括人眼区域30。当可见光图像与人眼模板图像越相似时,相关性的数值越高。因此,处理器14可在相关性的数值大于(或等于)预定数值时,判断可见光图像中包括人眼区域30;在相关性的数值小于(或等于)预定数值时,判断可见光图像中不包括人眼区域30。
请参阅图11,在某些实施方式中,电子设备10的控制方法还包括:
011232:分别获取可见光图像的灰度参数和人眼模板图像的灰度参数;
011234:根据人眼模板图像的灰度参数遍历可见光图像以获取可见光图像中的匹配区域,其中,匹配区域的灰度参数与人眼模板图像的灰度参数匹配;
011236:根据匹配区域判断可见光图像中是否包括人眼区域30。
请参阅图2,在某些实施方式中,处理器14可用于执行011232、011234和011236中的方法。
也即是说,处理器14可以用于:分别获取可见光图像的灰度参数和人眼模板图像的灰度参数;根据人眼模板图像的灰度参数遍历可见光图像以获取可见光图像中的匹配区域,其中,匹配区域的灰度参数与人眼模板图像的灰度参数匹配;根据匹配区域判断可见光图像中是否包括人眼区域30。
其中,图11中的0111、01131、01121、01122、01132、01124和0114的内容及具体实施细节,可以参照本申请说明书中对011、0631、0821、0922、0632、0824和014的描述,在此不再赘述。
具体地,仍以图10为例,处理器14可将可见光图像P0划分为多个子图像,然后获取每个子图像的灰度参数,同样地,处理器14也可以获取人眼模板图像P1、P2、P3……的灰度参数。灰度参数可包括灰度值和图像均值,图像均值即图像归一化后的灰度值。
首先,处理器14获取可见光图像P0的多个子图像的灰度参数以及人眼模板图像P1的灰度参数,然后根据人眼模板图像P1的灰度参数查找可见光图像P0的多个子图像中,与人眼模板图像P1在灰度参数上匹配的匹配区域,匹配区域可包括一个或多个子图像。当多个子图像相接时,多个子图像构成一个匹配区域;当多个子图像间隔时,匹配区域的数量为多个,每个子图像作为一个匹配区域。当处理器14根据人眼模板图像P1的灰度参数无法在可见光图像P0中获取匹配区域时,处理器14继续获取人眼模板图像P2的灰度参数,然后根据人眼模板图像P2的灰度参数查找可见光图像P0的多个子图像中,与人眼模板图像P2在灰度参数上匹配的匹配区域。当处理器14根据人眼模板图像P2的灰度参数无法在可见光图像P0中获取匹配区域时,处理器14继续获取人眼模板图像P3的灰度参数,然后根据人眼模板图像P3的灰度参数查找可见光图像P0的多个子图像中,与人眼模板图像P3在灰度参数上匹配的匹配区域。依次类推,直至处理器14根据所有的人眼模板图像的灰度参数均无法在可见光图像P0中获取匹配区域时,则判断为可见光图像P0中不存在匹配区域,此时,处理器14判断为可见光图像P0中不包括人眼区域30。若处理器14根据任一人眼模板图像的灰度参数能够在可见光图像P0中获取到匹配区域,则根据匹配区域判断可见光图像P0中包括人眼区域30,此时,匹配区域即可作为人眼区域30。
需要指出的是,处理器14根据匹配区域判断可见光图像P0中是否包括人眼区域30包括:在可见光图像P0中不存在匹配区域时,判断可见光图像P0中不包括人眼区域30;在可见光图像P0中存在匹配区域时,判断可见光图像P0中包括人眼区域30。或者,处理器14还可以在可见光图像P0中存在匹配区域时,增加其他限定条件以进一步根据匹配区域来判断可见光图像P0中是否包括人眼区域30,例如匹配区域的数量等,在单人拍照时,若匹配区域的数量大于2,则匹配区域的判断可能是存在问题的,此时,可见光图像P0中可能不包括人眼区域30。
Referring to FIG. 12, in some embodiments, the control method of the electronic device 10 further includes:
012242: marking the matching region in the visible light image;
012244: obtaining an edge pixel position of the matching region on the visible light sensor 13;
012322: performing a coordinate comparison between the target pixel position 22 and the edge pixel position to determine whether a human eye has entered the zero-order region 20.
Referring to FIG. 2, in some embodiments, the processor 14 may be configured to perform the methods in 012242, 012244 and 012322.
That is, the processor 14 may be configured to: mark the matching region in the visible light image; obtain the edge pixel position of the matching region on the visible light sensor 13; and perform a coordinate comparison between the target pixel position 22 and the edge pixel position to determine whether a human eye has entered the zero-order region 20.
The contents and implementation details of 0121, 01231, 01221, 01222, 012232, 012234, 012236 and 0124 in FIG. 12 may refer to the description of 011, 0631, 0821, 0922, 011232, 011234, 011236 and 014 in this specification, and are not repeated here.
Specifically, the matching region in the visible light image may be marked with a rectangular frame, an annular frame, or a frame of any irregular shape; marking makes the edge of the matching region easier to identify. In the example of FIG. 10, the matching region is the human eye region 30, and the human eye (pixels) includes edge (pixels), so obtaining the edge pixel position of the matching region on the visible light sensor 13 amounts to obtaining the edge pixel position of the human eye region 30 on the visible light sensor 13. In other examples, the area of the matching region may be larger than that of the human eye region 30, i.e., the matching region covers the human eye region 30; no limitation is made here. Compared with comparing the target pixel position 22 against the full human eye pixel position 31, comparing it against the edge pixel position reduces the computation load of the processor 14, and the edge-based determination better ensures eye safety.
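As a sketch of why the edge comparison is cheaper, the illustrative helper below (the names are assumptions, not from the specification) enumerates only the border pixels of a rectangular match-region frame, so the coordinate comparison runs over this far smaller set instead of every pixel of the region.

```python
def edge_pixels(region):
    """Border pixels of a rectangular frame (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = region
    top = [(x, y1) for x in range(x1, x2)]
    bottom = [(x, y2) for x in range(x1, x2)]
    left = [(x1, y) for y in range(y1, y2)]
    right = [(x2, y) for y in range(y1, y2)]
    return top + bottom + left + right
```

Only the four border runs of the rectangle are checked, so the cost grows with the perimeter of the region rather than its area.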
Referring to FIG. 3 and FIG. 13, in some embodiments, the human eye includes a pupil, and the control method of the electronic device 10 further includes:
01321: obtaining a visible light image through the visible light sensor 13;
01325: when the visible light image includes a human face region, determining a preliminary pixel position of the human eye region 30 on the visible light sensor 13 according to the human face region;
01326: determining a human eye contour according to the preliminary pixel position;
01327: determining a pupil pixel position 32 according to the human eye contour;
013324: performing a coordinate comparison between the target pixel position 22 and the pupil pixel position 32 to determine whether the pupil has entered the zero-order region 20;
01341: triggering the protection mechanism of the infrared emitter 11 when the pupil enters the zero-order region 20.
Referring to FIG. 2, in some embodiments, the human eye includes a pupil. The visible light sensor 13 may be configured to perform the method in 01321, and the processor 14 may be configured to perform the methods in 01325, 01326, 01327, 013324 and 01341.
That is, the visible light sensor 13 may be configured to obtain a visible light image. The processor 14 may be configured to: when the visible light image includes a human face region, determine the preliminary pixel position of the human eye region 30 on the visible light sensor 13 according to the human face region; determine the human eye contour according to the preliminary pixel position; determine the pupil pixel position 32 according to the human eye contour; perform a coordinate comparison between the target pixel position 22 and the pupil pixel position 32 to determine whether the pupil has entered the zero-order region 20; and trigger the protection mechanism of the infrared emitter 11 when the pupil enters the zero-order region 20.
The contents and implementation details of 0131 and 01331 in FIG. 13 may refer to the description of 011 and 0631 in this specification, and are not repeated here.
Specifically, referring to FIG. 3, the processor 14 may identify whether the visible light image includes a human face region through face detection. When it does, the preliminary pixel position of the human eye region 30 on the visible light sensor 13 can be determined initially from the distribution pattern of the human eye region 30 within the human face region (a pattern that can be derived from extensive experimental statistics; for example, the human eye regions generally lie in the upper-left and upper-right portions of the face region with certain typical length and width proportions). The human eye contour is then determined more precisely, the pupil pixel position 32 is determined from the human eye contour, and a coordinate comparison between the pupil pixel position 32 and the target pixel position 22 determines whether the pupil has entered the zero-order region 20. Since the pupil is the channel through which light enters the eye, this way of determining whether a human eye has entered the zero-order region 20 is accurate and rigorous.
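The "distribution pattern" step can be sketched as follows: the two preliminary eye rectangles are placed at fixed fractions of the detected face rectangle. The fractions below are illustrative placeholders, not the experimentally derived statistics referred to above.

```python
def preliminary_eye_positions(face):
    """face: (x, y, w, h) rectangle on the visible light sensor."""
    fx, fy, fw, fh = face
    eye_w, eye_h = int(0.30 * fw), int(0.20 * fh)
    # eyes assumed to sit in the upper-left and upper-right of the face region
    left = (fx + int(0.15 * fw), fy + int(0.25 * fh), eye_w, eye_h)
    right = (fx + int(0.55 * fw), fy + int(0.25 * fh), eye_w, eye_h)
    return left, right
```

These coarse rectangles only seed the later, more precise contour detection; they do not need to be tight.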
Referring to FIG. 3 and FIG. 14, in some embodiments, the control method of the electronic device 10 further includes:
014261: locating a left eye region and/or a right eye region in the preliminary pixel position using a classifier;
014262: binarizing the human face region;
014263: performing edge detection on the left eye region and/or the right eye region to determine the human eye contour;
014271: determining whether the human eye contour is of a predetermined shape;
014272: converting the human eye contour into a point set when the human eye contour is of the predetermined shape;
014273: fitting the point set using a least squares method to determine the pupil pixel position 32.
Referring to FIG. 2, in some embodiments, the processor 14 may be configured to perform the methods in 014261, 014262, 014263, 014271, 014272 and 014273.
That is, the processor 14 may be configured to: locate a left eye region and/or a right eye region in the preliminary pixel position using a classifier; binarize the human face region; perform edge detection on the left eye region and/or the right eye region to determine the human eye contour; determine whether the human eye contour is of a predetermined shape; convert the human eye contour into a point set when it is of the predetermined shape; and fit the point set using the least squares method to determine the pupil pixel position 32.
The contents and implementation details of 0141, 01431, 01421, 01425, 014324 and 01441 in FIG. 14 may refer to the description of 011, 0631, 01321, 01325, 013324 and 01341 in this specification, and are not repeated here.
Specifically, the processor 14 may use an eye classifier/eye training model to locate the left eye region and/or the right eye region. It can be understood that when the visible light image captures only the left eye, the left eye region is located; when it captures only the right eye, the right eye region is located; and when it captures both eyes, both regions are located. Binarizing the human face region makes the human eye contour stand out, which facilitates the subsequent edge detection that determines the contour. After determining the human eye contour by edge detection on the left eye region and/or the right eye region, the processor 14 further determines whether the contour is of a predetermined shape, for example an arc. It can be understood that when the contour is not of the predetermined shape, for example when it is square or triangular, the contour is actually not a human eye, and the contour detection needs to be performed again. When the human eye contour is of the predetermined shape, the processor 14 converts the contour into a point set and performs a least squares curve fit on the point set to obtain the arc center, which is the pupil pixel position 32.
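A minimal sketch of the final fitting step, assuming the contour points have already been extracted as pixel coordinates: a linear least-squares (Kåsa-style) circle fit whose fitted center is taken as the pupil pixel position 32. Input validation and the predetermined-shape check are omitted here.

```python
import numpy as np

def fit_pupil_center(points: np.ndarray):
    """points: Nx2 array of contour pixels; returns the fitted center (cx, cy)."""
    x, y = points[:, 0], points[:, 1]
    # circle model: x^2 + y^2 = 2*cx*x + 2*cy*y + c, solved linearly
    A = np.column_stack([2 * x, 2 * y, np.ones(len(points))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy                      # estimated pupil pixel position
```

Writing the circle equation in this linear form lets an ordinary least-squares solver recover the center without iterative optimization, which suits a resource-constrained device.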
Referring to FIG. 15 and FIG. 16, in some embodiments, the control method of the electronic device 10 further includes:
01542: reducing at least one of the current, pulse width, or frame rate of the infrared emitter 11 when a human eye enters the zero-order region 20; or
01643: turning off the infrared emitter 11 when a human eye enters the zero-order region 20.
Referring to FIG. 2, in some embodiments, the processor 14 may be configured to perform the method in 01542 or 01643.
That is, the processor 14 may be configured to: reduce at least one of the current, pulse width, or frame rate of the infrared emitter 11 when a human eye enters the zero-order region 20; or turn off the infrared emitter 11 when a human eye enters the zero-order region 20.
The contents and implementation details of 0151, 0152 and 0153 in FIG. 15, and of 0161, 0162 and 0163 in FIG. 16, may refer to the description of 011, 012 and 013 in this specification, and are not repeated here.
Specifically, when a human eye enters the zero-order region 20, the processor 14 may reduce at least one of the current, pulse width, or frame rate of the infrared emitter 11 to keep the eye safe. For example, the processor 14 may reduce the current with which the infrared emitter 11 emits laser light (as shown in FIG. 17), so that the current drops to 1/2, 1/3, 1/4, etc. of the current used when no human eye is in the zero-order region 20; or reduce the pulse width of the emitted laser (as shown in FIG. 18) to 1/2, 1/3, 1/4, etc. of the pulse width used when no human eye is in the zero-order region 20; or reduce the frame rate of the emitted laser (as shown in FIG. 19) to 1/2, 1/3, 1/4, etc. of the frame rate used when no human eye is in the zero-order region 20; or reduce the current, pulse width and frame rate together, the combinations not being enumerated one by one here.
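As an illustration of the parameter-reduction branch of the protection mechanism, the sketch below uses a hypothetical emitter wrapper rather than a real driver API; the default factor of 0.5 corresponds to the 1/2 example above.

```python
class IrEmitter:
    """Hypothetical driver wrapper; fields and defaults are placeholders."""
    def __init__(self, current_ma=100.0, pulse_width_us=400.0, frame_rate=30.0):
        self.current_ma = current_ma
        self.pulse_width_us = pulse_width_us
        self.frame_rate = frame_rate

def trigger_protection(emitter: IrEmitter, factor=0.5):
    emitter.current_ma *= factor       # lower the drive current
    emitter.pulse_width_us *= factor   # shorten each laser pulse
    emitter.frame_rate *= factor       # emit fewer frames per second
```

Any subset of the three reductions could be applied instead, matching the "at least one of" wording of step 01542.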
Of course, when a human eye enters the zero-order region 20, the processor 14 may also directly turn off the infrared emitter 11 to keep the eye safe.
Referring to FIG. 20, in some embodiments, the control method of the electronic device 10 further includes:
02044: prompting the user when a human eye enters the zero-order region 20.
Referring to FIG. 2, in some embodiments, the processor 14 may be configured to perform the method in 02044.
That is, the processor 14 may be configured to prompt the user when a human eye enters the zero-order region 20.
The contents and implementation details of 0201, 0202 and 0203 in FIG. 20 may refer to the description of 011, 012 and 013 in this specification, and are not repeated here.
Specifically, when a human eye enters the zero-order region 20, the processor 14 may prompt the user to turn off the infrared emitter 11, or prompt the user to move the head so that the eye moves outside the zero-order region 20. More specifically, the processor 14 may also, according to where the eye lies within the zero-order region 20, prompt the direction in which the user's head should move, for example to the left or to the right, so that the eye moves out of the zero-order region 20.
In addition, when a human eye enters the zero-order region 20, the processor 14 may reduce at least one of the current, pulse width, or frame rate of the infrared emitter 11 while prompting the user; or the processor 14 may turn off the infrared emitter 11 while prompting the user.
Referring to FIG. 21, embodiments of the present application further provide a non-volatile computer-readable storage medium 50 containing computer-readable instructions. When executed by the processor 14, the computer-readable instructions cause the processor 14 to perform the control method of the electronic device 10 of any of the above embodiments.
For example, when executed by the processor 14, the computer-readable instructions cause the processor 14 to perform the following control method of the electronic device 10:
011: obtaining the original pixel position 21 of the zero-order region 20 on the infrared sensor 12;
012: obtaining the human eye pixel position 31 of the human eye region 30 on the visible light sensor 13;
013: determining whether a human eye has entered the zero-order region 20 according to the original pixel position 21 and the human eye pixel position 31;
014: triggering the protection mechanism of the infrared emitter 11 when a human eye enters the zero-order region 20.
As another example, when executed by the processor 14, the computer-readable instructions cause the processor 14 to perform the following control method of the electronic device 10:
0631: converting the original pixel position 21 into the target pixel position 22 of the zero-order region 20 on the visible light sensor 13;
0632: performing a coordinate comparison between the target pixel position 22 and the human eye pixel position 31 to determine whether a human eye has entered the zero-order region 20.
In the description of this specification, references to the terms "some embodiments", "in one example", "exemplarily", and the like mean that a specific feature, structure, material or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic references to the above terms do not necessarily refer to the same embodiment or example. Moreover, the described specific features, structures, materials or characteristics may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine different embodiments or examples described in this specification, and features of different embodiments or examples, provided they do not contradict each other.
Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment or portion of code that includes one or more executable instructions for implementing steps of a specific logical function or process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be construed as limiting the present application, and those of ordinary skill in the art may make changes, modifications, replacements and variations to the above embodiments within the scope of the present application.

Claims (20)

  1. A control method for an electronic device, wherein the electronic device comprises an infrared emitter, an infrared sensor and a visible light sensor, and the control method comprises:
    obtaining an original pixel position of a zero-order region on the infrared sensor;
    obtaining a human eye pixel position of a human eye region on the visible light sensor;
    determining whether a human eye has entered the zero-order region according to the original pixel position and the human eye pixel position; and
    triggering a protection mechanism of the infrared emitter when a human eye enters the zero-order region.
  2. The control method according to claim 1, wherein the determining whether a human eye has entered the zero-order region according to the original pixel position and the human eye pixel position comprises:
    converting the original pixel position into a target pixel position of the zero-order region on the visible light sensor; and
    performing a coordinate comparison between the target pixel position and the human eye pixel position to determine whether a human eye has entered the zero-order region.
  3. The control method according to claim 2, wherein the obtaining a human eye pixel position of a human eye region on the visible light sensor comprises:
    obtaining a visible light image through the visible light sensor; and
    obtaining the human eye pixel position of the human eye region on the visible light sensor when the visible light image includes the human eye region.
  4. The control method according to claim 3, wherein the electronic device comprises a human eye model library storing a human eye template image, and the obtaining a human eye pixel position of a human eye region on the visible light sensor further comprises:
    reading the visible light image and the human eye template image; and
    calculating a correlation between the visible light image and the human eye template image to determine whether the visible light image includes the human eye region.
  5. The control method according to claim 4, wherein the calculating a correlation between the visible light image and the human eye template image to determine whether the visible light image includes the human eye region comprises:
    obtaining a grayscale parameter of the visible light image and a grayscale parameter of the human eye template image, respectively;
    traversing the visible light image according to the grayscale parameter of the human eye template image to obtain a matching region in the visible light image, wherein the grayscale parameter of the matching region matches the grayscale parameter of the human eye template image; and
    determining whether the visible light image includes the human eye region according to the matching region.
  6. The control method according to claim 5, wherein the obtaining the human eye pixel position of the human eye region on the visible light sensor when the visible light image includes the human eye region comprises:
    marking the matching region in the visible light image; and
    obtaining an edge pixel position of the matching region on the visible light sensor;
    and the performing a coordinate comparison between the target pixel position and the human eye pixel position to determine whether a human eye has entered the zero-order region comprises:
    performing a coordinate comparison between the target pixel position and the edge pixel position to determine whether a human eye has entered the zero-order region.
  7. The control method according to claim 2, wherein the human eye includes a pupil, and the obtaining a human eye pixel position of a human eye region on the visible light sensor comprises:
    obtaining a visible light image through the visible light sensor;
    when the visible light image includes a human face region, determining a preliminary pixel position of the human eye region on the visible light sensor according to the human face region;
    determining a human eye contour according to the preliminary pixel position; and
    determining a pupil pixel position according to the human eye contour;
    the performing a coordinate comparison between the target pixel position and the human eye pixel position to determine whether a human eye has entered the zero-order region comprises:
    performing a coordinate comparison between the target pixel position and the pupil pixel position to determine whether the pupil has entered the zero-order region;
    and the triggering a protection mechanism of the infrared emitter when a human eye enters the zero-order region comprises:
    triggering the protection mechanism of the infrared emitter when the pupil enters the zero-order region.
  8. The control method according to claim 7, wherein the determining a human eye contour according to the preliminary pixel position comprises:
    locating a left eye region and/or a right eye region in the preliminary pixel position using a classifier;
    binarizing the human face region; and
    performing edge detection on the left eye region and/or the right eye region to determine the human eye contour;
    and the determining a pupil pixel position according to the human eye contour comprises:
    determining whether the human eye contour is of a predetermined shape;
    converting the human eye contour into a point set when the human eye contour is of the predetermined shape; and
    fitting the point set using a least squares method to determine the pupil pixel position.
  9. The control method according to claim 1, wherein the triggering a protection mechanism of the infrared emitter when a human eye enters the zero-order region comprises:
    reducing at least one of a current, a pulse width, or a frame rate of the infrared emitter when a human eye enters the zero-order region; or
    turning off the infrared emitter when a human eye enters the zero-order region.
  10. The control method according to claim 1, wherein the triggering a protection mechanism of the infrared emitter when a human eye enters the zero-order region comprises:
    prompting a user when a human eye enters the zero-order region.
  11. An electronic device, comprising an infrared emitter, an infrared sensor, a visible light sensor and a processor, wherein the processor is configured to:
    obtain an original pixel position of a zero-order region on the infrared sensor;
    obtain a human eye pixel position of a human eye region on the visible light sensor;
    determine whether a human eye has entered the zero-order region according to the original pixel position and the human eye pixel position; and
    trigger a protection mechanism of the infrared emitter when a human eye enters the zero-order region.
  12. The electronic device according to claim 11, wherein the processor is configured to:
    convert the original pixel position into a target pixel position of the zero-order region on the visible light sensor; and
    perform a coordinate comparison between the target pixel position and the human eye pixel position to determine whether a human eye has entered the zero-order region.
  13. The electronic device according to claim 12, wherein
    the visible light sensor is configured to obtain a visible light image; and
    the processor is configured to obtain the human eye pixel position of the human eye region on the visible light sensor when the visible light image includes the human eye region.
  14. The electronic device according to claim 13, wherein the electronic device comprises a human eye model library storing a human eye template image, and the processor is configured to:
    read the visible light image and the human eye template image; and
    calculate a correlation between the visible light image and the human eye template image to determine whether the visible light image includes the human eye region.
  15. The electronic device according to claim 14, wherein the processor is configured to:
    obtain a grayscale parameter of the visible light image and a grayscale parameter of the human eye template image, respectively;
    traverse the visible light image according to the grayscale parameter of the human eye template image to obtain a matching region in the visible light image, wherein the grayscale parameter of the matching region matches the grayscale parameter of the human eye template image; and
    determine whether the visible light image includes the human eye region according to the matching region.
  16. The electronic device according to claim 15, wherein the processor is configured to:
    mark the matching region in the visible light image;
    obtain an edge pixel position of the matching region on the visible light sensor; and
    perform a coordinate comparison between the target pixel position and the edge pixel position to determine whether a human eye has entered the zero-order region.
  17. The electronic device according to claim 12, wherein the human eye includes a pupil,
    the visible light sensor is configured to obtain a visible light image; and
    the processor is configured to:
    when the visible light image includes a human face region, determine a preliminary pixel position of the human eye region on the visible light sensor according to the human face region;
    determine a human eye contour according to the preliminary pixel position;
    determine a pupil pixel position according to the human eye contour;
    perform a coordinate comparison between the target pixel position and the pupil pixel position to determine whether the pupil has entered the zero-order region; and
    trigger the protection mechanism of the infrared emitter when the pupil enters the zero-order region.
  18. The electronic device according to claim 17, wherein the processor is configured to:
    locate a left eye region and/or a right eye region in the preliminary pixel position using a classifier;
    binarize the human face region;
    perform edge detection on the left eye region and/or the right eye region to determine the human eye contour;
    determine whether the human eye contour is of a predetermined shape;
    convert the human eye contour into a point set when the human eye contour is of the predetermined shape; and
    fit the point set using a least squares method to determine the pupil pixel position.
  19. The electronic device according to claim 11, wherein the processor is configured to:
    reduce at least one of a current, a pulse width, or a frame rate of the infrared emitter when a human eye enters the zero-order region; or
    turn off the infrared emitter when a human eye enters the zero-order region; and/or
    prompt a user when a human eye enters the zero-order region.
  20. A non-volatile computer-readable storage medium containing computer-readable instructions, wherein, when executed by a processor, the computer-readable instructions cause the processor to perform the control method for an electronic device according to any one of claims 1-10.
PCT/CN2019/089615 2019-05-31 2019-05-31 Control method for electronic device, electronic device and computer-readable storage medium WO2020237658A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201980095503.2A CN113711229B (zh) 2019-05-31 2019-05-31 Control method for electronic device, electronic device and computer-readable storage medium
PCT/CN2019/089615 WO2020237658A1 (zh) 2019-05-31 2019-05-31 Control method for electronic device, electronic device and computer-readable storage medium
EP19931080.6A EP3975034A4 (en) 2019-05-31 2019-05-31 CONTROL METHOD FOR ELECTRONIC DEVICE, ELECTRONIC DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
US17/532,448 US11836956B2 (en) 2019-05-31 2021-11-22 Control method for electronic device, electronic device and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2019/089615 WO2020237658A1 (zh) 2019-05-31 2019-05-31 Control method for electronic device, electronic device and computer-readable storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/532,448 Continuation US11836956B2 (en) 2019-05-31 2021-11-22 Control method for electronic device, electronic device and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2020237658A1 true WO2020237658A1 (zh) 2020-12-03

Family

ID=73552667

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/089615 WO2020237658A1 (zh) 2019-05-31 2019-05-31 电子设备的控制方法、电子设备和计算机可读存储介质

Country Status (4)

Country Link
US (1) US11836956B2 (zh)
EP (1) EP3975034A4 (zh)
CN (1) CN113711229B (zh)
WO (1) WO2020237658A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113111762A (zh) * 2021-04-07 2021-07-13 瑞芯微电子股份有限公司 Face recognition method, detection method, medium and electronic device
CN113268137A (zh) * 2021-02-03 2021-08-17 深圳赋能软件有限公司 Human eye protection device and method, identity recognition device and electronic device
CN114501737A (zh) * 2022-02-23 2022-05-13 北京太格时代自动化系统设备有限公司 Railway tunnel lighting system and method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104573667A (zh) * 2015-01-23 2015-04-29 北京中科虹霸科技有限公司 Iris recognition device for improving iris image quality of a mobile terminal
US9632312B1 (en) * 2013-04-30 2017-04-25 Google Inc. Optical combiner with curved diffractive optical element
CN108600740A (zh) * 2018-04-28 2018-09-28 Oppo广东移动通信有限公司 Optical element detection method and apparatus, electronic device, and storage medium
CN109753925A (zh) * 2018-12-29 2019-05-14 深圳三人行在线科技有限公司 Iris feature extraction method and device

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4455419B2 (ja) * 2005-06-15 2010-04-21 オリンパスメディカルシステムズ株式会社 Stereoscopic image observation apparatus for surgery
EP2460357A1 (en) * 2009-07-31 2012-06-06 Lemoptix SA Optical micro-projection system and projection method
WO2015113479A1 (zh) * 2014-01-28 2015-08-06 北京中科虹霸科技有限公司 Mobile terminal iris recognition device and method having a human-computer interaction mechanism
US20170061210A1 (en) * 2015-08-26 2017-03-02 Intel Corporation Infrared lamp control for use with iris recognition authentication
KR102525126B1 (ko) * 2016-07-29 2023-04-25 삼성전자주식회사 Electronic device including an iris camera
CN107463880A (zh) * 2017-07-07 2017-12-12 广东欧珀移动通信有限公司 Control method, electronic device, and computer-readable storage medium
EP3567851A4 (en) * 2018-03-12 2020-07-29 Guangdong Oppo Mobile Telecommunications Corp., Ltd. PROJECTOR, DETECTION METHOD AND DEVICE THEREFOR, IMAGE DETECTING DEVICE, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM
CN108716983B (zh) * 2018-04-28 2019-07-23 Oppo广东移动通信有限公司 Optical element detection method and apparatus, electronic device, and storage medium
CN108716982B (zh) * 2018-04-28 2020-01-10 Oppo广东移动通信有限公司 Optical element detection method and apparatus, electronic device, and storage medium
US20190306441A1 (en) * 2018-04-03 2019-10-03 Mediatek Inc. Method And Apparatus Of Adaptive Infrared Projection Control
WO2019223002A1 (zh) * 2018-05-25 2019-11-28 深圳阜时科技有限公司 Structured light detection device and detection method, identity recognition device, and electronic device
CN109194869A (zh) * 2018-10-09 2019-01-11 Oppo广东移动通信有限公司 Control method, control device, depth camera, and electronic device
CN109784246A (zh) * 2018-12-29 2019-05-21 深圳三人行在线科技有限公司 Information association method and device
CN117529886A (zh) * 2021-08-02 2024-02-06 Oppo广东移动通信有限公司 Wireless communication method, terminal, and network device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9632312B1 (en) * 2013-04-30 2017-04-25 Google Inc. Optical combiner with curved diffractive optical element
CN104573667A (zh) * 2015-01-23 2015-04-29 北京中科虹霸科技有限公司 Iris recognition device for improving iris image quality of a mobile terminal
CN108600740A (zh) * 2018-04-28 2018-09-28 Oppo广东移动通信有限公司 Optical element detection method and apparatus, electronic device, and storage medium
CN109753925A (zh) * 2018-12-29 2019-05-14 深圳三人行在线科技有限公司 Iris feature extraction method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3975034A4 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113268137A (zh) * 2021-02-03 2021-08-17 深圳赋能软件有限公司 Human eye protection device and method, identity recognition device and electronic device
CN113111762A (zh) * 2021-04-07 2021-07-13 瑞芯微电子股份有限公司 Face recognition method, detection method, medium and electronic device
CN113111762B (zh) * 2021-04-07 2024-04-05 瑞芯微电子股份有限公司 Face recognition method, detection method, medium and electronic device
CN114501737A (zh) * 2022-02-23 2022-05-13 北京太格时代自动化系统设备有限公司 Railway tunnel lighting system and method
CN114501737B (zh) * 2022-02-23 2024-04-19 北京太格时代电气股份有限公司 Railway tunnel lighting system and method

Also Published As

Publication number Publication date
CN113711229A (zh) 2021-11-26
CN113711229B (zh) 2024-03-12
US11836956B2 (en) 2023-12-05
US20220148214A1 (en) 2022-05-12
EP3975034A4 (en) 2022-06-15
EP3975034A1 (en) 2022-03-30

Similar Documents

Publication Publication Date Title
US11836956B2 (en) Control method for electronic device, electronic device and computer readable storage medium
EP3637367B1 (en) Method and apparatus for controlling structured light projector, and electronic device
US11503228B2 (en) Image processing method, image processing apparatus and computer readable storage medium
ES2957329T3 (es) Systems and methods for eye tracking in virtual reality and augmented reality applications
EP2824923B1 (en) Apparatus, system and method for projecting images onto predefined portions of objects
US11335028B2 (en) Control method based on facial image, related control device, terminal and computer device
US20200082160A1 (en) Face recognition module with artificial intelligence models
CN111083453B (zh) Projection device and method, and computer-readable storage medium
US11143879B2 (en) Semi-dense depth estimation from a dynamic vision sensor (DVS) stereo pair and a pulsed speckle pattern projector
TW202023261A (zh) 控制方法、微處理器、電腦可讀記錄媒體及電腦設備
JP2017102768A (ja) Information processing device, display device, information processing method, and program
US9049369B2 (en) Apparatus, system and method for projecting images onto predefined portions of objects
JP6970376B2 (ja) Image processing system and image processing method
CN104658462B (zh) Projector and projector control method
US20200241697A1 (en) Position detecting method, position detecting device, and interactive projector
TWI712005B (zh) Method for multi-spectral high-accuracy object identification
US20220206159A1 (en) Processing apparatus, electronic apparatus, processing method, and program
TWI535288B (zh) Depth camera system
US11283970B2 (en) Image processing method, image processing apparatus, electronic device, and computer readable storage medium
JP2005148813A (ja) Three-dimensional shape detection device, imaging device, and three-dimensional shape detection program
JP2005148813A5 (zh)
KR20170002545A (ko) Image capture processing method and apparatus
WO2020237657A1 (zh) Control method for electronic device, electronic device and computer-readable storage medium
CN105204609B (zh) Depth camera system
KR20220096115A (ko) Electronic device and control method thereof

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19931080

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2019931080

Country of ref document: EP

Effective date: 20211222