WO2020073572A1 - Control method, control device, depth camera and electronic device - Google Patents

Control method, control device, depth camera and electronic device

Info

Publication number
WO2020073572A1
Authority
WO
WIPO (PCT)
Prior art keywords
visible light
image
depth
distance
detection area
Prior art date
Application number
PCT/CN2019/075380
Other languages
English (en)
French (fr)
Inventor
Li Xiaopeng (李小朋)
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp., Ltd.
Publication of WO2020073572A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/521 Depth or shape recovery from laser ranging, e.g. using interferometry; from the projection of structured light
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/254 Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/296 Synchronisation thereof; Control thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/54 Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/56 Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224 Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • H04N5/2226 Determination of depth image, e.g. for foreground/background separation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10141 Special mode during image acquisition
    • G06T2207/10152 Varying illumination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Definitions

  • the present application relates to the technical field of three-dimensional imaging, in particular to a control method, control device, depth camera and electronic device.
  • the structured light projector can project laser light with predetermined pattern information, and project the laser light on the target user located in the space, and then obtain the laser pattern reflected by the target user through the image collector to further obtain the depth image of the target user.
  • the laser light emitted by the structured light projector is usually an infrared laser. When the energy of the infrared laser is too high, it is easy to cause damage to the eyes of the user.
  • the embodiments of the present application provide a control method, a control device, a depth camera, and an electronic device.
  • the control method of the structured light projector includes: acquiring a depth image and an initial visible light image of the scene; determining whether a human face exists in the initial visible light image; when the human face exists in the initial visible light image, calculating the distance between the human face and the structured light projector according to the initial visible light image and the depth image; and adjusting the luminous power of the structured light projector according to the distance.
  • the control device of the embodiment of the present application includes an acquisition module, a judgment module, a calculation module, and an adjustment module.
  • the acquisition module is used to acquire a depth image and an initial visible light image of the scene.
  • the judgment module is used to judge whether there is a human face in the initial visible light image.
  • the calculation module is configured to calculate the distance between the human face and the structured light projector according to the initial visible light image and the depth image when the human face exists in the initial visible light image.
  • the adjustment module is used to adjust the luminous power of the structured light projector according to the distance.
  • the depth camera includes a structured light projector and a processor.
  • the processor is used to: acquire a depth image and an initial visible light image of the scene; determine whether a human face exists in the initial visible light image; and in the initial visible light image When there is a human face in it, calculate the distance between the human face and the structured light projector according to the initial visible light image and the depth image; adjust the luminous power of the structured light projector according to the distance .
  • the electronic device includes a housing and a depth camera.
  • the depth camera is provided on the housing.
  • the depth camera includes a structured light projector and a processor.
  • the processor is used to: acquire a depth image and an initial visible light image of the scene; determine whether there is a human face in the initial visible light image; when the human face exists in the initial visible light image, calculate the distance between the human face and the structured light projector according to the initial visible light image and the depth image; and adjust the luminous power of the structured light projector according to the distance.
  • FIG. 1 is a schematic flowchart of a control method in some embodiments of the present application.
  • FIG. 2 is a schematic block diagram of a control device according to some embodiments of the present application.
  • FIGS. 3 and 4 are schematic structural diagrams of an electronic device according to some embodiments of the present application.
  • FIG. 5 is a schematic flowchart of a control method according to some embodiments of the present application.
  • FIG. 6 is a schematic block diagram of an acquisition module of a control device according to some embodiments of the present application.
  • FIG. 7 is a schematic flowchart of a control method according to some embodiments of the present application.
  • FIG. 8 is a schematic flowchart of a control method according to some embodiments of the present application.
  • FIG. 9 is a schematic block diagram of a calculation module of a control device according to some embodiments of the present application.
  • FIG. 10 is a schematic block diagram of a second acquisition unit of a control device according to some embodiments of the present application.
  • FIG. 11 is a schematic flowchart of a control method according to some embodiments of the present application.
  • FIG. 12 is a schematic block diagram of a control device according to some embodiments of the present application.
  • FIG. 13 is a schematic diagram of a light source partition of a structured light projector according to some embodiments of the present application.
  • The control method includes:
  • 01: Acquire a depth image and an initial visible light image of the scene;
  • 02: Determine whether a human face exists in the initial visible light image;
  • 03: When a human face exists in the initial visible light image, calculate the distance between the human face and the structured light projector 100 according to the initial visible light image and the depth image; and
  • 04: Adjust the luminous power of the structured light projector 100 according to the distance.
  • the present application also provides a control device 10 for a structured light projector 100.
  • the control method of the embodiment of the present application may be implemented by the control device 10 of the embodiment of the present application.
  • the control device 10 includes an acquisition module 11, a judgment module 12, a calculation module 13 and an adjustment module 14.
  • Step 01 may be implemented by the obtaining module 11.
  • Step 02 can be implemented by the judgment module 12.
  • Step 03 can be implemented by the calculation module 13.
  • Step 04 may be implemented by the adjustment module 14.
  • the acquisition module 11 can be used to acquire the depth image and the initial visible light image of the scene.
  • the judgment module 12 may be used to judge whether there is a human face in the initial visible light image.
  • the calculation module 13 may be used to calculate the distance between the human face and the structured light projector 100 according to the initial visible light image and the depth image when there is a human face in the initial visible light image.
  • the adjustment module 14 can be used to adjust the luminous power of the structured light projector 100 according to the distance.
  • the present application also provides a depth camera 400.
  • the depth camera 400 includes a structured light projector 100, a structured light camera 200, and a processor 300.
  • Step 01, step 02, step 03, and step 04 can all be implemented by the processor 300. That is to say, the processor 300 can be used to acquire the depth image and the initial visible light image of the scene, determine whether there is a human face in the initial visible light image, calculate the distance between the human face and the structured light projector 100 according to the initial visible light image and the depth image when there is a human face in the initial visible light image, and adjust the luminous power of the structured light projector 100 according to the distance.
  • the electronic device 800 includes a housing 801 and the depth camera 400 described above.
  • the depth camera 400 is provided on the housing 801.
  • the processor 300 in the depth camera 400 may be a processor 300 integrated separately in the depth camera 400; alternatively, the depth camera 400 may share one processor with the electronic device 800, in which case the processor is independent of the depth camera 400 and integrated in the electronic device 800.
  • in the specific embodiments of the present application, the depth camera 400 and the electronic device 800 share one processor.
  • the electronic device 800 may be a mobile phone, a tablet computer, a game machine, a smart watch, a smart bracelet, a head-mounted display device, a drone, or the like.
  • the embodiments of the present application are described by taking the electronic device 800 as a mobile phone as an example. It can be understood that the specific form of the electronic device 800 is not limited to a mobile phone.
  • the housing 801 can serve as a mounting carrier for functional elements of the electronic device 800.
  • the housing 801 may provide protection against dust, drop, water, etc. for the functional element.
  • the functional element may be the display screen 700, the visible light camera 500, the receiver, and the like.
  • the housing 801 includes a main body 803 and a movable bracket 802.
  • driven by a driving device, the movable bracket 802 can move relative to the main body 803; for example, the movable bracket 802 can slide relative to the main body 803, sliding into the main body 803 (as shown in FIG. 4) or out of the main body 803 (as shown in FIG. 3).
  • part of the functional elements (such as the display screen 700) can be installed on the main body 803, and another part of the functional elements (such as the depth camera 400, the visible light camera 500, and the receiver) can be installed on the movable bracket 802; movement of the movable bracket 802 can drive this latter part of the functional elements to retract into or extend from the main body 803.
  • the depth camera 400 is mounted on the housing 801. Specifically, an acquisition window may be opened on the housing 801, and the depth camera 400 is aligned with the acquisition window to enable the depth camera 400 to acquire depth information.
  • the depth camera 400 is installed on the movable bracket 802.
  • when the user needs to use the depth camera 400, the user can trigger the movable bracket 802 to slide out of the main body 803 to drive the depth camera 400 to extend from the main body 803; when the depth camera 400 is not needed, the user can trigger the movable bracket 802 to slide into the main body 803 to drive the depth camera 400 to retract into the main body 803.
  • the depth image may be collected by the depth camera 400, and the initial visible light image may be collected by the visible light camera 500 (such as an RGB camera, etc.).
  • the depth image indicates the depth information of each object in the scene
  • the visible light image indicates the color information of each object in the scene.
  • after the processor 300 controls the depth camera 400 to collect the depth image and controls the visible light camera 500 to collect the initial visible light image, it further recognizes whether there is a human face in the initial visible light image according to a face recognition algorithm.
  • the processor 300 may use the Haar feature or the LBP feature to identify whether there is a human face in the initial visible light image.
  • the Haar feature is a rectangular feature that includes a white area and a black area, and can reflect the grayscale change of the image.
  • the Haar feature set includes multiple rectangular feature templates; by changing the size and position of a rectangular feature template, a large number of rectangular features can be enumerated within the sub-windows of an image.
  • in the face detection process, a frame of the initial visible light image is first divided into multiple sub-windows, and suitable rectangular features are sought for each sub-window, so that the feature information of each sub-window is represented by rectangular features.
  • multiple rectangular features are used to describe each sub-window.
  • after the feature information of each sub-window has been described with multiple rectangular features, a pre-trained face classifier performs detection and judgment on each sub-window according to the rectangular features corresponding to that sub-window, so as to determine whether the sub-window is part of a face region. Finally, the sub-windows judged to belong to the face region are grouped to obtain the face region in the initial visible light image. In this way, if every sub-window in the initial visible light image is classified as part of a non-face region by the trained face classifier, no face region can be obtained in the initial visible light image, indicating that there is no face in the initial visible light image.
  • the LBP (Local Binary Pattern) feature is an operator used to describe the local texture features of an image. It has significant advantages of rotation invariance and gray invariance.
  • the LBP feature is defined within a window of adjustable size: taking the center pixel of the window as the threshold, the gray values of the neighboring pixels are compared with the gray value of the center pixel; if the gray value of a surrounding pixel is greater than that of the center pixel, the position of that pixel is marked as 1, otherwise it is marked as 0. In this way, the LBP value of the center pixel of each window reflects the texture information of the area where the window is located. In the face recognition process, the LBP value of each pixel in the initial visible light image is calculated first.
  • since each pixel corresponds to a window when it serves as the center pixel, after the LBP value of each pixel has been calculated, a statistical histogram of the LBP values of each window is computed, and the histograms of the multiple windows are then concatenated into one feature vector, yielding the LBP texture feature vector of the initial visible light image. Finally, a support vector machine (SVM) can make a judgment based on the LBP texture feature vector of the initial visible light image to determine whether there is a human face in the initial visible light image.
  • when the depth camera 400 is a front camera or a rear camera, if there is a human face in the initial visible light image, the processor 300 calculates the distance between the person and the structured light projector 100 according to the initial visible light image and the depth image, and adjusts the luminous power of the structured light projector 100 according to the distance, so as to avoid the problem that an excessively high luminous power of the structured light projector 100 damages the user's eyes.
  • when the depth camera 400 is a front camera, if there is no human face in the initial visible light image, the depth camera 400 is considered not to be in use; in this case, the processor 300 turns off the depth camera 400 to reduce the power consumption of the electronic device 800.
  • the control method of the embodiments of the present application can adjust the luminous power of the depth camera 400 according to whether a human face is present and, when a face is present, adjust the luminous power of the structured light projector 100 according to the distance between the face and the structured light projector 100, thereby avoiding the problem that an excessively high luminous power of the structured light projector 100 damages the user's eyes and improving the safety of the electronic device 800 in use.
  • obtaining the depth image and the initial visible light image of the scene in step 01 includes:
  • 011: Control the structured light projector 100 to project a laser pattern onto the scene at the initial luminous power;
  • 012: Acquire the laser pattern modulated by the scene; and
  • 013: Process the laser pattern to obtain a depth image.
  • the acquisition module 11 includes a first control unit 111, a first acquisition unit 112, and a processing unit 113.
  • Step 011 may be implemented by the first control unit 111.
  • Step 012 may be implemented by the first acquiring unit 112.
  • Step 013 may be implemented by the processing unit 113. That is to say, the first control unit 111 can be used to control the structured light projector 100 to project the laser pattern onto the scene with the initial light emitting power.
  • the first acquiring unit 112 may be used to acquire the laser pattern modulated by the scene.
  • the processing unit 113 may be used to process the laser pattern to obtain a depth image.
  • step 011, step 012, and step 013 may be implemented by the processor 300. That is to say, the processor 300 can also be used to control the structured light projector 100 to project a laser pattern to the scene with the initial luminous power, acquire the laser pattern modulated by the scene, and process the laser pattern to obtain a depth image.
  • the initial luminous power may be obtained beforehand through calibration in multiple experiments and stored in the memory 600 of the electronic device 800.
  • when the structured light projector 100 emits structured light at the initial luminous power, no matter how close the user is to the structured light projector 100, the energy of the laser is guaranteed not to harm the user's eyes.
  • after the structured light projector 100 is turned on, it can project a laser pattern with multiple spots onto the scene. Since the distances between the objects in the scene and the structured light projector 100 differ, the laser pattern projected onto an object is modulated by the height variations of the object's surface, and the speckles in the laser pattern are shifted.
  • the structured light camera 200 collects the laser pattern after speckle shift, that is, the laser pattern modulated by the object.
  • a reference pattern is stored in the memory 600, and the processor 300 can calculate the depth data of multiple pixels according to the offsets of the spots in the laser pattern relative to the spots in the reference pattern; multiple pixels carrying depth data then form one frame of depth image.
  • the control method of the embodiment of the present application can measure the depth data of the object in the scene by the depth camera 400 to form a depth image.
  • moreover, the structured light projector 100 projects the laser pattern at a relatively low initial luminous power; therefore, it does not harm the user's eyes, and the user can use the electronic device 800 with high safety.
  • the initial visible light image has a first resolution and the depth image has a third resolution.
  • Step 03, calculating the distance between the human face and the structured light projector 100 according to the initial visible light image and the depth image when there is a human face in the initial visible light image, includes:
  • 031: Convert the initial visible light image into an intermediate visible light image with a second resolution, the first resolution being greater than the second resolution;
  • 032: Identify the visible light face detection area in the intermediate visible light image;
  • 033: Obtain a depth face detection area corresponding to the visible light face detection area in the depth image according to the mapping relationship between the intermediate visible light image and the depth image; and
  • 034: Select the depth data with the smallest value in the depth face detection area as the distance between the face and the structured light projector 100.
  • step 033 further includes:
  • 0331: Calculate the ratio of the third resolution to the second resolution to obtain a mapping ratio;
  • 0332: Determine a second origin pixel in the depth image according to the coordinate value of a first origin pixel in the visible light face detection area and the mapping ratio;
  • 0333: Obtain a second width and a second height of the depth face detection area according to a first width and a first height of the visible light face detection area and the mapping ratio; and
  • 0334: Obtain the depth face detection area according to the second origin pixel, the second width, and the second height.
  • the calculation module 13 includes a conversion unit 131, an identification unit 132, a second acquisition unit 133, and a selection unit 134.
  • the second acquisition unit 133 includes a calculation subunit 1331, a determination subunit 1332, a first acquisition subunit 1333, and a second acquisition subunit 1334.
  • Step 031 may be implemented by the conversion unit 131.
  • Step 032 can be implemented by the recognition unit 132.
  • Step 033 may be implemented by the second obtaining unit 133.
  • Step 034 can be implemented by the selection unit 134.
  • Step 0331 can be implemented by the calculation subunit 1331.
  • Step 0332 can be implemented by the determination subunit 1332.
  • Step 0333 may be implemented by the first obtaining subunit 1333.
  • Step 0334 may be implemented by the second obtaining subunit 1334.
  • the conversion unit 131 can be used to convert the initial visible light image into an intermediate visible light image with a second resolution, the first resolution being greater than the second resolution.
  • the recognition unit 132 may be used to recognize the visible light face detection area in the intermediate visible light image.
  • the second acquisition unit 133 may be used to acquire a depth face detection area corresponding to the visible light face detection area in the depth image according to the mapping relationship between the intermediate visible light image and the depth image.
  • the selecting unit 134 may be used to select the depth data with the smallest value in the depth face detection area as the distance between the face and the structured light projector 100.
  • the calculation subunit 1331 may be used to calculate the ratio of the third resolution to the second resolution to obtain the mapping ratio.
  • the determining sub-unit 1332 can be used to determine the second origin pixel in the depth image according to the coordinate value of the first origin pixel in the visible light face detection area and the mapping ratio.
  • the first obtaining subunit 1333 may be used to obtain the second width and the second height of the deep face detection area according to the first width and the first height of the visible face detection area and the mapping ratio.
  • the second acquisition subunit 1334 may be used to acquire the depth face detection area according to the second origin pixel, the second width, and the second height.
  • step 031, step 032, step 033, step 0331, step 0332, step 0333, step 0334, and step 034 may all be implemented by the processor 300. That is to say, the processor 300 can be used to convert the initial visible light image into an intermediate visible light image with the second resolution, identify the visible light face detection area in the intermediate visible light image, obtain the depth face detection area corresponding to the visible light face detection area in the depth image according to the mapping relationship between the intermediate visible light image and the depth image, and select the depth data with the smallest value in the depth face detection area as the distance between the face and the structured light projector 100.
  • when executing step 033, the processor 300 specifically calculates the ratio of the third resolution to the second resolution to obtain the mapping ratio, determines the second origin pixel in the depth image according to the coordinate value of the first origin pixel in the visible light face detection area and the mapping ratio, obtains the second width and the second height of the depth face detection area according to the first width and the first height of the visible light face detection area and the mapping ratio, and obtains the depth face detection area according to the second origin pixel, the second width, and the second height.
  • the first resolution refers to the native resolution at which the initial visible light image is captured.
  • generally, the first resolution of the initial visible light image is relatively high.
  • the third resolution of the depth image is usually smaller than the first resolution of the initial visible light image.
  • in order to facilitate finding the depth data of the face in the depth image and to reduce the amount of data that the processor 300 needs to process, the processor 300 first compresses the initial visible light image of the first resolution into an intermediate visible light image with the second resolution, for example, compressing the initial visible light image of the first resolution into an intermediate visible light image with a second resolution of 640×480.
  • the processor 300 recognizes the visible light face detection area in the intermediate visible light image.
  • the face region recognized by the processor 300 refers to a region containing the forehead, eyebrows, eyes, cheeks, ears, nose, mouth, chin, and other parts.
  • after recognizing the face region, the processor 300 needs to crop it to obtain the visible light face detection area.
  • specifically, the processor 300 further recognizes the eyebrows and the chin in the face region, and crops out the visible light face detection area with the eyebrows and the chin as boundaries.
  • the visible light face detection area may be rectangular, square, circular, etc., which is not limited herein. In the specific embodiments of the present application, the visible light face detection area is rectangular.
  • the processor 300 takes the pixel in the upper left corner of the visible light face detection area as the first origin pixel, whose coordinates are denoted (left, top); the value of (left, top) corresponds to the pixel coordinates of the first origin pixel in the pixel coordinate system of the initial visible light image.
  • the processor 300 then finds the pixel coordinates (right, bottom) of the pixel in the lower right corner of the visible light face detection area, and calculates the first width "width" and the first height "height" of the visible light face detection area from the coordinates (left, top) and (right, bottom). Finally, the processor 300 uses (left, top, width, height) to define the position and size of the visible light face detection area in the intermediate visible light image.
  • subsequently, the processor 300 can find, according to the visible light face detection area, the depth face detection area corresponding to it in the depth image. Taking a depth image whose third resolution is 640×400 as an example, the mapping ratio equals the third resolution divided by the second resolution, i.e., (640×400)/(640×480) = 400/480 = 5/6; the processor 300 computes the second origin pixel as left' = (left×5)/6 and top' = (top×5)/6, the second width as width' = (width×5)/6, and the second height as height' = (height×5)/6.
  • the processor 300 may then use (left', top', width', height') to determine the position and size of the depth face detection area in the depth image.
  • the processor 300 selects the depth data with the smallest value from the depth data in the depth face detection area as the distance between the face and the structured light projector 100.
  • the smallest depth data indicates the distance between the point closest to the structured light projector 100 in the face and the structured light projector 100.
  • compared with adjusting the luminous power of the structured light projector 100 using the median or the mean of the depth data, adjusting it based on the minimum value further ensures that the energy of the projected laser will not be too high, improving the user's eye safety.
  • on the one hand, the control method of the embodiments of the present application reduces the amount of data that the processor 300 needs to process through resolution compression; on the other hand, according to the ratio between the second resolution and the third resolution, the depth face detection area corresponding to the visible light face detection area can be determined in the depth image to obtain the depth data of the face. In this way, the distance between the face and the structured light projector 100 can be obtained, facilitating the subsequent adjustment of the luminous power of the structured light projector 100.
  • adjusting the luminous power of the structured light projector 100 according to the distance in step 04 includes:
  • 041: When the distance is less than a first preset distance and greater than a second preset distance, obtain the luminous power of the structured light projector 100 according to the distance, the second preset distance being less than the first preset distance;
  • 042: Control the structured light projector 100 to emit light at that luminous power; and
  • 043: When the distance is greater than the first preset distance or less than the second preset distance, control the structured light projector 100 to turn off.
  • the adjustment module 14 includes a third acquisition unit 141, a second control unit 142, and a third control unit 143.
  • Step 041 may be implemented by the third obtaining unit 141.
  • Step 042 may be implemented by the second control unit 142.
  • Step 043 may be implemented by the third control unit 143. That is to say, the third acquisition unit 141 can be used to obtain the luminous power of the structured light projector 100 according to the distance when the distance is less than the first preset distance and greater than the second preset distance, the second preset distance being less than the first preset distance.
  • the second control unit 142 may be used to control the structured light projector 100 to emit light at the luminous power.
  • the third control unit 143 may be used to control the structured light projector 100 to turn off when the distance is greater than the first preset distance or less than the second preset distance.
  • step 041, step 042, and step 043 may be implemented by the processor 300. That is to say, the processor 300 can also be used to obtain the luminous power of the structured light projector 100 according to the distance when the distance is less than the first preset distance and greater than the second preset distance, to control the structured light projector 100 to emit light at that luminous power, and to control the structured light projector 100 to turn off when the distance is greater than the first preset distance or less than the second preset distance. The second preset distance is smaller than the first preset distance.
  • specifically, when the distance between the face and the structured light projector 100 is less than the second preset distance (for example, 2 cm, 3.5 cm, 4 cm, or 5 cm), the user is considered to be too close to the structured light projector 100, and the processor 300 can directly turn off the structured light projector 100 to reduce the harm of the laser to human eyes.
  • in addition, when the distance between the face and the structured light projector 100 is less than the second preset distance, the depth camera 400 usually cannot obtain the depth data of the user's complete face either; directly turning off the structured light projector 100 in this case also reduces the power consumption of the electronic device 800.
  • when the distance between the face and the structured light projector 100 is greater than the second preset distance and less than the first preset distance (for example, 50 cm, 53 cm, 55.9 cm, 58 cm, or 100 cm), the processor 300 adjusts the driving current of the structured light projector 100 according to the distance between the face and the structured light projector 100 so that the structured light projector 100 emits light at the target luminous power, thereby satisfying the accuracy requirements for acquiring depth data while ensuring the user's eye safety.
  • generally, when the distance increases, the luminous power increases accordingly; when the distance decreases, the luminous power decreases accordingly.
  • the correspondence between the distance and the luminous power can be: (1) the distance is a single value and the luminous power is a single value, with distances and luminous powers in one-to-one correspondence; or (2) the distance is a range and the luminous power is a single value, with distance ranges and luminous powers in one-to-one correspondence.
  • the correspondence between distance and luminous power can be obtained beforehand through extensive experimental calibration, and the calibrated correspondence is stored in the memory 600.
  • when the structured light projector 100 is working, the processor 300 looks up, based on the calculated distance, the luminous power corresponding to the current distance in the correspondence stored in the memory 600, and controls the structured light projector 100 to emit light at that luminous power.
  • when the depth camera 400 is a front camera, if the distance between the face and the structured light projector 100 is greater than the first preset distance, it is considered that the user is not using the structured light projector 100 at that moment, and the processor 300 can directly turn off the structured light projector 100 to reduce the power consumption of the electronic device 800.
  • the light source 101 of the structured light projector 100 includes a plurality of light-emitting regions 102, and each light-emitting region 102 can be independently controlled.
  • when the distance between the face and the structured light projector 100 is greater than the second preset distance and less than the first preset distance, in addition to adjusting the driving current of the structured light projector 100 so that it emits light at the target luminous power, the processor 300 can also control the light source 101 to turn on different numbers of light-emitting regions 102 so that the structured light projector 100 emits light at the target luminous power.
  • for example, when the distance is 10 cm, two light-emitting regions 102 of the light source 101 are turned on; when the distance is 20 cm, four light-emitting regions 102 of the light source 101 are turned on, and so on.
  • the shape of the light-emitting region 102 may be a sector (as shown in FIG. 13), a ring, a rectangular loop, etc., which is not limited herein.
  • when not all of the light-emitting regions 102 are turned on, the regions 102 that are turned on are distributed centrosymmetrically around the center point of the light source 101 (as shown in FIG. 13), thereby improving the uniformity of the brightness of the laser emitted by the structured light projector 100 and the accuracy of the acquired depth data.
  • the terms "first" and "second" are used for description purposes only, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature.
  • the meaning of "plurality” is at least two, such as two, three, etc., unless otherwise specifically limited.
  • Any process or method description in a flowchart or otherwise described herein may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved; this should be understood by those skilled in the art to which the embodiments of the present application belong.
  • a "computer-readable medium” may be any device that can contain, store, communicate, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • more specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CDROM).
  • the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example, by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it as necessary, and then stored in a computer memory.
  • each part of the present application may be implemented by hardware, software, firmware, or a combination thereof.
  • multiple steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system.
  • for example, if implemented in hardware, as in another embodiment, any one of the following technologies known in the art, or a combination thereof, may be used: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
  • a person of ordinary skill in the art can understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing relevant hardware through a program.
  • the program can be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
  • each functional unit in each embodiment of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or software function modules. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
  • the storage medium mentioned above may be a read-only memory, a magnetic disk or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Optics & Photonics (AREA)
  • Electromagnetism (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Controls And Circuits For Display Device (AREA)
  • Studio Devices (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A control method, a control device (10), a depth camera (400), and an electronic device (800). The control method comprises: acquiring a depth image and an initial visible light image of a scene; determining whether a human face exists in the initial visible light image; when a human face exists in the initial visible light image, calculating the distance between the human face and a structured light projector according to the initial visible light image and the depth image; and adjusting the luminous power of the structured light projector according to the distance.

Description

Control method, control device, depth camera and electronic device
Priority Information
This application claims priority to and the benefit of Chinese Patent Application No. 201811180890.3, filed with the China National Intellectual Property Administration on October 9, 2018, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of three-dimensional imaging, and in particular to a control method, a control device, a depth camera, and an electronic device.
Background
A structured light projector can project laser light carrying predetermined pattern information onto a target user located in space, and an image collector then captures the laser pattern reflected by the target user so as to obtain a depth image of the target user. However, the laser emitted by the structured light projector is usually an infrared laser, and when the energy of the infrared laser is too high, it is liable to damage the user's eyes.
Summary
Embodiments of the present application provide a control method, a control device, a depth camera, and an electronic device.
The control method of a structured light projector according to embodiments of the present application includes: acquiring a depth image and an initial visible light image of a scene; determining whether a human face exists in the initial visible light image; when the human face exists in the initial visible light image, calculating the distance between the human face and the structured light projector according to the initial visible light image and the depth image; and adjusting the luminous power of the structured light projector according to the distance.
The control device according to embodiments of the present application includes an acquisition module, a judgment module, a calculation module, and an adjustment module. The acquisition module is configured to acquire a depth image and an initial visible light image of a scene. The judgment module is configured to determine whether a human face exists in the initial visible light image. The calculation module is configured to, when the human face exists in the initial visible light image, calculate the distance between the human face and the structured light projector according to the initial visible light image and the depth image. The adjustment module is configured to adjust the luminous power of the structured light projector according to the distance.
The depth camera according to embodiments of the present application includes a structured light projector and a processor. The processor is configured to: acquire a depth image and an initial visible light image of a scene; determine whether a human face exists in the initial visible light image; when the human face exists in the initial visible light image, calculate the distance between the human face and the structured light projector according to the initial visible light image and the depth image; and adjust the luminous power of the structured light projector according to the distance.
The electronic device according to embodiments of the present application includes a housing and a depth camera. The depth camera is provided on the housing. The depth camera includes a structured light projector and a processor. The processor is configured to: acquire a depth image and an initial visible light image of a scene; determine whether a human face exists in the initial visible light image; when the human face exists in the initial visible light image, calculate the distance between the human face and the structured light projector according to the initial visible light image and the depth image; and adjust the luminous power of the structured light projector according to the distance.
Additional aspects and advantages of the present application will be set forth in part in the following description, will become apparent in part from the following description, or will be learned through practice of the present application.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present application will become apparent and easily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of a control method according to some embodiments of the present application.
FIG. 2 is a schematic block diagram of a control device according to some embodiments of the present application.
FIG. 3 and FIG. 4 are schematic structural diagrams of an electronic device according to some embodiments of the present application.
FIG. 5 is a schematic flowchart of a control method according to some embodiments of the present application.
FIG. 6 is a schematic block diagram of an acquisition module of a control device according to some embodiments of the present application.
FIG. 7 is a schematic flowchart of a control method according to some embodiments of the present application.
FIG. 8 is a schematic flowchart of a control method according to some embodiments of the present application.
FIG. 9 is a schematic block diagram of a calculation module of a control device according to some embodiments of the present application.
FIG. 10 is a schematic block diagram of a second acquisition unit of a control device according to some embodiments of the present application.
FIG. 11 is a schematic flowchart of a control method according to some embodiments of the present application.
FIG. 12 is a schematic block diagram of a control device according to some embodiments of the present application.
FIG. 13 is a schematic diagram of the light source partitions of a structured light projector according to some embodiments of the present application.
Detailed Description
Embodiments of the present application are described in detail below. Examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the accompanying drawings are exemplary, are intended to explain the present application, and should not be construed as limiting the present application.
Referring to FIG. 1 and FIG. 3 together, the present application provides a control method of a structured light projector 100. The control method includes:
01: acquiring a depth image and an initial visible light image of a scene;
02: determining whether a human face exists in the initial visible light image;
03: when a human face exists in the initial visible light image, calculating the distance between the human face and the structured light projector 100 according to the initial visible light image and the depth image; and
04: adjusting the luminous power of the structured light projector 100 according to the distance.
Referring to FIG. 2 and FIG. 3 together, the present application also provides a control device 10 of the structured light projector 100. The control method of the embodiments of the present application may be implemented by the control device 10 of the embodiments of the present application. The control device 10 includes an acquisition module 11, a judgment module 12, a calculation module 13, and an adjustment module 14. Step 01 may be implemented by the acquisition module 11, step 02 by the judgment module 12, step 03 by the calculation module 13, and step 04 by the adjustment module 14.
That is to say, the acquisition module 11 may be used to acquire the depth image and the initial visible light image of the scene. The judgment module 12 may be used to determine whether a human face exists in the initial visible light image. The calculation module 13 may be used to calculate the distance between the human face and the structured light projector 100 according to the initial visible light image and the depth image when a human face exists in the initial visible light image. The adjustment module 14 may be used to adjust the luminous power of the structured light projector 100 according to the distance.
Referring to FIG. 3, the present application also provides a depth camera 400. The depth camera 400 includes the structured light projector 100, a structured light camera 200, and a processor 300. Steps 01, 02, 03, and 04 may all be implemented by the processor 300. That is to say, the processor 300 may be used to acquire the depth image and the initial visible light image of the scene, determine whether a human face exists in the initial visible light image, calculate the distance between the human face and the structured light projector 100 according to the initial visible light image and the depth image when a human face exists in the initial visible light image, and adjust the luminous power of the structured light projector 100 according to the distance.
Referring to FIG. 3 and FIG. 4 together, the present application also provides an electronic device 800. The electronic device 800 includes a housing 801 and the depth camera 400 described above. The depth camera 400 is provided on the housing 801. The processor 300 in the depth camera 400 may be a processor 300 integrated separately in the depth camera 400; alternatively, the depth camera 400 may share one processor with the electronic device 800, in which case the processor is independent of the depth camera 400 and integrated in the electronic device 800. In the specific embodiments of the present application, the depth camera 400 and the electronic device 800 share one processor. The electronic device 800 may be a mobile phone, a tablet computer, a game console, a smart watch, a smart bracelet, a head-mounted display device, a drone, or the like. The embodiments of the present application are described by taking the electronic device 800 being a mobile phone as an example; it can be understood that the specific form of the electronic device 800 is not limited to a mobile phone.
The housing 801 can serve as a mounting carrier for the functional elements of the electronic device 800. The housing 801 can provide the functional elements with protection against dust, falls, water, and the like; the functional elements may be the display screen 700, the visible light camera 500, a receiver, and so on. In the embodiments of the present application, the housing 801 includes a main body 803 and a movable bracket 802. Driven by a driving device, the movable bracket 802 can move relative to the main body 803; for example, the movable bracket 802 can slide relative to the main body 803, sliding into the main body 803 (as shown in FIG. 4) or out of the main body 803 (as shown in FIG. 3). Part of the functional elements (for example, the display screen 700) may be installed on the main body 803, and another part of the functional elements (for example, the depth camera 400, the visible light camera 500, and the receiver) may be installed on the movable bracket 802; movement of the movable bracket 802 drives this latter part of the functional elements to retract into, or extend from, the main body 803. Of course, FIG. 3 and FIG. 4 merely illustrate one specific form of the housing 801 and should not be understood as limiting the housing 801 of the present application.
The depth camera 400 is mounted on the housing 801. Specifically, an acquisition window may be opened in the housing 801, and the depth camera 400 is installed in alignment with the acquisition window so that the depth camera 400 can collect depth information. In the specific embodiments of the present application, the depth camera 400 is installed on the movable bracket 802. When the user needs to use the depth camera 400, the user can trigger the movable bracket 802 to slide out of the main body 803 so as to drive the depth camera 400 to extend from the main body 803; when the depth camera 400 is not needed, the user can trigger the movable bracket 802 to slide into the main body 803 so as to drive the depth camera 400 to retract into the main body 803.
In the control method of the structured light projector 100 according to the embodiments of the present application, the depth image may be collected by the depth camera 400, and the initial visible light image may be collected by the visible light camera 500 (such as an RGB camera). The depth image indicates the depth information of the objects in the scene, and the visible light image indicates the color information of the objects in the scene.
After the processor 300 controls the depth camera 400 to collect the depth image and controls the visible light camera 500 to collect the initial visible light image, it further recognizes, according to a face recognition algorithm, whether a human face exists in the initial visible light image. For example, the processor 300 may use Haar features or LBP features to recognize whether a human face exists in the initial visible light image.
Specifically, a Haar feature is a rectangular feature containing a white area and a black area, and can reflect the grayscale variations of an image. Haar features comprise multiple rectangular feature templates, and by changing the size and position of a rectangular feature template, a large number of rectangular features can be enumerated within the sub-windows of an image. In the face detection process, a frame of the initial visible light image is first divided into multiple sub-windows, and suitable rectangular features are sought for each sub-window so that the feature information of each sub-window is represented by rectangular features; multiple rectangular features are used to describe one sub-window. After the feature information of each sub-window has been described with multiple rectangular features, a pre-trained face classifier performs detection and judgment on each sub-window according to the rectangular features corresponding to that sub-window, so as to determine whether the sub-window is part of a face region. Finally, the sub-windows judged to belong to the face region are grouped together to obtain the face region in the initial visible light image. In this way, if every sub-window in the initial visible light image is classified as part of a non-face region by the trained face classifier, no face region can be obtained in the initial visible light image, which indicates that no face exists in the initial visible light image.
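For illustration only (the original disclosure contains no code), a minimal sketch of Haar-feature face detection as described above, using OpenCV's pre-trained frontal-face cascade; the library, cascade file, and tuning parameters are assumptions of this sketch rather than requirements of the method:

```python
import cv2

# Load a pre-trained frontal-face Haar cascade bundled with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_regions(initial_visible_image):
    """Return the face rectangles (x, y, w, h) detected in a BGR image."""
    gray = cv2.cvtColor(initial_visible_image, cv2.COLOR_BGR2GRAY)
    # detectMultiScale slides sub-windows over the image and evaluates the
    # trained classifier on the rectangular (Haar) features of each window.
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# A face is considered present when at least one sub-window survives
# every stage of the cascade classifier:
# face_exists = len(face_regions(image)) > 0
```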
The LBP (Local Binary Pattern) feature is an operator used to describe the local texture features of an image, and it has the notable advantages of rotation invariance and grayscale invariance. The LBP feature is defined within a window of adjustable size: taking the center pixel of the window as the threshold, the gray values of the neighboring pixels are compared with the gray value of the center pixel; if the gray value of a surrounding pixel is greater than that of the center pixel, the position of that pixel is marked as 1, otherwise it is marked as 0. In this way, the LBP value of the center pixel of each window reflects the texture information of the area where the window is located. In the face recognition process, the LBP value of each pixel in the initial visible light image is calculated first. Since each pixel corresponds to a window when it serves as the center pixel, after the LBP value of each pixel has been calculated, a statistical histogram of the LBP values of each window is computed, and the statistical histograms of the multiple windows are then concatenated into one feature vector, yielding the LBP texture feature vector of the initial visible light image. Finally, a support vector machine (SVM) can make a judgment based on the LBP texture feature vector of the initial visible light image to determine whether a human face exists in the initial visible light image.
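Likewise for illustration only, a minimal sketch of the LBP computation described above (basic 3×3 LBP, per-window histograms concatenated into a feature vector); the window grid and helper names are assumptions of this sketch, and the resulting vector would be passed to a separately trained SVM:

```python
import numpy as np

def lbp_image(gray: np.ndarray) -> np.ndarray:
    """Basic 3x3 LBP: threshold the 8 neighbors of each pixel against the center."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = gray[1:-1, 1:-1]
    # The 8 neighbor offsets, ordered so that each contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = gray[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]
        out |= (neighbor > center).astype(np.uint8) << bit
    return out

def lbp_feature_vector(gray: np.ndarray, grid=(8, 8)) -> np.ndarray:
    """Histogram the LBP map per window and concatenate the histograms."""
    lbp = lbp_image(gray)
    hists = []
    for rows in np.array_split(lbp, grid[0], axis=0):
        for cell in np.array_split(rows, grid[1], axis=1):
            hist, _ = np.histogram(cell, bins=256, range=(0, 256))
            hists.append(hist)
    return np.concatenate(hists).astype(np.float32)

# The concatenated histogram vector is the LBP texture feature vector;
# a trained SVM (e.g. sklearn.svm.SVC) would then classify face / non-face.
```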
When the depth camera 400 is a front camera or a rear camera, if a human face exists in the initial visible light image, the processor 300 calculates the distance between the person and the structured light projector 100 according to the initial visible light image and the depth image, and adjusts the luminous power of the structured light projector 100 according to the distance, thereby avoiding the problem that an excessively high luminous power of the structured light projector 100 damages the user's eyes.
When the depth camera 400 is a front camera, if no human face exists in the initial visible light image, the depth camera 400 is considered not to be in use. In this case, the processor 300 turns off the depth camera 400 to reduce the power consumption of the electronic device 800.
The control method of the embodiments of the present application can thus adjust the luminous power of the depth camera 400 according to whether a human face is present and, when a face is present, adjust the luminous power of the structured light projector 100 according to the distance between the face and the structured light projector 100, thereby avoiding the problem that an excessively high luminous power of the structured light projector 100 damages the user's eyes and improving the safety of the electronic device 800 in use.
Referring to FIG. 3 and FIG. 5 together, in some embodiments, acquiring the depth image and the initial visible light image of the scene in step 01 includes:
011: controlling the structured light projector 100 to project a laser pattern onto the scene at an initial luminous power;
012: acquiring the laser pattern modulated by the scene; and
013: processing the laser pattern to obtain the depth image.
Referring to FIG. 3 and FIG. 6 together, in some embodiments, the acquisition module 11 includes a first control unit 111, a first acquisition unit 112, and a processing unit 113. Step 011 may be implemented by the first control unit 111, step 012 by the first acquisition unit 112, and step 013 by the processing unit 113. That is to say, the first control unit 111 may be used to control the structured light projector 100 to project a laser pattern onto the scene at the initial luminous power, the first acquisition unit 112 may be used to acquire the laser pattern modulated by the scene, and the processing unit 113 may be used to process the laser pattern to obtain the depth image.
Referring again to FIG. 3, in some embodiments, steps 011, 012, and 013 may all be implemented by the processor 300. That is to say, the processor 300 may also be used to control the structured light projector 100 to project a laser pattern onto the scene at the initial luminous power, acquire the laser pattern modulated by the scene, and process the laser pattern to obtain the depth image.
The initial luminous power may be obtained beforehand through calibration in multiple experiments and stored in the memory 600 of the electronic device 800. When the structured light projector 100 emits structured light at the initial luminous power, no matter how close the user is to the structured light projector 100, the energy of the laser is guaranteed not to harm the user's eyes.
After the structured light projector 100 is turned on, it can project a laser pattern carrying multiple spots onto the scene. Since the distances between the objects in the scene and the structured light projector 100 differ, the laser pattern projected onto an object is modulated by the height variations of the object's surface, and the speckles in the laser pattern are shifted. The structured light camera 200 collects the laser pattern after the speckle shift, that is, the laser pattern modulated by the objects. A reference pattern is stored in the memory 600, and the processor 300 can calculate depth data for multiple pixels according to the offsets of the spots in the laser pattern relative to the corresponding spots in the reference pattern; multiple pixels carrying depth data then constitute one frame of depth image.
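For illustration only, one common way to turn a speckle offset into depth is reference-plane triangulation; the formula below is a standard structured-light model and an assumption of this sketch, since the disclosure itself only states that depth is computed from the offsets against the reference pattern (f is the focal length in pixels, b the projector-camera baseline, and Z_ref the calibration distance of the reference plane):

```python
def depth_from_offset(d_pixels: float, f_pixels: float,
                      baseline_m: float, z_ref_m: float) -> float:
    """Reference-plane triangulation: a spot shifted by d pixels relative to
    its position in the reference pattern satisfies 1/Z = 1/Z_ref + d/(f*b).
    The sign of d depends on the chosen shift-direction convention."""
    return 1.0 / (1.0 / z_ref_m + d_pixels / (f_pixels * baseline_m))

# Example: f = 600 px, b = 0.05 m, Z_ref = 1.0 m, d = 10 px  ->  Z = 0.75 m.
```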
In this way, the control method of the embodiments of the present application can measure the depth data of the objects in the scene by means of the depth camera 400 to form a depth image. Moreover, since the structured light projector 100 projects the laser pattern at a relatively low initial luminous power, it does not harm the user's eyes, and the user can use the electronic device 800 with high safety.
Referring to FIG. 3, FIG. 7, and FIG. 8 together, in some embodiments, the initial visible light image has a first resolution and the depth image has a third resolution. Calculating the distance between the human face and the structured light projector 100 according to the initial visible light image and the depth image in step 03, when a human face exists in the initial visible light image, includes:
031: converting the initial visible light image into an intermediate visible light image with a second resolution, the first resolution being greater than the second resolution;
032: identifying the visible light face detection area in the intermediate visible light image;
033: obtaining a depth face detection area corresponding to the visible light face detection area in the depth image according to the mapping relationship between the intermediate visible light image and the depth image; and
034: selecting the depth data with the smallest value in the depth face detection area as the distance between the face and the structured light projector 100.
Step 033 further includes:
0331: calculating the ratio of the third resolution to the second resolution to obtain a mapping ratio;
0332: determining a second origin pixel in the depth image according to the coordinate value of a first origin pixel in the visible light face detection area and the mapping ratio;
0333: obtaining a second width and a second height of the depth face detection area according to a first width and a first height of the visible light face detection area and the mapping ratio; and
0334: obtaining the depth face detection area according to the second origin pixel, the second width, and the second height.
Referring to FIG. 3, FIG. 9, and FIG. 10 together, in some embodiments, the calculation module 13 includes a conversion unit 131, an identification unit 132, a second acquisition unit 133, and a selection unit 134. The second acquisition unit 133 includes a calculation subunit 1331, a determination subunit 1332, a first acquisition subunit 1333, and a second acquisition subunit 1334. Step 031 may be implemented by the conversion unit 131, step 032 by the identification unit 132, step 033 by the second acquisition unit 133, and step 034 by the selection unit 134. Step 0331 may be implemented by the calculation subunit 1331, step 0332 by the determination subunit 1332, step 0333 by the first acquisition subunit 1333, and step 0334 by the second acquisition subunit 1334.
That is to say, the conversion unit 131 may be used to convert the initial visible light image into an intermediate visible light image with the second resolution, the first resolution being greater than the second resolution. The identification unit 132 may be used to identify the visible light face detection area in the intermediate visible light image. The second acquisition unit 133 may be used to obtain the depth face detection area corresponding to the visible light face detection area in the depth image according to the mapping relationship between the intermediate visible light image and the depth image. The selection unit 134 may be used to select the depth data with the smallest value in the depth face detection area as the distance between the face and the structured light projector 100. The calculation subunit 1331 may be used to calculate the ratio of the third resolution to the second resolution to obtain the mapping ratio. The determination subunit 1332 may be used to determine the second origin pixel in the depth image according to the coordinate value of the first origin pixel in the visible light face detection area and the mapping ratio. The first acquisition subunit 1333 may be used to obtain the second width and the second height of the depth face detection area according to the first width and the first height of the visible light face detection area and the mapping ratio. The second acquisition subunit 1334 may be used to obtain the depth face detection area according to the second origin pixel, the second width, and the second height.
Referring again to FIG. 3, in some embodiments, steps 031, 032, 033, 0331, 0332, 0333, 0334, and 034 may all be implemented by the processor 300. That is to say, the processor 300 may be used to convert the initial visible light image into an intermediate visible light image with the second resolution, identify the visible light face detection area in the intermediate visible light image, obtain the depth face detection area corresponding to the visible light face detection area in the depth image according to the mapping relationship between the intermediate visible light image and the depth image, and select the depth data with the smallest value in the depth face detection area as the distance between the face and the structured light projector 100. When executing step 033, the processor 300 specifically calculates the ratio of the third resolution to the second resolution to obtain the mapping ratio, determines the second origin pixel in the depth image according to the coordinate value of the first origin pixel in the visible light face detection area and the mapping ratio, obtains the second width and the second height of the depth face detection area according to the first width and the first height of the visible light face detection area and the mapping ratio, and obtains the depth face detection area according to the second origin pixel, the second width, and the second height.
Specifically, the first resolution refers to the native resolution at which the initial visible light image is captured. Generally, the first resolution of the initial visible light image is relatively high, while the third resolution of the depth image is usually smaller than the first resolution of the initial visible light image. To make it easier to find the depth data of the face in the depth image and to reduce the amount of data the processor 300 needs to process, the processor 300 first compresses the initial visible light image of the first resolution into an intermediate visible light image with the second resolution, for example, compressing the initial visible light image of the first resolution into an intermediate visible light image with a second resolution of 640×480. Subsequently, the processor 300 identifies the visible light face detection area in the intermediate visible light image. Generally, the face region recognized by the processor 300 refers to a region containing the forehead, eyebrows, eyes, cheeks, ears, nose, mouth, chin, and other parts; after recognizing the face region, the processor 300 needs to crop it to obtain the visible light face detection area. Specifically, the processor 300 further recognizes the eyebrows and the chin in the face region, and crops out the visible light face detection area with the eyebrows and the chin as boundaries. The visible light face detection area may be rectangular, square, circular, and so on, which is not limited herein; in the specific embodiments of the present application, the visible light face detection area is rectangular. The processor 300 takes the pixel in the upper left corner of the visible light face detection area as the first origin pixel, whose coordinates are denoted (left, top); the value of (left, top) corresponds to the pixel coordinates of the first origin pixel in the pixel coordinate system of the initial visible light image. The processor 300 then finds the pixel coordinates (right, bottom) of the pixel in the lower right corner of the visible light face detection area, and calculates the first width "width" and the first height "height" of the visible light face detection area from the coordinates (left, top) and (right, bottom). Finally, the processor 300 uses (left, top, width, height) to define the position and size of the visible light face detection area in the intermediate visible light image.
Subsequently, the processor 300 can find, according to the visible light face detection area, the depth face detection area corresponding to it in the depth image. Taking a depth image whose third resolution is 640×400 as an example, the processor 300 first calculates the mapping ratio between the visible light face detection area and the depth face detection area, where mapping ratio = third resolution / second resolution = (640×400)/(640×480) = 400/480 = 5/6. The processor 300 then calculates, from the mapping ratio and the coordinates (left, top) of the first origin pixel, the coordinates (left', top') of the upper-left pixel of the depth face detection area (that is, the second origin pixel) in the depth image, where left' = (left×5)/6 and top' = (top×5)/6; in this way, the processor 300 can determine the position of the second origin pixel from the coordinates (left', top'). The processor 300 then calculates the second width width' of the depth face detection area from the first width width of the visible light face detection area and the mapping ratio, i.e., width' = (width×5)/6, and the second height height' from the first height height and the mapping ratio, i.e., height' = (height×5)/6. In this way, the processor 300 can use (left', top', width', height') to determine the position and size of the depth face detection area in the depth image.
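For illustration only, a minimal sketch of the region mapping just described, using the 5/6 mapping ratio of the 640×480 / 640×400 example; the function name and the use of integer division are assumptions of this sketch:

```python
def map_face_region(left: int, top: int, width: int, height: int,
                    ratio_num: int = 5, ratio_den: int = 6):
    """Map the visible light face detection area (left, top, width, height)
    into the depth image using the mapping ratio ratio_num/ratio_den,
    e.g. (640x400)/(640x480) = 5/6."""
    return ((left * ratio_num) // ratio_den,
            (top * ratio_num) // ratio_den,
            (width * ratio_num) // ratio_den,
            (height * ratio_num) // ratio_den)

# Example: a face area at (120, 96) of size 180x240 in the 640x480
# intermediate image maps to (100, 80, 150, 200) in the 640x400 depth image.
assert map_face_region(120, 96, 180, 240) == (100, 80, 150, 200)
```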
Finally, the processor 300 selects the depth data with the smallest value from the multiple depth data of the depth face detection area as the distance between the face and the structured light projector 100. The smallest depth data indicates the distance between the point of the face closest to the structured light projector 100 and the structured light projector 100. Compared with adjusting the luminous power of the structured light projector 100 using the median or the mean of the depth data, adjusting it based on the minimum value further ensures that the energy of the projected laser will not be too high, improving the user's eye safety.
On the one hand, the control method of the embodiments of the present application reduces the amount of data the processor 300 needs to process through resolution compression; on the other hand, according to the ratio between the second resolution and the third resolution, the depth face detection area corresponding to the visible light face detection area can be determined in the depth image, so that the depth data of the face is obtained. In this way, the distance between the face and the structured light projector 100 can be obtained, facilitating the subsequent adjustment of the luminous power of the structured light projector 100.
Referring to FIG. 3 and FIG. 11 together, in some embodiments, adjusting the luminous power of the structured light projector 100 according to the distance in step 04 includes:
041: when the distance is less than a first preset distance and greater than a second preset distance, obtaining the luminous power of the structured light projector 100 according to the distance, the second preset distance being less than the first preset distance;
042: controlling the structured light projector 100 to emit light at that luminous power; and
043: when the distance is greater than the first preset distance or less than the second preset distance, controlling the structured light projector 100 to turn off.
Referring to FIG. 3 and FIG. 12 together, in some embodiments, the adjustment module 14 includes a third acquisition unit 141, a second control unit 142, and a third control unit 143. Step 041 may be implemented by the third acquisition unit 141, step 042 by the second control unit 142, and step 043 by the third control unit 143. That is to say, the third acquisition unit 141 may be used to obtain the luminous power of the structured light projector 100 according to the distance when the distance is less than the first preset distance and greater than the second preset distance, the second preset distance being less than the first preset distance. The second control unit 142 may be used to control the structured light projector 100 to emit light at that luminous power. The third control unit 143 may be used to control the structured light projector 100 to turn off when the distance is greater than the first preset distance or less than the second preset distance.
Referring again to FIG. 3, in some embodiments, steps 041, 042, and 043 may all be implemented by the processor 300. That is to say, the processor 300 may also be used to obtain the luminous power of the structured light projector 100 according to the distance when the distance is less than the first preset distance and greater than the second preset distance, to control the structured light projector 100 to emit light at that luminous power, and to control the structured light projector 100 to turn off when the distance is greater than the first preset distance or less than the second preset distance. The second preset distance is less than the first preset distance.
Specifically, when the distance between the face and the structured light projector 100 is less than the second preset distance (for example, 2 cm, 3.5 cm, 4 cm, 5 cm, etc.), the user is considered to be too close to the structured light projector 100, and the processor 300 can directly turn off the structured light projector 100 to reduce the harm of the laser to human eyes. In addition, when the distance between the face and the structured light projector 100 is less than the second preset distance, the depth camera 400 usually cannot obtain the depth data of the user's complete face either, so directly turning off the structured light projector 100 in this case also reduces the power consumption of the electronic device 800.
When the distance between the face and the structured light projector 100 is greater than the second preset distance and less than the first preset distance (for example, 50 cm, 53 cm, 55.9 cm, 58 cm, 100 cm, etc.), the processor 300 adjusts the driving current of the structured light projector 100 according to the distance between the face and the structured light projector 100 so that the structured light projector 100 emits light at the target luminous power, thereby satisfying the accuracy requirements for acquiring depth data while ensuring the user's eye safety. Generally, when the distance increases, the luminous power increases accordingly; when the distance decreases, the luminous power decreases accordingly. The correspondence between the distance and the luminous power may be: (1) the distance is a single value and the luminous power is a single value, with distances and luminous powers in one-to-one correspondence; or (2) the distance is a range and the luminous power is a single value, with distance ranges and luminous powers in one-to-one correspondence. The correspondence between distance and luminous power may be obtained beforehand through extensive experimental calibration, and the calibrated correspondence is stored in the memory 600. When the structured light projector 100 is working, the processor 300 looks up, based on the calculated distance, the luminous power corresponding to the current distance in the correspondence stored in the memory 600, and controls the structured light projector 100 to emit light at that luminous power.
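For illustration only, a minimal sketch of correspondence form (2) above, where each calibrated distance range maps to one luminous power; every cut-off and power value below is a hypothetical placeholder, not a calibrated value from the disclosure:

```python
# (min_distance_cm, max_distance_cm, luminous_power) - hypothetical calibration.
POWER_TABLE = [
    (5.0, 20.0, 1.0),
    (20.0, 35.0, 2.0),
    (35.0, 50.0, 3.5),
]
SECOND_PRESET_CM = 5.0   # closer than this: turn the projector off (step 043)
FIRST_PRESET_CM = 50.0   # farther than this: turn the projector off (step 043)

def target_power(distance_cm: float):
    """Return the luminous power for a distance, or None to turn off."""
    if distance_cm <= SECOND_PRESET_CM or distance_cm >= FIRST_PRESET_CM:
        return None
    for low, high, power in POWER_TABLE:
        if low < distance_cm <= high:
            return power
    return None
```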
When the depth camera 400 is a front camera, if the distance between the face and the structured light projector 100 is greater than the first preset distance, it is considered that the user is not using the structured light projector 100 at that moment, and the processor 300 can directly turn off the structured light projector 100 to reduce the power consumption of the electronic device 800.
Referring to FIG. 3 and FIG. 13 together, in some embodiments, the light source 101 of the structured light projector 100 includes multiple light-emitting regions 102, and each light-emitting region 102 can be controlled independently. When the distance between the face and the structured light projector 100 is greater than the second preset distance and less than the first preset distance, in addition to adjusting the driving current of the structured light projector 100 so that it emits light at the target luminous power, the processor 300 can also control the light source 101 to turn on different numbers of light-emitting regions 102 so that the structured light projector 100 emits light at the target luminous power. For example, when the distance is 10 cm, two light-emitting regions 102 of the light source 101 are turned on; when the distance is 20 cm, four light-emitting regions 102 of the light source 101 are turned on, and so on. The shape of a light-emitting region 102 may be a sector (as shown in FIG. 13), a ring, a rectangular loop, and so on, which is not limited herein.
When not all of the light-emitting regions 102 are turned on, the regions 102 that are turned on are distributed centrosymmetrically around the center point of the light source 101 (as shown in FIG. 13), which improves the uniformity of the brightness of the laser emitted by the structured light projector 100 and thus the accuracy of the acquired depth data.
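For illustration only, a minimal sketch of choosing which regions to turn on so that the enabled regions remain centrosymmetric about the light source's center; the assumption of eight sector regions indexed 0-7, where sector i faces sector i+4, is a hypothetical layout, not the layout of FIG. 13:

```python
N_REGIONS = 8  # hypothetical number of independently controllable sectors

def regions_to_enable(n_on: int):
    """Pick n_on sectors so that every enabled sector is paired with the
    sector diametrically opposite it (i pairs with i + N_REGIONS//2)."""
    assert 0 <= n_on <= N_REGIONS and n_on % 2 == 0
    enabled = []
    for i in range(n_on // 2):
        enabled += [i, i + N_REGIONS // 2]
    return sorted(enabled)

# Example: four regions on -> two diametrically opposite pairs.
assert regions_to_enable(4) == [0, 1, 4, 5]
```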
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Moreover, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, those skilled in the art may combine the different embodiments or examples described in this specification, and the features of the different embodiments or examples, provided they do not contradict each other.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the technical features indicated. Thus, a feature defined with "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "multiple" means at least two, such as two, three, and so on, unless expressly and specifically limited otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved; this should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in a flowchart or otherwise described herein, for example, may be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CDROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example, by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it as necessary, and then stored in a computer memory.
It should be understood that the parts of the present application may be implemented by hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented by hardware, as in another embodiment, any one of the following technologies known in the art, or a combination thereof, may be used: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
Those of ordinary skill in the art can understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing relevant hardware through a program; the program may be stored in a computer-readable storage medium, and when executed, the program performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and should not be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.

Claims (20)

  1. A control method of a structured light projector, wherein the control method comprises:
    acquiring a depth image and an initial visible light image of a scene;
    determining whether a human face exists in the initial visible light image;
    when the human face exists in the initial visible light image, calculating a distance between the human face and the structured light projector according to the initial visible light image and the depth image; and
    adjusting a luminous power of the structured light projector according to the distance.
  2. The control method according to claim 1, wherein the step of acquiring the depth image and the initial visible light image of the scene comprises:
    controlling the structured light projector to project a laser pattern onto the scene at an initial luminous power;
    acquiring the laser pattern modulated by the scene; and
    processing the laser pattern to obtain the depth image.
  3. The control method according to claim 1, wherein the initial visible light image has a first resolution, the depth image has a third resolution, and the step of calculating the distance between the human face and the structured light projector according to the initial visible light image and the depth image comprises:
    converting the initial visible light image into an intermediate visible light image with a second resolution, the first resolution being greater than the second resolution;
    identifying a visible light face detection area in the intermediate visible light image;
    obtaining, in the depth image, a depth face detection area corresponding to the visible light face detection area according to a mapping relationship between the intermediate visible light image and the depth image; and
    selecting the depth data with the smallest value in the depth face detection area as the distance between the human face and the structured light projector.
  4. The control method according to claim 3, wherein the step of obtaining, in the depth image, the depth face detection area corresponding to the visible light face detection area according to the mapping relationship between the intermediate visible light image and the depth image comprises:
    calculating a ratio of the third resolution to the second resolution to obtain a mapping ratio;
    determining a second origin pixel in the depth image according to a coordinate value of a first origin pixel in the visible light face detection area and the mapping ratio;
    obtaining a second width and a second height of the depth face detection area according to a first width and a first height of the visible light face detection area and the mapping ratio; and
    obtaining the depth face detection area according to the second origin pixel, the second width, and the second height.
  5. The control method according to claim 1, wherein the step of adjusting the luminous power of the structured light projector according to the distance comprises:
    when the distance is less than a first preset distance and greater than a second preset distance, obtaining the luminous power of the structured light projector according to the distance, the second preset distance being less than the first preset distance;
    controlling the structured light projector to emit light at the luminous power; and
    when the distance is greater than the first preset distance or less than the second preset distance, controlling the structured light projector to turn off.
  6. A control device of a structured light projector, wherein the control device comprises:
    an acquisition module, configured to acquire a depth image and an initial visible light image of a scene;
    a judgment module, configured to determine whether a human face exists in the initial visible light image;
    a calculation module, configured to, when the human face exists in the initial visible light image, calculate a distance between the human face and the structured light projector according to the initial visible light image and the depth image; and
    an adjustment module, configured to adjust a luminous power of the structured light projector according to the distance.
  7. The control device according to claim 6, wherein the acquisition module comprises:
    a first control unit, configured to control the structured light projector to project a laser pattern onto the scene at an initial luminous power;
    a first acquisition unit, configured to acquire the laser pattern modulated by the scene; and
    a processing unit, configured to process the laser pattern to obtain the depth image.
  8. The control device according to claim 6, wherein the initial visible light image has a first resolution, the depth image has a third resolution, and the calculation module comprises:
    a conversion unit, configured to convert the initial visible light image into an intermediate visible light image with a second resolution, the first resolution being greater than the second resolution;
    an identification unit, configured to identify a visible light face detection area in the intermediate visible light image;
    a second acquisition unit, configured to obtain, in the depth image, a depth face detection area corresponding to the visible light face detection area according to a mapping relationship between the intermediate visible light image and the depth image; and
    a selection unit, configured to select the depth data with the smallest value in the depth face detection area as the distance between the human face and the structured light projector.
  9. The control device according to claim 8, wherein the second acquisition unit comprises:
    a calculation subunit, configured to calculate a ratio of the third resolution to the second resolution to obtain a mapping ratio;
    a determination subunit, configured to determine a second origin pixel in the depth image according to a coordinate value of a first origin pixel in the visible light face detection area and the mapping ratio;
    a first acquisition subunit, configured to obtain a second width and a second height of the depth face detection area according to a first width and a first height of the visible light face detection area and the mapping ratio; and
    a second acquisition subunit, configured to obtain the depth face detection area according to the second origin pixel, the second width, and the second height.
  10. The control device according to claim 6, wherein the adjustment module comprises:
    a third acquisition unit, configured to, when the distance is less than a first preset distance and greater than a second preset distance, obtain the luminous power of the structured light projector according to the distance, the second preset distance being less than the first preset distance;
    a second control unit, configured to control the structured light projector to emit light at the luminous power; and
    a third control unit, configured to control the structured light projector to turn off when the distance is greater than the first preset distance or less than the second preset distance.
  11. A depth camera, wherein the depth camera comprises a structured light projector and a processor, the processor being configured to:
    acquire a depth image and an initial visible light image of a scene;
    determine whether a human face exists in the initial visible light image;
    when the human face exists in the initial visible light image, calculate a distance between the human face and the structured light projector according to the initial visible light image and the depth image; and
    adjust a luminous power of the structured light projector according to the distance.
  12. The depth camera according to claim 11, wherein the processor is further configured to:
    control the structured light projector to project a laser pattern onto the scene at an initial luminous power;
    acquire the laser pattern modulated by the scene; and
    process the laser pattern to obtain the depth image.
  13. The depth camera according to claim 11, wherein the initial visible light image has a first resolution, the depth image has a third resolution, and the processor is further configured to:
    convert the initial visible light image into an intermediate visible light image with a second resolution, the first resolution being greater than the second resolution;
    identify a visible light face detection area in the intermediate visible light image;
    obtain, in the depth image, a depth face detection area corresponding to the visible light face detection area according to a mapping relationship between the intermediate visible light image and the depth image; and
    select the depth data with the smallest value in the depth face detection area as the distance between the human face and the structured light projector.
  14. The depth camera according to claim 13, wherein the processor is further configured to:
    calculate a ratio of the third resolution to the second resolution to obtain a mapping ratio;
    determine a second origin pixel in the depth image according to a coordinate value of a first origin pixel in the visible light face detection area and the mapping ratio;
    obtain a second width and a second height of the depth face detection area according to a first width and a first height of the visible light face detection area and the mapping ratio; and
    obtain the depth face detection area according to the second origin pixel, the second width, and the second height.
  15. The depth camera according to claim 11, wherein the processor is further configured to:
    when the distance is less than a first preset distance and greater than a second preset distance, obtain the luminous power of the structured light projector according to the distance, the second preset distance being less than the first preset distance;
    control the structured light projector to emit light at the luminous power; and
    when the distance is greater than the first preset distance or less than the second preset distance, control the structured light projector to turn off.
  16. An electronic device, wherein the electronic device comprises:
    a housing; and
    a depth camera, the depth camera being provided on the housing, the depth camera comprising a structured light projector and a processor, the processor being configured to:
    acquire a depth image and an initial visible light image of a scene;
    determine whether a human face exists in the initial visible light image;
    when the human face exists in the initial visible light image, calculate a distance between the human face and the structured light projector according to the initial visible light image and the depth image; and
    adjust a luminous power of the structured light projector according to the distance.
  17. The electronic device according to claim 16, wherein the processor is further configured to:
    control the structured light projector to project a laser pattern onto the scene at an initial luminous power;
    acquire the laser pattern modulated by the scene; and
    process the laser pattern to obtain the depth image.
  18. The electronic device according to claim 16, wherein the initial visible light image has a first resolution, the depth image has a third resolution, and the processor is further configured to:
    convert the initial visible light image into an intermediate visible light image with a second resolution, the first resolution being greater than the second resolution;
    identify a visible light face detection area in the intermediate visible light image;
    obtain, in the depth image, a depth face detection area corresponding to the visible light face detection area according to a mapping relationship between the intermediate visible light image and the depth image; and
    select the depth data with the smallest value in the depth face detection area as the distance between the human face and the structured light projector.
  19. The electronic device according to claim 18, wherein the processor is further configured to:
    calculate a ratio of the third resolution to the second resolution to obtain a mapping ratio;
    determine a second origin pixel in the depth image according to a coordinate value of a first origin pixel in the visible light face detection area and the mapping ratio;
    obtain a second width and a second height of the depth face detection area according to a first width and a first height of the visible light face detection area and the mapping ratio; and
    obtain the depth face detection area according to the second origin pixel, the second width, and the second height.
  20. The electronic device according to claim 16, wherein the processor is further configured to:
    when the distance is less than a first preset distance and greater than a second preset distance, obtain the luminous power of the structured light projector according to the distance, the second preset distance being less than the first preset distance;
    control the structured light projector to emit light at the luminous power; and
    when the distance is greater than the first preset distance or less than the second preset distance, control the structured light projector to turn off.
PCT/CN2019/075380 2018-10-09 2019-02-18 Control method, control device, depth camera and electronic device WO2020073572A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811180890.3 2018-10-09
CN201811180890.3A CN109194869A (zh) 2018-10-09 2018-10-09 Control method, control device, depth camera and electronic device

Publications (1)

Publication Number Publication Date
WO2020073572A1 true WO2020073572A1 (zh) 2020-04-16

Family

ID=64947978

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/075380 WO2020073572A1 (zh) 2018-10-09 2019-02-18 控制方法、控制装置、深度相机和电子装置

Country Status (5)

Country Link
US (1) US10880539B2 (zh)
EP (1) EP3637367B1 (zh)
CN (1) CN109194869A (zh)
TW (1) TWI699707B (zh)
WO (1) WO2020073572A1 (zh)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109194869A (zh) * 2018-10-09 2019-01-11 Oppo广东移动通信有限公司 控制方法、控制装置、深度相机和电子装置
KR102431989B1 (ko) * 2018-10-10 2022-08-12 엘지전자 주식회사 3차원 영상 생성 장치 및 방법
WO2020237657A1 (zh) * 2019-05-31 2020-12-03 Oppo广东移动通信有限公司 电子设备的控制方法、电子设备和计算机可读存储介质
EP3975034A4 (en) * 2019-05-31 2022-06-15 Guangdong Oppo Mobile Telecommunications Corp., Ltd. CONTROL METHOD FOR ELECTRONIC DEVICE, ELECTRONIC DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
CN110462438A (zh) * 2019-06-24 2019-11-15 深圳市汇顶科技股份有限公司 结构光投射装置、结构光投射方法及三维测量系统
JP1669255S (zh) * 2019-09-20 2020-09-28
CN110738142B (zh) * 2019-09-26 2022-12-20 广州广电卓识智能科技有限公司 一种自适应改善人脸图像采集的方法、系统及存储介质
IT201900019634A1 (it) 2019-10-23 2021-04-23 Osram Gmbh Apparecchiatura di illuminazione, impianto, procedimento e prodotto informatico corrispondenti
CN113223209A (zh) * 2020-01-20 2021-08-06 深圳绿米联创科技有限公司 门锁控制方法、装置、电子设备及存储介质
CN111487632A (zh) * 2020-04-06 2020-08-04 深圳蚂里奥技术有限公司 一种激光安全控制装置及控制方法
CN111487633A (zh) * 2020-04-06 2020-08-04 深圳蚂里奥技术有限公司 一种激光安全控制装置及方法
CN111427049A (zh) * 2020-04-06 2020-07-17 深圳蚂里奥技术有限公司 一种激光安全装置及控制方法
CN112073708B (zh) * 2020-09-17 2022-08-09 君恒新信息科技(深圳)有限公司 一种tof相机光发射模组的功率控制方法及设备
CN113111762B (zh) * 2021-04-07 2024-04-05 瑞芯微电子股份有限公司 一种人脸识别方法、检测方法、介质及电子设备
CN113435342B (zh) * 2021-06-29 2022-08-12 平安科技(深圳)有限公司 活体检测方法、装置、设备及存储介质

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106331517A (zh) * 2016-09-26 2017-01-11 维沃移动通信有限公司 Soft-light lamp brightness control method and electronic device
CN107944422A (zh) * 2017-12-08 2018-04-20 业成科技(成都)有限公司 Three-dimensional imaging apparatus, three-dimensional imaging method, and face recognition method
CN108281880A (zh) * 2018-02-27 2018-07-13 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method, control device, terminal, computer equipment, and storage medium
CN108376252A (zh) * 2018-02-27 2018-08-07 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method, control device, terminal, computer equipment, and storage medium
US20180227567A1 (en) * 2017-02-09 2018-08-09 Oculus Vr, Llc Polarization illumination using acousto-optic structured light in 3d depth sensing
CN108509867A (zh) * 2018-03-12 2018-09-07 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method, control device, depth camera, and electronic device
CN108564041A (zh) * 2018-04-17 2018-09-21 广州云从信息科技有限公司 Face detection and restoration method based on an RGBD camera
CN108615012A (zh) * 2018-04-27 2018-10-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Distance reminder method, electronic device, and non-volatile computer-readable storage medium
CN109194869A (zh) * 2018-10-09 2019-01-11 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method, control device, depth camera and electronic device

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8290208B2 (en) * 2009-01-12 2012-10-16 Eastman Kodak Company Enhanced safety during laser projection
CN103870824B (zh) * 2014-03-28 2017-10-20 海信集团有限公司 Face capture method and device in the process of face detection and tracking
US10424103B2 (en) * 2014-04-29 2019-09-24 Microsoft Technology Licensing, Llc Display device viewer gaze attraction
US10567641B1 (en) * 2015-01-19 2020-02-18 Devon Rueckner Gaze-directed photography
CN104616438B (zh) * 2015-03-02 2016-09-07 重庆市科学技术研究院 Yawning action detection method for detecting fatigued driving
WO2017185170A1 (en) * 2016-04-28 2017-11-02 Intellijoint Surgical Inc. Systems, methods and devices to scan 3d surfaces for intra-operative localization
US20180217234A1 (en) * 2017-01-27 2018-08-02 4Sense, Inc. Diffractive Optical Element for a Time-of-Flight Sensor and Method of Operation of Same
TWI647661B (zh) * 2017-08-10 2019-01-11 緯創資通股份有限公司 影像深度感測方法與影像深度感測裝置
US10466360B1 (en) * 2017-08-31 2019-11-05 Facebook Technologies, Llc Depth measurement using scanning diffractive optical elements
CN108376251B (zh) * 2018-02-27 2020-04-17 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method, control device, terminal, computer equipment, and storage medium
CN108508620B (zh) * 2018-03-12 2020-03-06 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Detection method and detection device for a laser projection module, and electronic device
CN108523819B (zh) 2018-03-20 2024-03-22 广东欧谱曼迪科技有限公司 Fluorescence navigation endoscope system with photometric feedback and automatic laser power adjustment method
CN108508795B (zh) * 2018-03-27 2021-02-02 百度在线网络技术(北京)有限公司 Control method and apparatus for a projector
CN108961195B (zh) * 2018-06-06 2021-03-23 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method and apparatus, image acquisition apparatus, readable storage medium, and computer device
WO2019233199A1 (zh) * 2018-06-06 2019-12-12 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Verification method, verification apparatus, and computer-readable storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106331517A (zh) * 2016-09-26 2017-01-11 维沃移动通信有限公司 Soft-light lamp brightness control method and electronic device
US20180227567A1 (en) * 2017-02-09 2018-08-09 Oculus Vr, Llc Polarization illumination using acousto-optic structured light in 3d depth sensing
CN107944422A (zh) * 2017-12-08 2018-04-20 业成科技(成都)有限公司 Three-dimensional imaging apparatus, three-dimensional imaging method, and face recognition method
CN108281880A (zh) * 2018-02-27 2018-07-13 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method, control device, terminal, computer equipment, and storage medium
CN108376252A (zh) * 2018-02-27 2018-08-07 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method, control device, terminal, computer equipment, and storage medium
CN108509867A (zh) * 2018-03-12 2018-09-07 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method, control device, depth camera, and electronic device
CN108564041A (zh) * 2018-04-17 2018-09-21 广州云从信息科技有限公司 Face detection and restoration method based on an RGBD camera
CN108615012A (zh) * 2018-04-27 2018-10-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Distance reminder method, electronic device, and non-volatile computer-readable storage medium
CN109194869A (zh) * 2018-10-09 2019-01-11 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Control method, control device, depth camera and electronic device

Also Published As

Publication number Publication date
EP3637367A1 (en) 2020-04-15
US10880539B2 (en) 2020-12-29
CN109194869A (zh) 2019-01-11
TWI699707B (zh) 2020-07-21
TW202014929A (zh) 2020-04-16
US20200112713A1 (en) 2020-04-09
EP3637367B1 (en) 2021-11-03

Similar Documents

Publication Publication Date Title
  • WO2020073572A1 (zh) Control method, control device, depth camera and electronic device
JP6847124B2 (ja) ミラーコンポーネント用の適応照明システム、及び適応照明システムを制御する方法
  • WO2023088304A1 (zh) Projection device and projection area correction method
  • TWI714131B (zh) Control method, microprocessor, computer-readable recording medium and computer equipment
US10747995B2 (en) Pupil tracking device
US10217195B1 (en) Generation of semantic depth of field effect
US9836639B2 (en) Systems and methods of light modulation in eye tracking devices
  • WO2011001761A1 (ja) Information processing device, information processing method, program, and electronic device
  • CN110308458B (zh) Adjustment method, adjustment device, terminal, and computer-readable storage medium
  • TW201940953A (zh) Photographing method and apparatus, smart device, and storage medium
US20150015699A1 (en) Apparatus, system and method for projecting images onto predefined portions of objects
US9978000B2 (en) Information processing device, information processing method, light-emitting device regulating apparatus, and drive current regulating method
CN112204961B (zh) 从动态视觉传感器立体对和脉冲散斑图案投射器进行半密集深度估计
  • TWI682315B (zh) Electronic device and fingerprint sensing method
US11113849B2 (en) Method of controlling virtual content, terminal device and computer readable medium
CN111214205B (zh) 控制发光器以获得最佳闪光
  • CN113132613A (zh) Camera fill-light device, electronic device, and fill-light method
  • WO2019163066A1 (ja) Spoofing detection device, spoofing detection method, and computer-readable recording medium
US20150373283A1 (en) Photographing system, photographing method, and computer-readable storage medium for computer program
  • WO2021114965A1 (zh) Projection display control method and apparatus, and projection system
US20210258551A1 (en) Position detection method, position detection device, and display device
US20190045100A1 (en) Image processing device, method, and program
US9684828B2 (en) Electronic device and eye region detection method in electronic device
US20210294456A1 (en) Operation detection method, operation detection device, and display system
US20120081533A1 (en) Real-time embedded vision-based eye position detection

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19871241

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19871241

Country of ref document: EP

Kind code of ref document: A1