WO2019233106A1 - Image acquisition method and apparatus, image capture device, computer equipment and readable storage medium - Google Patents
Image acquisition method and apparatus, image capture device, computer equipment and readable storage medium
- Publication number
- WO2019233106A1 (PCT application PCT/CN2019/070853)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- depth
- current
- image
- visible light
- field
- Prior art date
Classifications
- H04N 23/50: Constructional details of cameras or camera modules comprising electronic image sensors
- H04N 5/2226: Determination of depth image, e.g. for foreground/background separation (studio circuitry for virtual studio applications)
- G06F 18/251: Fusion techniques of input or preprocessed data (pattern recognition)
- G06V 10/10: Image acquisition (image or video recognition or understanding)
- G06V 10/803: Fusion of input or preprocessed data at the sensor, preprocessing, feature extraction or classification level
- G06V 40/166: Detection, localisation, normalisation of human faces using acquisition arrangements
- H04N 13/128: Adjusting depth or disparity (processing of stereoscopic or multi-view image signals)
- H04N 13/207: Image signal generators using stereoscopic image cameras with a single 2D image sensor
- H04N 13/239: Image signal generators using two 2D image sensors having a relative position equal to or related to the interocular distance
- H04N 13/25: Image signal generators using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics
- H04N 13/254: Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
- H04N 23/20: Cameras or camera modules for generating image signals from infrared radiation only
- H04N 23/45: Cameras generating image signals from two or more image sensors of different type or operating in different modes, e.g. a CMOS sensor for moving images combined with a CCD for still images
- H04N 23/56: Cameras or camera modules provided with illuminating means
- H04N 5/33: Transforming infrared radiation (details of television systems)
- H04N 2013/0081: Depth or disparity estimation from stereoscopic image signals
Definitions
- The present application relates to the field of image processing, and more particularly, to an image acquisition method, an image acquisition apparatus, an image capture device, a non-volatile computer-readable storage medium, and a computer device.
- At present, an image capture device for generating a three-dimensional image generally includes a visible light camera and an infrared (Infrared Radiation, IR) camera: the visible light camera is used to obtain the visible light image, the infrared light camera is used to obtain the depth image, and the visible light image and the depth image are combined to obtain a three-dimensional image.
- Embodiments of the present application provide an image acquisition method, an image acquisition apparatus, an image capture device, a non-volatile computer-readable storage medium, and a computer device.
- The image acquisition apparatus includes a first acquisition module, a second acquisition module, a third acquisition module, a fourth acquisition module, a judgment module, and a prompt module.
- The first acquisition module is used to acquire a depth image of the current scene;
- the second acquisition module is used to acquire a visible light image of the current scene;
- the third acquisition module is used to acquire the current depth of the target object in the target scene according to the depth image and the visible light image;
- the fourth acquisition module is used to acquire the current coincidence degree, i.e., the proportion of the entire field of view of the visible light image occupied by the overlap between the field of view of the depth image and the field of view of the visible light image at the current depth;
- the judgment module is used to judge whether the current coincidence degree is within a preset range;
- and the prompt module is used to issue a prompt message for the target object to adjust its depth when the current coincidence degree exceeds the preset range.
- An image acquisition device includes a depth camera module, a visible light camera, and a processor.
- the depth camera module is configured to obtain a depth image of the current scene;
- the visible light camera is configured to obtain a visible light image of the current scene;
- The processor is configured to: obtain the current depth of the target object in the target scene according to the depth image and the visible light image; obtain the current coincidence degree, at the current depth, of the overlap between the field of view of the depth image and the field of view of the visible light image as a proportion of the entire field of view of the visible light image; judge whether the current coincidence degree is within a preset range; and, when the current coincidence degree exceeds the preset range, issue a prompt message for the target object to adjust its depth.
- One or more non-transitory computer-readable storage media containing computer-executable instructions according to the embodiments of the present application, when the computer-executable instructions are executed by one or more processors, cause the processors to perform the following image acquisition steps: obtaining a depth image of the current scene; obtaining a visible light image of the current scene; obtaining the current depth of a target object in the target scene according to the depth image and the visible light image; obtaining the current coincidence degree, at the current depth, of the overlap between the field of view of the depth image and the field of view of the visible light image as a proportion of the entire field of view of the visible light image; determining whether the current coincidence degree is within a preset range; and, when the current coincidence degree exceeds the preset range, issuing a prompt message for the target object to adjust its depth.
- The computer device includes a memory and a processor.
- The memory stores computer-readable instructions.
- When the computer-readable instructions are executed by the processor, they cause the processor to perform the following image acquisition steps: obtaining a depth image of the current scene; obtaining a visible light image of the current scene; obtaining the current depth of a target object in the target scene according to the depth image and the visible light image; obtaining the current coincidence degree, at the current depth, of the overlap between the field of view of the depth image and the field of view of the visible light image as a proportion of the entire field of view of the visible light image; determining whether the current coincidence degree is within a preset range; and, when the current coincidence degree exceeds the preset range, issuing a prompt message for the target object to adjust its depth.
- FIG. 1 is a schematic flowchart of an image acquisition method according to some embodiments of the present application.
- FIG. 2 is a schematic block diagram of an image acquisition device according to some embodiments of the present application.
- FIG. 3 is a schematic structural diagram of an image capture device according to some embodiments of the present application.
- FIG. 4 is a schematic diagram illustrating the principle of an image capture device according to some embodiments of the present application.
- FIG. 5 is a schematic structural diagram of a computer device according to some embodiments of the present application.
- FIG. 6 is a schematic flowchart of an image acquisition method according to some embodiments of the present application.
- FIG. 7 is a schematic block diagram of an image acquisition device according to some embodiments of the present application.
- FIG. 8 is a schematic diagram illustrating the principle of an image acquisition method according to some embodiments of the present application.
- FIG. 9 is a schematic block diagram of a computer-readable storage medium and a processor according to an embodiment of the present application.
- FIG. 10 is a schematic block diagram of a computer device according to an embodiment of the present application.
- an image acquisition method includes:
- 011: Obtain a depth image of the current scene;
- 012: Obtain a visible light image of the current scene;
- 013: Obtain the current depth of the target object in the target scene according to the depth image and the visible light image;
- 014: Obtain the current coincidence degree, i.e., the proportion of the entire field of view of the visible light image occupied by the overlap between the field of view of the depth image and the field of view of the visible light image at the current depth;
- 015: Determine whether the current coincidence degree is within a preset range; and
- 016: When the current coincidence degree exceeds the preset range, issue a prompt message for the target object to adjust its depth.
- the image acquisition method is applied to an image acquisition device 100.
- the image acquisition device 100 includes a visible light camera 30 and an infrared light camera 24.
- Step 014 includes:
- 0141: Obtain the overlapping and non-overlapping regions of the field of view of the depth image and the field of view of the visible light image according to the current depth, the field of view of the visible light camera 30, the field of view of the infrared light camera 24, and a preset distance L (see FIG. 8) between the visible light camera 30 and the infrared light camera 24; and
- 0142: Calculate the ratio of the overlapping region of the field of view of the visible light image to the entire field of view of the visible light image to obtain the current coincidence degree.
- When the preset distance is the same, the current coincidence degree increases as the depth of the target object increases; or/and, when the current depth is the same, the current coincidence degree decreases as the preset distance increases.
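This monotonic behaviour follows from the frustum geometry of the two cameras. Below is a minimal sketch (not from the patent) assuming two pinhole cameras with parallel optical axes, identical vertical field angles, horizontal field angles fov_vis_deg and fov_ir_deg, and a horizontal baseline baseline_l; all names and sample values are illustrative:

```python
import math

def coincidence_degree(h, fov_vis_deg, fov_ir_deg, baseline_l):
    """Overlap of the two horizontal fields of view at depth h, as a
    fraction of the visible-light field of view. With identical vertical
    field angles, the area ratio reduces to a 1-D ratio of widths."""
    half_vis = h * math.tan(math.radians(fov_vis_deg) / 2.0)
    half_ir = h * math.tan(math.radians(fov_ir_deg) / 2.0)
    # Visible camera centred at x = 0, infrared camera at x = baseline_l.
    left = max(-half_vis, baseline_l - half_ir)
    right = min(half_vis, baseline_l + half_ir)
    overlap = max(0.0, right - left)
    return overlap / (2.0 * half_vis)

# The ratio rises with depth h and falls as the baseline grows:
for h in (0.2, 0.5, 1.0):  # depths in metres
    print(h, round(coincidence_degree(h, 70.0, 75.0, 0.05), 3))
```

Running the loop prints a coincidence degree that approaches 1 as the depth grows; re-running it with a larger baseline_l yields smaller values at every depth, matching the relationship stated above.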
- The image acquisition method further includes: when the current coincidence degree is less than the minimum value of the preset range, issuing a prompt message for the target object to increase its depth; and/or, when the current coincidence degree is greater than the maximum value of the preset range, issuing a prompt message for the target object to decrease its depth.
- an image acquisition device 10 includes a first acquisition module 11, a second acquisition module 12, a third acquisition module 13, a fourth acquisition module 14, a determination module 15, and a prompt module 16.
- the first acquisition module 11 is configured to acquire a depth image of a current scene.
- the second acquisition module 12 is configured to acquire a visible light image of the current scene.
- the third acquisition module 13 is configured to acquire the current depth of the target object in the target scene according to the depth image and the visible light image.
- The fourth acquisition module 14 is configured to obtain the current coincidence degree, i.e., the proportion of the entire field of view of the visible light image occupied by the overlap between the field of view of the depth image and the field of view of the visible light image at the current depth.
- the determination module 15 is configured to determine whether the current coincidence degree is within a preset range.
- the prompt module 16 is configured to send a prompt message for the target object to adjust its depth when the current coincidence degree exceeds a preset range.
- the image acquisition device 10 is applied to the image acquisition device 100.
- the image acquisition device 100 includes a visible light camera 30 and an infrared light camera 24.
- The fourth acquisition module 14 includes an acquisition unit 141 and a calculation unit 142.
- The acquisition unit 141 is configured to obtain the overlapping and non-overlapping regions of the field of view of the depth image and the field of view of the visible light image according to the field of view of the visible light camera 30, the field of view of the infrared light camera 24, and a preset distance L between the visible light camera 30 and the infrared light camera 24.
- The calculation unit 142 is configured to calculate the ratio of the overlapping region of the field of view of the visible light image to the entire field of view of the visible light image to obtain the current coincidence degree.
- When the preset distance is the same, the current coincidence degree increases as the depth of the target object increases; or/and, when the current depth is the same, the current coincidence degree decreases as the preset distance increases.
- The prompt module 16 is further configured to issue a prompt message for the target object to increase its depth when the current coincidence degree is less than the minimum value of the preset range; and/or to issue a prompt message for the target object to decrease its depth when the current coincidence degree is greater than the maximum value of the preset range.
- an image acquisition device 100 includes a depth camera module 20, a visible light camera 30, and a processor 40.
- the depth camera module 20 is configured to obtain a depth image of the current scene.
- the visible light camera 30 is configured to obtain a visible light image of a current scene.
- The processor 40 is configured to: obtain the current depth of the target object in the target scene according to the depth image and the visible light image; obtain the current coincidence degree, i.e., the proportion of the entire field of view of the visible light image occupied by the overlap between the field of view of the depth image and the field of view of the visible light image at the current depth; determine whether the current coincidence degree is within a preset range; and, when the current coincidence degree exceeds the preset range, issue a prompt message for the target object to adjust its depth.
- The depth camera module 20 includes an infrared light camera 24, and the processor 40 may be further configured to: obtain the overlapping and non-overlapping regions of the field of view of the depth image and the field of view of the visible light image according to the current depth, the field of view of the visible light camera 30, the field of view of the infrared light camera 24, and a preset distance L between the visible light camera 30 and the infrared light camera 24; and calculate the ratio of the overlapping region of the field of view of the visible light image to the entire field of view of the visible light image to obtain the current coincidence degree.
- When the preset distance is the same, the current coincidence degree increases as the depth of the target object increases; or/and, when the current depth is the same, the current coincidence degree decreases as the preset distance increases.
- The processor 40 is further configured to: issue a prompt message for the target object to increase its depth when the current coincidence degree is less than the minimum value of the preset range; and/or issue a prompt message for the target object to decrease its depth when the current coincidence degree is greater than the maximum value of the preset range.
- The processor 40 is caused to perform the following image acquisition steps:
- 011: Obtain a depth image of the current scene;
- 012: Obtain a visible light image of the current scene;
- 013: Obtain the current depth of the target object in the target scene according to the depth image and the visible light image;
- 014: Obtain the current coincidence degree, i.e., the proportion of the entire field of view of the visible light image occupied by the overlap between the field of view of the depth image and the field of view of the visible light image at the current depth;
- 015: Determine whether the current coincidence degree is within a preset range; and
- 016: When the current coincidence degree exceeds the preset range, issue a prompt message for the target object to adjust its depth.
- the computer-readable storage medium 300 is applied to the image acquisition device 100.
- the image acquisition device 100 includes a visible light camera 30 and an infrared light camera 24.
- When the computer-executable instructions 302 are executed by the one or more processors 40, the processors 40 further perform the following steps:
- 0141: Obtain the overlapping and non-overlapping regions of the field of view of the depth image and the field of view of the visible light image according to the current depth, the field of view of the visible light camera 30, the field of view of the infrared light camera 24, and a preset distance L (see FIG. 8) between the visible light camera 30 and the infrared light camera 24; and
- 0142: Calculate the ratio of the overlapping region of the field of view of the visible light image to the entire field of view of the visible light image to obtain the current coincidence degree.
- When the preset distance is the same, the current coincidence degree increases as the depth of the target object increases; or/and, when the current depth is the same, the current coincidence degree decreases as the preset distance increases.
- When the computer-executable instructions 302 are executed by the one or more processors 40, the processors 40 further execute the following steps: when the current coincidence degree is less than the minimum value of the preset range, issuing a prompt message for the target object to increase its depth; and/or, when the current coincidence degree is greater than the maximum value of the preset range, issuing a prompt message for the target object to decrease its depth.
- a computer device 1000 includes a memory 110 and a processor 40.
- the memory 110 stores computer-readable instructions 111.
- When the computer-readable instructions 111 are executed by the processor 40, the processor 40 performs the following image acquisition steps:
- 011: Obtain a depth image of the current scene;
- 012: Obtain a visible light image of the current scene;
- 013: Obtain the current depth of the target object in the target scene according to the depth image and the visible light image;
- 014: Obtain the current coincidence degree, i.e., the proportion of the entire field of view of the visible light image occupied by the overlap between the field of view of the depth image and the field of view of the visible light image at the current depth;
- 015: Determine whether the current coincidence degree is within a preset range; and
- 016: When the current coincidence degree exceeds the preset range, issue a prompt message for the target object to adjust its depth.
- the computer device 1000 includes a visible light camera 30 and an infrared light camera 24, and when the computer-readable instruction 111 is executed by the processor 40, the processor 40 further performs the following steps:
- 0141: Obtain the overlapping and non-overlapping regions of the field of view of the depth image and the field of view of the visible light image according to the current depth, the field of view of the visible light camera 30, the field of view of the infrared light camera 24, and a preset distance L (see FIG. 8) between the visible light camera 30 and the infrared light camera 24; and
- 0142: Calculate the ratio of the overlapping region of the field of view of the visible light image to the entire field of view of the visible light image to obtain the current coincidence degree.
- When the preset distance is the same, the current coincidence degree increases as the depth of the target object increases; or/and, when the current depth is the same, the current coincidence degree decreases as the preset distance increases.
- When the computer-readable instructions 111 are executed by the processor 40, the processor 40 further executes the following steps: when the current coincidence degree is less than the minimum value of the preset range, issuing a prompt message for the target object to increase its depth; and/or, when the current coincidence degree is greater than the maximum value of the preset range, issuing a prompt message for the target object to decrease its depth.
- the image acquisition method includes:
- 011: Obtain a depth image of the current scene;
- 012: Obtain a visible light image of the current scene;
- 013: Obtain the current depth of the target object in the target scene according to the depth image and the visible light image;
- 014: Obtain the current coincidence degree, i.e., the proportion of the entire field of view of the visible light image occupied by the overlap between the field of view of the depth image and the field of view of the visible light image at the current depth;
- 015: Determine whether the current coincidence degree is within a preset range; and
- 016: When the current coincidence degree exceeds the preset range, issue a prompt message for the target object to adjust its depth.
- the present application further provides an image acquisition device 10.
- the image acquisition device 10 includes a first acquisition module 11, a second acquisition module 12, a third acquisition module 13, a fourth acquisition module 14, a determination module 15, and a prompting module 16.
- the first acquisition module 11 is configured to acquire a depth image of a current scene.
- the second acquisition module 12 is configured to acquire a visible light image of the current scene.
- the third acquisition module 13 is configured to acquire the current depth of the target object in the target scene according to the depth image and the visible light image.
- The fourth acquisition module 14 is configured to obtain the current coincidence degree, i.e., the proportion of the entire field of view of the visible light image occupied by the overlap between the field of view of the depth image and the field of view of the visible light image at the current depth.
- the determination module 15 is configured to determine whether the current coincidence degree is within a preset range.
- the prompt module 16 is configured to send a prompt message for the target object to adjust its depth when the current coincidence degree exceeds a preset range.
- the present application further provides an image acquisition device 100.
- the image acquisition device 100 includes a depth camera module 20, a visible light camera 30, and a processor 40.
- the depth camera module 20 is configured to obtain a depth image of the current scene.
- the visible light camera 30 is configured to obtain a visible light image of a current scene.
- The processor 40 is configured to: obtain the current depth of the target object in the target scene according to the depth image and the visible light image; obtain the current coincidence degree, i.e., the proportion of the entire field of view of the visible light image occupied by the overlap between the field of view of the depth image and the field of view of the visible light image at the current depth; determine whether the current coincidence degree is within a preset range; and, when the current coincidence degree exceeds the preset range, issue a prompt message for the target object to adjust its depth. That is, step 011 can be implemented by the depth camera module 20, step 012 can be implemented by the visible light camera 30, and steps 013 to 016 can be implemented by the processor 40.
- The image capture device 100 may be a front-facing image capture device 100 or a rear-facing image capture device 100.
- the depth camera module 20 is a structured light camera module, and includes a structured light projector 22 and an infrared light camera 24.
- the structured light projector 22 projects an infrared light pattern into a target scene.
- the infrared light camera 24 collects an infrared light pattern modulated by the target object 200 (shown in FIG. 4).
- The processor 40 uses an image matching algorithm to calculate a depth image from the infrared light pattern.
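The patent text does not name the matching algorithm. A common approach in structured-light systems, offered here only as an illustrative sketch, is to match the captured pattern against a stored reference pattern and convert the resulting pixel disparity to depth by triangulation; f_px, baseline_m, and disparity_px are assumed calibration quantities, not values from the patent:

```python
def structured_light_depth(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic triangulation: Z = f * b / d, where d is the pixel offset
    between the captured pattern and the calibration reference pattern.
    Assumes a rectified projector-camera pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid depth")
    return f_px * baseline_m / disparity_px

# e.g. f = 600 px, baseline = 5 cm, disparity = 60 px -> depth of 0.5 m
print(structured_light_depth(600.0, 0.05, 60.0))
```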
- In addition to the depth camera module 20, the image capture device 100 also includes a visible light camera 30, which is used to obtain a visible light image of the target scene; the visible light image includes the color information of the objects in the target scene.
- the depth camera module 20 may be a TOF sensor module.
- the TOF sensor module includes a laser projector 22 and an infrared light camera 24.
- the laser projector 22 projects uniform light onto the target scene.
- the infrared light camera 24 receives the reflected light and records the time point when the light is emitted and the time point when the light is received.
- The processor 40 uses the time difference between the emission and reception time points, together with the speed of light, to calculate the depth pixel values corresponding to the objects in the target scene, and combines multiple depth pixel values to obtain a depth image.
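In other words, the depth at each pixel is half the round-trip distance travelled at the speed of light. A minimal sketch of that per-pixel calculation (the function name and inputs are illustrative, not from the patent):

```python
C_LIGHT = 299_792_458.0  # speed of light in m/s

def tof_depth(t_emit_s: float, t_receive_s: float) -> float:
    """Depth from a time-of-flight round trip: the light travels to the
    object and back, so the one-way distance is c * delta_t / 2."""
    return C_LIGHT * (t_receive_s - t_emit_s) / 2.0

# A round trip of about 3.336 ns corresponds to a depth of roughly 0.5 m.
print(tof_depth(0.0, 3.336e-9))
```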
- When the image capture device 100 includes a TOF sensor module, it likewise includes a visible light camera 30 for acquiring a visible light image of the target scene; the visible light image includes the color information of the objects in the target scene.
- the overlapping area between the field of view of the depth image and the field of view of the visible light image at the current depth is also the area where the field of view of the infrared light camera 24 and the field of view of the visible light camera 30 overlap at the current depth.
- The non-overlapping area includes the non-overlapping part of the field of view of the visible light image and the non-overlapping part of the field of view of the depth image.
- The non-overlapping part of the field of view of the visible light image contains only scenes that can be captured by the visible light camera 30 and not by the infrared light camera 24; the non-overlapping part of the field of view of the depth image contains only scenes that can be captured by the infrared light camera 24 and not by the visible light camera 30.
- The current coincidence degree refers to the ratio of the overlap between the field of view of the depth image and the field of view of the visible light image at the current depth to the entire field of view of the visible light image.
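Written as a formula (notation introduced here for clarity, not quoted from the patent), with $A_{\text{overlap}}(h)$ the overlap area of the two fields of view at the current depth $h$ and $A_{\text{vis}}(h)$ the entire field-of-view area of the visible light image:

$$C_{\text{current}} = \frac{A_{\text{overlap}}(h)}{A_{\text{vis}}(h)}$$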
- The image capture device 100 of the embodiments of the present application can be applied to the computer device 1000 of the embodiments of the present application; that is, the computer device 1000 of the embodiments of the present application can include the image capture device 100 of the embodiments of the present application.
- The image acquisition apparatus 10 (shown in FIG. 2) may also be provided in the computer device 1000.
- The computer device 1000 includes a mobile phone, a tablet computer, a notebook computer, a smart bracelet, a smart watch, a smart helmet, smart glasses, and the like. In the embodiments of the present application, a mobile phone is taken as an example of the computer device 1000 for description; it can be understood that the specific form of the computer device 1000 is not limited to a mobile phone.
- the image acquisition method of the present application can be applied to application scenarios of face recognition such as selfie, face unlock, face encryption, face payment, etc.
- In these scenarios, the target object is the user's face.
- The installation positions of the visible light camera 30 and the infrared (Infrared Radiation, IR) camera 24 are separated by a certain distance, so the field of view of the visible light camera 30 and the field of view of the infrared light camera 24 do not fully coincide.
- In particular, when the user is too close to the depth camera, the face may fall outside the overlap between the field of view of the infrared camera 24 and the field of view of the visible light camera 30.
- When the current coincidence degree is less than the minimum value of the preset range, a prompt message is issued for the target object to increase its depth; and/or, when the current coincidence degree is greater than the maximum value of the preset range, a prompt message is issued for the target object to decrease its depth.
- The prompt module 16 is further configured to issue a prompt message for the target object to increase its depth when the current coincidence degree is less than the minimum value of the preset range; and/or to issue a prompt message for the target object to decrease its depth when the current coincidence degree is greater than the maximum value of the preset range.
- For example, suppose the preset range is [80%, 90%].
- When the depth between the face and the depth camera (of the image acquisition apparatus 10, the image capture device 100, or the computer device 1000) gives a current coincidence degree of 85%, the coincidence degree lies within the preset range.
- The overlap between the field of view of the depth image and the field of view of the visible light image captured by the mobile phone (depth camera) is then appropriate: the mobile phone does not need to issue a prompt message, and the user does not need to adjust the depth.
- If the current coincidence degree is less than 80%, the current face is too close to the mobile phone (depth camera).
- For example, when the depth between the face and the mobile phone (depth camera) is 20 cm, the current coincidence degree is 65%.
- The overlap then covers only part of the face, and at this distance the mobile phone (depth camera) can collect the depth data of only part of the face. Therefore, the mobile phone issues a prompt message to let the user increase the current distance to the phone.
- Conversely, suppose the depth between the face and the phone is 100 cm.
- The current coincidence degree is then 95%, which is greater than the maximum value of the preset range, 90%.
- At such a distance the density of the laser pattern projected by the depth camera onto the face is low, so the depth data is less accurate.
- The depth camera would need to increase the projection power to raise the density of the laser pattern, which makes the mobile phone consume more power. Therefore, the mobile phone issues a prompt message for the user to decrease the current distance between the user and the mobile phone (depth camera).
- The processor 40 is also used to implement the steps of: issuing a prompt message for the target object to increase its depth when the current coincidence degree is less than the minimum value of the preset range; and/or issuing a prompt message for the target object to decrease its depth when the current coincidence degree is greater than the maximum value of the preset range.
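Taken together, the two thresholds make steps 015 and 016 a simple range check. A sketch using the example preset range [80%, 90%] from the preceding paragraphs (the message strings are illustrative):

```python
PRESET_RANGE = (0.80, 0.90)  # example range from the description above

def depth_prompt(current_coincidence: float):
    """Below the minimum the target is too close; above the maximum it is
    too far; inside the range no prompt is needed."""
    low, high = PRESET_RANGE
    if current_coincidence < low:
        return "Too close: please increase the distance to the device."
    if current_coincidence > high:
        return "Too far: please decrease the distance to the device."
    return None  # within the preset range

print(depth_prompt(0.65))  # the 20 cm example: prompt to move away
print(depth_prompt(0.85))  # within range: None
print(depth_prompt(0.95))  # the 100 cm example: prompt to move closer
```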
- The image acquisition method, the image acquisition apparatus 10, the image capture device 100, and the computer device 1000 determine, according to the current depth of the target object, the current coincidence degree, i.e., the proportion of the entire field of view of the visible light image occupied by the overlap between the field of view of the depth image and the field of view of the visible light image; judge whether the current coincidence degree is within a preset range; and, when the current coincidence degree exceeds the preset range, issue a prompt message for the target object to adjust its depth, that is, to increase or decrease the depth of the target object. In this way the distance between the target object and the image capture device 100 is kept appropriate: it is not too close, so that the image capture device 100 can acquire complete depth data, and it is not too far, so that the image capture device 100 can acquire accurate depth data even at low power.
- step 014 includes the following sub-steps:
- 0141: Obtain the overlapping and non-overlapping regions of the field of view of the depth image and the field of view of the visible light image according to the current depth, the field of view of the visible light camera 30, the field of view of the infrared light camera 24, and a preset distance L (see FIG. 8) between the visible light camera 30 and the infrared light camera 24; and
- 0142: Calculate the ratio of the overlapping region of the field of view of the visible light image to the entire field of view of the visible light image to obtain the current coincidence degree.
- The fourth acquisition module 14 includes an acquisition unit 141 and a calculation unit 142.
- The acquisition unit 141 is configured to obtain the overlapping and non-overlapping regions of the field of view of the depth image and the field of view of the visible light image according to the current depth, the field of view of the visible light camera 30, the field of view of the infrared light camera 24, and the preset distance L between the visible light camera 30 and the infrared light camera 24.
- The calculation unit 142 is configured to calculate the ratio of the overlapping region of the field of view of the visible light image to the entire field of view of the visible light image to obtain the current coincidence degree.
- The processor 40 may be further configured to: obtain the overlapping and non-overlapping regions of the field of view of the depth image and the field of view of the visible light image according to the current depth, the field of view of the visible light camera 30, the field of view of the infrared light camera 24, and the preset distance L between the visible light camera 30 and the infrared light camera 24; and calculate the ratio of the overlapping region of the field of view of the visible light image to the entire field of view of the visible light image to obtain the current coincidence degree. That is, steps 0141 and 0142 can be implemented by the processor 40.
- The field angle includes a horizontal field angle α and a vertical field angle β, and together the horizontal field angle α and the vertical field angle β determine the field of view range.
- In the following, the infrared light camera 24 and the visible light camera 30 have the same vertical field angle β and different horizontal field angles α.
- The principle is similar when the horizontal field angles α of the infrared camera 24 and the visible camera 30 are the same and the vertical field angles β differ, or when both the horizontal and vertical field angles of the two cameras differ; those cases are not repeated here.
- The current depth of the target object, the field of view of the visible light camera 30, the field of view of the infrared light camera 24, and the preset distance L between the visible light camera 30 and the infrared light camera 24 together determine the sizes of the overlapping and non-overlapping areas of the field of view of the depth image and the field of view of the visible light image.
- In other words, there is a fixed correspondence between these four quantities and the sizes of the overlapping and non-overlapping areas.
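One way to make this correspondence explicit (a formalisation consistent with the description, not a formula quoted from the patent) is to write the horizontal coverage of each camera at depth $h$, for half-angles $\alpha_{\mathrm{vis}}/2$ and $\alpha_{\mathrm{ir}}/2$ and baseline $L$, and the resulting coincidence degree $C(h)$:

$$W_{\mathrm{vis}}(h) = 2h\tan\frac{\alpha_{\mathrm{vis}}}{2}, \qquad W_{\mathrm{ir}}(h) = 2h\tan\frac{\alpha_{\mathrm{ir}}}{2}$$

$$C(h) = \frac{\max\!\left(0,\ \min\!\left(\tfrac{W_{\mathrm{vis}}}{2},\ L + \tfrac{W_{\mathrm{ir}}}{2}\right) - \max\!\left(-\tfrac{W_{\mathrm{vis}}}{2},\ L - \tfrac{W_{\mathrm{ir}}}{2}\right)\right)}{W_{\mathrm{vis}}}$$

Since the field angles and $L$ are fixed at the factory, $C(h)$ depends only on the current depth $h$; it increases with $h$ and, for a fixed $h$, decreases as $L$ grows, which matches the relationships described in the surrounding paragraphs.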
- For example, when the current depth is fixed and the field of view of the visible light camera 30 and the field of view of the infrared light camera 24 do not change, then as the preset distance L between the visible light camera 30 and the infrared light camera 24 becomes larger, the overlapping area of the field of view of the depth image and the field of view of the visible light image decreases while the non-overlapping area increases.
- For another example, when the current depth is fixed, the field of view of the visible light camera 30 is unchanged, and the preset distance L between the visible light camera 30 and the infrared light camera 24 is constant, then as the field of view of the infrared camera 24 increases, the overlapping area of the field of view of the depth image and the field of view of the visible light image increases while the non-overlapping area decreases.
- Likewise, when the current depth is fixed, the field of view of the infrared camera 24 is unchanged, and the preset distance L between the visible light camera 30 and the infrared camera 24 is unchanged, then as the field of view of the visible light camera 30 increases, the overlapping area of the field of view of the depth image and the field of view of the visible light image increases while the non-overlapping area decreases.
- Since the field of view of the visible light camera 30, the field of view of the infrared light camera 24, and the preset distance L between them are parameters preset at the factory, once the current depth is known, the overlapping area of the field of view of the depth image and the field of view of the visible light image, the non-overlapping area of the field of view of the visible light image, and the non-overlapping area of the field of view of the depth image can be determined with a simple algorithm, thereby quickly determining the size of the overlapping area and the size of the non-overlapping area in the field of view of the visible light image. The current coincidence degree is then calculated as the ratio described above.
- When the preset distance is the same, the current coincidence degree increases as the depth of the target object 200 increases; and/or, when the depth of the target object is the same, the current coincidence degree decreases as the preset distance between the visible light camera 30 and the infrared light camera 24 increases.
- the current coincidence degree at the depth of h1 is smaller than the current coincidence degree at the depth of h2.
- the preset range can be customized.
- For example, when a user acquires a three-dimensional image through the image capture device 100 (for example, when shooting a building), in order to obtain a three-dimensional image with a larger field of view the user can manually set the preset range to 100%.
- At a suitable depth, the field of view of the infrared camera 24 then completely covers the field of view of the visible camera 30.
- Depth information can be obtained in all areas of the visible light image, so that the synthesized three-dimensional image contains the entire visible light image.
- In the case of face recognition, since the face accounts for only a small part of the shooting scene, it is not necessary to set the preset range to 100%; setting it to [80%, 90%] is enough to capture the entire face into the synthesized three-dimensional image. In this way, the user can customize the preset range to meet different shooting needs.
- an embodiment of the present application further provides a computer-readable storage medium 300.
- the computer-readable storage medium 300 can be applied to the image acquisition device 100.
- The one or more non-volatile computer-readable storage media 300 contain computer-executable instructions 302. When the computer-executable instructions 302 are executed by one or more processors 40, the processors 40 execute the image acquisition method of any of the foregoing embodiments, for example: 011: acquire a depth image of the current scene; 012: acquire a visible light image of the current scene; 013: acquire the current depth of the target object in the target scene according to the depth image and the visible light image; 014: acquire the current coincidence degree, i.e., the proportion of the entire field of view of the visible light image occupied by the overlap between the field of view of the depth image and the field of view of the visible light image at the current depth; 015: judge whether the current coincidence degree is within a preset range; and 016: when the current coincidence degree exceeds the preset range, issue a prompt message for the target object to adjust its depth.
- According to the current depth of the target object, the non-volatile computer-readable storage medium 300 determines the current coincidence degree of the overlap between the field of view of the depth image and the field of view of the visible light image relative to the entire field of view of the visible light image, judges whether the current coincidence degree is within the preset range, and, when the current coincidence degree exceeds the preset range, issues a prompt message for the target object to adjust (increase or decrease) its depth. In this way the distance between the target object and the image capture device 100 is kept appropriate: not too close, so that the image capture device 100 can obtain complete depth data, and not too far, so that the image capture device 100 can acquire accurate depth data even at low power.
- an embodiment of the present application provides a computer device 1000.
- the computer device 1000 includes a structured light projector 22, an infrared light camera 24, a visible light camera 30, a processor 40, an infrared fill light 70, a display screen 80, a speaker 90, and a memory 110.
- the processor 40 includes a microprocessor 42 and an application processor 44.
- the visible light image of the target object can be collected by the visible light camera 30, and the visible light camera 30 can be connected to the application processor 44 through the integrated circuit bus 60 and the mobile industry processor interface 32.
- the application processor 44 may be used to enable the visible light camera 30, turn off the visible light camera 30, or reset the visible light camera 30.
- the visible light camera 30 can be used to collect color images.
- the application processor 44 obtains a color image from the visible light camera 30 through the mobile industry processor interface 32, and stores the color image in the untrusted execution environment 444.
- the infrared image of the target object can be collected by the infrared light camera 24.
- the infrared light camera 24 can be connected to the application processor 44.
- The application processor 44 can be used to power the infrared light camera 24 on and off (pwdn) and to reset the infrared light camera 24. At the same time, the infrared light camera 24 can also be connected to the microprocessor 42; the microprocessor 42 and the infrared light camera 24 can be connected through an Inter-Integrated Circuit (I2C) bus 60.
- the microprocessor 42 can provide the infrared light camera 24 with clock information for collecting infrared images, and the infrared images collected by the infrared light camera 24 can be transmitted to the microprocessor 42 through the Mobile Industry Processor Interface (MIPI) 422.
- the infrared fill light 70 can be used to emit infrared light outward. The infrared light is reflected by the user and received by the infrared light camera 24.
- The infrared fill light 70 and the application processor 44 can be connected through the integrated circuit bus 60, and the application processor 44 can be used to enable the infrared fill light 70.
- The infrared fill light 70 can also be connected to the microprocessor 42; specifically, the infrared fill light 70 can be connected to the pulse-width modulation (PWM) interface 424 of the microprocessor 42.
- the structured light projector 22 can project a laser light onto a target object.
- The structured light projector 22 may be connected to the application processor 44 through the integrated circuit bus 60, and the application processor 44 may be used to enable the structured light projector 22.
- The structured light projector 22 may also be connected to the microprocessor 42; specifically, the structured light projector 22 may be connected to the pulse-width modulation interface 424 of the microprocessor 42.
- the microprocessor 42 may be a processing chip.
- The microprocessor 42 is connected to the application processor 44; the application processor 44 may be used to reset the microprocessor 42, wake the microprocessor 42, debug the microprocessor 42, and the like.
- The microprocessor 42 may be connected to the application processor 44 through the mobile industry processor interface 422; specifically, the microprocessor 42 is connected through the mobile industry processor interface 422 to the trusted execution environment 442 of the application processor 44, so that data in the microprocessor 42 can be transferred directly to the trusted execution environment 442 for storage.
- The code and memory area in the trusted execution environment 442 are controlled by an access control unit and cannot be accessed by programs in the untrusted execution environment (REE) 444.
- Both the trusted execution environment 442 and the untrusted execution environment 444 may be formed in the application processor 44.
- The microprocessor 42 can obtain an infrared image by receiving the infrared image collected by the infrared light camera 24, and the microprocessor 42 can transmit the infrared image to the trusted execution environment 442 through the mobile industry processor interface 422.
- The infrared image output by the microprocessor 42 therefore does not enter the untrusted execution environment 444 of the application processor 44, so the infrared image cannot be acquired by other programs, which improves the information security of the computer device 1000.
- the infrared image stored in the trusted execution environment 442 can be used as an infrared template.
- After the microprocessor 42 controls the structured light projector 22 to project a laser pattern onto the target object, it also controls the infrared light camera 24 to collect the laser pattern modulated by the target object.
- The microprocessor 42 then obtains the laser pattern through the mobile industry processor interface 422.
- The microprocessor 42 processes the laser pattern to obtain a depth image.
- Specifically, the microprocessor 42 may store calibration information of the laser projected by the structured light projector 22; by processing the laser pattern together with the calibration information, the microprocessor 42 obtains the depth of the target object at different locations and forms a depth image. After the depth image is obtained, it is transmitted to the trusted execution environment 442 through the mobile industry processor interface 422.
- the depth image stored in the trusted execution environment 442 can be used as a depth template.
- The obtained infrared template and depth template are stored in the trusted execution environment 442.
- Verification templates kept in the trusted execution environment 442 are not easy to tamper with or misappropriate, so the information in the computer device 1000 is more secure.
- In one example, the microprocessor 42 and the application processor 44 may be two independent chips; in another example, the microprocessor 42 and the application processor 44 may be integrated into a single chip to form the processor 40.
- the display screen 80 may be a liquid crystal display (Liquid Crystal Display, LCD), or an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display.
- When the current coincidence degree exceeds the preset range, the display screen 80 may be used to present graphic prompt information.
- The graphic prompt information is stored in the computer device 1000.
- In one example the prompt information is text only, for example: "The user is currently too close to the computer device 1000, please increase the distance between the computer device 1000 and the user." or "The user is currently too far from the computer device 1000, please reduce the distance between the computer device 1000 and the user."
- In another example, the display 80 displays a box or circle corresponding to the preset range. For example, if the preset range is [80%, 90%], the box or circle occupies 85% of the entire display 80, and the display also shows the text "Please change the distance between the computer device 1000 and the user until the face remains within the box or circle."
- the speaker 90 may be provided on the computer device 1000, or may be a peripheral device connected to the computer device 1000, such as a speaker. When the current coincidence degree exceeds a preset range, the speaker 90 may be used to issue voice prompt information.
- the voice prompt information is stored in the computer device 1000.
- The voice prompt information may be, for example: "The user is currently too close to the computer device 1000, please increase the distance between the computer device 1000 and the user." or "The user is currently too far from the computer device 1000, please reduce the distance between the computer device 1000 and the user." In this embodiment, the prompt information may be graphic only, voice only, or a combination of text and voice.
- the processor 40 in FIG. 10 may be used to implement the image acquisition method of any of the foregoing embodiments.
- For example, the processor 40 may be configured to perform the following steps: 011: acquire a depth image of the current scene; 012: acquire a visible light image of the current scene; 013: obtain the current depth of the target object in the target scene according to the depth image and the visible light image; 014: obtain the current coincidence degree, i.e., the proportion of the entire field of view of the visible light image occupied by the overlap between the field of view of the depth image and the field of view of the visible light image at the current depth; 015: determine whether the current coincidence degree is within the preset range; and 016: when the current coincidence degree exceeds the preset range, issue a prompt message for the target object to adjust its depth.
- The processor 40 may further be configured to perform: 0141: obtain the overlapping and non-overlapping regions of the field of view of the depth image and the field of view of the visible light image according to the current depth, the field of view of the visible light camera 30, the field of view of the infrared light camera 24, and the preset distance between the visible light camera 30 and the infrared light camera 24; and 0142: calculate the ratio of the overlapping region of the field of view of the visible light image to the entire field of view of the visible light image to obtain the current coincidence degree.
- the memory 110 is connected to both the microprocessor 42 and the application processor 44.
- The memory 110 stores computer-readable instructions 111.
- When the computer-readable instructions 111 are executed by the processor 40, the processor 40 performs the image acquisition method of any of the foregoing embodiments.
- For example, the microprocessor 42 may be used to execute the method in step 011, and the application processor 44 may be used to execute the methods in steps 012, 013, 014, 015, 016, 0141, and 0142.
- Alternatively, the microprocessor 42 may be used to execute the methods in steps 011, 012, 013, 014, 015, 016, 0141, and 0142.
- Or the microprocessor 42 may be used to execute the method of at least one of steps 011, 012, 013, 014, 015, 016, 0141, and 0142, while the application processor 44 executes the remaining steps.
Abstract
An image acquisition method, an image capture device (100), a non-volatile computer-readable storage medium (200), and a computer device (1000). The image acquisition method includes: (011) obtaining a depth image of the current scene; (012) obtaining a visible light image of the current scene; (013) obtaining the current depth of a target object in the target scene according to the depth image and the visible light image; (014) obtaining the current coincidence degree, at the current depth, of the overlap between the field of view of the depth image and the field of view of the visible light image as a proportion of the entire field of view of the visible light image; (015) determining whether the current coincidence degree is within a preset range; and (016) when the current coincidence degree exceeds the preset range, issuing a prompt message for the target object to adjust its depth.
Description
Priority Information
This application claims priority to and the benefit of Chinese patent application No. 201810574253.8, filed with the China National Intellectual Property Administration on June 6, 2018, the entire contents of which are incorporated herein by reference.
The present application relates to the field of image processing, and more particularly to an image acquisition method, an image acquisition apparatus, an image capture device, a non-volatile computer-readable storage medium, and a computer device.
At present, an image capture device for generating three-dimensional images generally includes a visible light camera and an infrared (Infrared Radiation, IR) camera: the visible light camera is used to obtain a visible light image, the infrared camera is used to obtain a depth image, and the visible light image and the depth image are then combined to obtain a three-dimensional image.
Summary
Embodiments of the present application provide an image acquisition method, an image acquisition apparatus, an image capturing device, a non-volatile computer-readable storage medium, and a computer device.
The image acquisition method of the embodiments of the present application includes:
acquiring a depth image of a current scene;
acquiring a visible light image of the current scene;
acquiring a current depth of a target object in the scene according to the depth image and the visible light image;
acquiring a current coincidence degree, at the current depth, of an overlapping region between a field of view region of the depth image and a field of view region of the visible light image relative to the entire field of view region of the visible light image;
determining whether the current coincidence degree is within a preset range; and
when the current coincidence degree exceeds the preset range, issuing a prompt message for the target object to adjust its depth.
The image acquisition apparatus of the embodiments of the present application includes a first acquiring module, a second acquiring module, a third acquiring module, a fourth acquiring module, a determining module, and a prompting module. The first acquiring module is configured to acquire a depth image of a current scene; the second acquiring module is configured to acquire a visible light image of the current scene; the third acquiring module is configured to acquire a current depth of a target object in the scene according to the depth image and the visible light image; the fourth acquiring module is configured to acquire a current coincidence degree, at the current depth, of the overlapping region between the field of view region of the depth image and the field of view region of the visible light image relative to the entire field of view region of the visible light image; the determining module is configured to determine whether the current coincidence degree is within a preset range; and the prompting module is configured to issue, when the current coincidence degree exceeds the preset range, a prompt message for the target object to adjust its depth.
The image capturing device of the embodiments of the present application includes a depth camera module, a visible light camera, and a processor. The depth camera module is configured to acquire a depth image of a current scene; the visible light camera is configured to acquire a visible light image of the current scene; and the processor is configured to: acquire a current depth of a target object in the scene according to the depth image and the visible light image; acquire a current coincidence degree, at the current depth, of the overlapping region between the field of view region of the depth image and the field of view region of the visible light image relative to the entire field of view region of the visible light image; determine whether the current coincidence degree is within a preset range; and, when the current coincidence degree exceeds the preset range, issue a prompt message for the target object to adjust its depth.
One or more non-volatile computer-readable storage media of the embodiments of the present application contain computer-executable instructions that, when executed by one or more processors, cause the processor to perform the following image acquisition steps: acquiring a depth image of a current scene; acquiring a visible light image of the current scene; acquiring a current depth of a target object in the scene according to the depth image and the visible light image; acquiring a current coincidence degree, at the current depth, of the overlapping region between the field of view region of the depth image and the field of view region of the visible light image relative to the entire field of view region of the visible light image; determining whether the current coincidence degree is within a preset range; and, when the current coincidence degree exceeds the preset range, issuing a prompt message for the target object to adjust its depth.
The computer device of the embodiments of the present application includes a memory and a processor. The memory stores computer-readable instructions that, when executed by the processor, cause the processor to perform the following image acquisition steps: acquiring a depth image of a current scene; acquiring a visible light image of the current scene; acquiring a current depth of a target object in the scene according to the depth image and the visible light image; acquiring a current coincidence degree, at the current depth, of the overlapping region between the field of view region of the depth image and the field of view region of the visible light image relative to the entire field of view region of the visible light image; determining whether the current coincidence degree is within a preset range; and, when the current coincidence degree exceeds the preset range, issuing a prompt message for the target object to adjust its depth.
Additional aspects and advantages of the embodiments of the present application will be set forth in part in the description below, and in part will become apparent from the description or be learned by practice of the embodiments of the present application.
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of an image acquisition method according to some embodiments of the present application.
FIG. 2 is a schematic block diagram of an image acquisition apparatus according to some embodiments of the present application.
FIG. 3 is a schematic structural diagram of an image capturing device according to some embodiments of the present application.
FIG. 4 is a schematic diagram illustrating the principle of an image capturing device according to some embodiments of the present application.
FIG. 5 is a schematic structural diagram of a computer device according to some embodiments of the present application.
FIG. 6 is a schematic flowchart of an image acquisition method according to some embodiments of the present application.
FIG. 7 is a schematic block diagram of an image acquisition apparatus according to some embodiments of the present application.
FIG. 8 is a schematic diagram illustrating the principle of an image acquisition method according to some embodiments of the present application.
FIG. 9 is a schematic block diagram of a computer-readable storage medium and a processor according to an embodiment of the present application.
FIG. 10 is a schematic block diagram of a computer device according to an embodiment of the present application.
The embodiments of the present application are further described below with reference to the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present application, and should not be construed as limiting it.
Referring to FIG. 1, the image acquisition method of the embodiments of the present application includes:
011: acquiring a depth image of the current scene;
012: acquiring a visible light image of the current scene;
013: acquiring the current depth of the target object in the scene according to the depth image and the visible light image;
014: acquiring the current coincidence degree, at the current depth, of the overlapping region between the field of view region of the depth image and the field of view region of the visible light image relative to the entire field of view region of the visible light image;
015: determining whether the current coincidence degree is within the preset range; and
016: when the current coincidence degree exceeds the preset range, issuing a prompt message for the target object to adjust its depth.
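Purely as an illustrative sketch, and not a normative part of any embodiment, the flow of steps 011 to 016 can be read as the following Python function; every parameter name here is a hypothetical stand-in for the camera modules and processor logic described below.

```python
from typing import Callable, Optional, Tuple

def acquire_image(capture_depth: Callable[[], object],
                  capture_visible: Callable[[], object],
                  depth_of_target: Callable[[object, object], float],
                  coincidence_at: Callable[[float], float],
                  preset_range: Tuple[float, float] = (0.80, 0.90)
                  ) -> Optional[Tuple[object, object]]:
    depth_image = capture_depth()        # step 011: depth image of the current scene
    visible_image = capture_visible()    # step 012: visible light image of the scene
    d = depth_of_target(depth_image, visible_image)  # step 013: current target depth
    w = coincidence_at(d)                # step 014: current coincidence degree
    lo, hi = preset_range                # step 015: is w within the preset range?
    if w < lo or w > hi:                 # step 016: prompt the user to adjust depth
        print("move farther away" if w < lo else "move closer")
        return None
    return depth_image, visible_image    # within range: keep the image pair
```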
Referring to FIGS. 3 and 6, in some embodiments the image acquisition method is applied to an image capturing device 100 that includes a visible light camera 30 and an infrared camera 24, and step 014 includes:
0141: obtaining the overlapping region and the non-overlapping regions of the field of view region of the depth image and the field of view region of the visible light image according to the current depth, the field of view of the visible light camera 30, the field of view of the infrared camera 24, and the preset distance L (see FIG. 8) between the visible light camera 30 and the infrared camera 24; and
0142: calculating the ratio of the overlapping region of the field of view region of the visible light image to the entire field of view region of the visible light image to obtain the current coincidence degree.
Referring to FIG. 4, in some embodiments, when the preset distance is the same, the current coincidence degree increases as the depth of the target object increases; or/and, when the current depth is the same, the current coincidence degree decreases as the preset distance increases.
In some embodiments, the image acquisition method further includes: when the current coincidence degree is smaller than the minimum value of the preset range, issuing a prompt message for the target object to increase its depth; and/or, when the current coincidence degree is larger than the maximum value of the preset range, issuing a prompt message for the target object to decrease its depth.
Referring to FIG. 2, the image acquisition apparatus 10 of the embodiments of the present application includes a first acquiring module 11, a second acquiring module 12, a third acquiring module 13, a fourth acquiring module 14, a determining module 15, and a prompting module 16. The first acquiring module 11 is configured to acquire a depth image of the current scene. The second acquiring module 12 is configured to acquire a visible light image of the current scene. The third acquiring module 13 is configured to acquire the current depth of the target object in the scene according to the depth image and the visible light image. The fourth acquiring module 14 is configured to acquire the current coincidence degree, at the current depth, of the overlapping region between the field of view region of the depth image and the field of view region of the visible light image relative to the entire field of view region of the visible light image. The determining module 15 is configured to determine whether the current coincidence degree is within the preset range. The prompting module 16 is configured to issue, when the current coincidence degree exceeds the preset range, a prompt message for the target object to adjust its depth.
Referring to FIGS. 3 and 7, in some embodiments the image acquisition apparatus 10 is applied to an image capturing device 100 that includes a visible light camera 30 and an infrared camera 24, and the fourth acquiring module 14 includes an acquiring unit 141 and a calculating unit 142. The acquiring unit 141 is configured to obtain the overlapping region and the non-overlapping regions of the field of view region of the depth image and the field of view region of the visible light image according to the field of view of the visible light camera 30, the field of view of the infrared camera 24, and the preset distance L between the visible light camera 30 and the infrared camera 24. The calculating unit 142 is configured to calculate the ratio of the overlapping region of the field of view region of the visible light image to the entire field of view region of the visible light image to obtain the current coincidence degree.
Referring to FIG. 4, in some embodiments, when the preset distance is the same, the current coincidence degree increases as the depth of the target object increases; or/and, when the current depth is the same, the current coincidence degree decreases as the preset distance increases.
Referring to FIG. 2, in some embodiments, the prompting module 16 is further configured to issue, when the current coincidence degree is smaller than the minimum value of the preset range, a prompt message for the target object to increase its depth; and/or to issue, when the current coincidence degree is larger than the maximum value of the preset range, a prompt message for the target object to decrease its depth.
Referring to FIG. 3, the image capturing device 100 of the embodiments of the present application includes a depth camera module 20, a visible light camera 30, and a processor 40. The depth camera module 20 is configured to acquire a depth image of the current scene. The visible light camera 30 is configured to acquire a visible light image of the current scene. The processor 40 is configured to: acquire the current depth of the target object in the scene according to the depth image and the visible light image; acquire the current coincidence degree, at the current depth, of the overlapping region between the field of view region of the depth image and the field of view region of the visible light image relative to the entire field of view region of the visible light image; determine whether the current coincidence degree is within the preset range; and, when the current coincidence degree exceeds the preset range, issue a prompt message for the target object to adjust its depth.
Referring to FIG. 3, in some embodiments the depth camera module 20 includes an infrared camera 24, and the processor 40 may further be configured to: obtain the overlapping region and the non-overlapping regions of the field of view region of the depth image and the field of view region of the visible light image according to the current depth, the field of view of the visible light camera 30, the field of view of the infrared camera 24, and the preset distance L between the visible light camera 30 and the infrared camera 24; and calculate the ratio of the overlapping region of the field of view region of the visible light image to the entire field of view region of the visible light image to obtain the current coincidence degree.
Referring to FIG. 4, in some embodiments, when the preset distance is the same, the current coincidence degree increases as the depth of the target object increases; or/and, when the current depth is the same, the current coincidence degree decreases as the preset distance increases.
Referring to FIG. 3, in some embodiments, the processor 40 is further configured to: when the current coincidence degree is smaller than the minimum value of the preset range, issue a prompt message for the target object to increase its depth; and/or, when the current coincidence degree is larger than the maximum value of the preset range, issue a prompt message for the target object to decrease its depth.
Referring to FIGS. 1 and 9, the one or more non-volatile computer-readable storage media 300 of the embodiments of the present application contain computer-executable instructions 302 that, when executed by one or more processors 40, cause the processor 40 to perform the following image acquisition steps:
011: acquiring a depth image of the current scene;
012: acquiring a visible light image of the current scene;
013: acquiring the current depth of the target object in the scene according to the depth image and the visible light image;
014: acquiring the current coincidence degree, at the current depth, of the overlapping region between the field of view region of the depth image and the field of view region of the visible light image relative to the entire field of view region of the visible light image;
015: determining whether the current coincidence degree is within the preset range; and
016: when the current coincidence degree exceeds the preset range, issuing a prompt message for the target object to adjust its depth.
Referring to FIGS. 3 and 9, in some embodiments the computer-readable storage medium 300 is applied to an image capturing device 100 that includes a visible light camera 30 and an infrared camera 24, and when the computer-executable instructions 302 are executed by the one or more processors 40, the processor 40 further performs the following steps:
0141: obtaining the overlapping region and the non-overlapping regions of the field of view region of the depth image and the field of view region of the visible light image according to the current depth, the field of view of the visible light camera 30, the field of view of the infrared camera 24, and the preset distance L (see FIG. 8) between the visible light camera 30 and the infrared camera 24; and
0142: calculating the ratio of the overlapping region of the field of view region of the visible light image to the entire field of view region of the visible light image to obtain the current coincidence degree.
Referring to FIG. 4, in some embodiments, when the preset distance is the same, the current coincidence degree increases as the depth of the target object increases; or/and, when the current depth is the same, the current coincidence degree decreases as the preset distance increases.
Referring to FIG. 9, in some embodiments, when the computer-executable instructions 302 are executed by the one or more processors 40, the processor 40 further performs the following steps: when the current coincidence degree is smaller than the minimum value of the preset range, issuing a prompt message for the target object to increase its depth; and/or, when the current coincidence degree is larger than the maximum value of the preset range, issuing a prompt message for the target object to decrease its depth.
Referring to FIG. 10, the computer device 1000 of the embodiments of the present application includes a memory 110 and a processor 40. The memory 110 stores computer-readable instructions 111 that, when executed by the processor 40, cause the processor 40 to perform the following image acquisition steps:
011: acquiring a depth image of the current scene;
012: acquiring a visible light image of the current scene;
013: acquiring the current depth of the target object in the scene according to the depth image and the visible light image;
014: acquiring the current coincidence degree, at the current depth, of the overlapping region between the field of view region of the depth image and the field of view region of the visible light image relative to the entire field of view region of the visible light image;
015: determining whether the current coincidence degree is within the preset range; and
016: when the current coincidence degree exceeds the preset range, issuing a prompt message for the target object to adjust its depth.
Referring to FIG. 10, in some embodiments the computer device 1000 includes a visible light camera 30 and an infrared camera 24, and when the computer-readable instructions 111 are executed by the processor 40, the processor 40 further performs the following steps:
0141: obtaining the overlapping region and the non-overlapping regions of the field of view region of the depth image and the field of view region of the visible light image according to the current depth, the field of view of the visible light camera 30, the field of view of the infrared camera 24, and the preset distance L (see FIG. 8) between the visible light camera 30 and the infrared camera 24; and
0142: calculating the ratio of the overlapping region of the field of view region of the visible light image to the entire field of view region of the visible light image to obtain the current coincidence degree.
Referring to FIG. 4, in some embodiments, when the preset distance is the same, the current coincidence degree increases as the depth of the target object increases; or/and, when the current depth is the same, the current coincidence degree decreases as the preset distance increases.
Referring to FIG. 10, in some embodiments, when the computer-readable instructions 111 are executed by the processor 40, the processor 40 further performs the following steps: when the current coincidence degree is smaller than the minimum value of the preset range, issuing a prompt message for the target object to increase its depth; and/or, when the current coincidence degree is larger than the maximum value of the preset range, issuing a prompt message for the target object to decrease its depth.
Referring to FIG. 1, the present application provides an image acquisition method, which includes:
011: acquiring a depth image of the current scene;
012: acquiring a visible light image of the current scene;
013: acquiring the current depth of the target object in the scene according to the depth image and the visible light image;
014: acquiring the current coincidence degree, at the current depth, of the overlapping region between the field of view region of the depth image and the field of view region of the visible light image relative to the entire field of view region of the visible light image;
015: determining whether the current coincidence degree is within the preset range; and
016: when the current coincidence degree exceeds the preset range, issuing a prompt message for the target object to adjust its depth.
Referring also to FIG. 2, the present application further provides an image acquisition apparatus 10. The image acquisition apparatus 10 includes a first acquiring module 11, a second acquiring module 12, a third acquiring module 13, a fourth acquiring module 14, a determining module 15, and a prompting module 16. The first acquiring module 11 is configured to acquire a depth image of the current scene. The second acquiring module 12 is configured to acquire a visible light image of the current scene. The third acquiring module 13 is configured to acquire the current depth of the target object in the scene according to the depth image and the visible light image. The fourth acquiring module 14 is configured to acquire the current coincidence degree, at the current depth, of the overlapping region between the field of view region of the depth image and the field of view region of the visible light image relative to the entire field of view region of the visible light image. The determining module 15 is configured to determine whether the current coincidence degree is within the preset range. The prompting module 16 is configured to issue, when the current coincidence degree exceeds the preset range, a prompt message for the target object to adjust its depth.
Referring also to FIG. 3, the present application further provides an image capturing device 100. The image capturing device 100 includes a depth camera module 20, a visible light camera 30, and a processor 40. The depth camera module 20 is configured to acquire a depth image of the current scene. The visible light camera 30 is configured to acquire a visible light image of the current scene. The processor 40 is configured to: acquire the current depth of the target object in the scene according to the depth image and the visible light image; acquire the current coincidence degree, at the current depth, of the overlapping region between the field of view region of the depth image and the field of view region of the visible light image relative to the entire field of view region of the visible light image; determine whether the current coincidence degree is within the preset range; and, when the current coincidence degree exceeds the preset range, issue a prompt message for the target object to adjust its depth. In other words, step 011 may be implemented by the depth camera module 20, step 012 by the visible light camera 30, and steps 013 to 016 by the processor 40.
The image capturing device 100 may be a front-facing image capturing device 100 or a rear-facing image capturing device 100.
Specifically, in this embodiment, the depth camera module 20 is a structured light camera module that includes a structured light projector 22 and an infrared camera 24. The structured light projector 22 projects an infrared light pattern into the scene, and the infrared camera 24 captures the infrared light pattern as modulated by the target object 200 (shown in FIG. 4). The processor 40 then computes the depth image from that infrared light pattern using an image matching algorithm. When the image capturing device 100 includes the depth camera module 20, it also includes the visible light camera 30, which is configured to acquire a visible light image of the scene containing the color information of the objects in the scene.
Alternatively, in other embodiments, the depth camera module 20 may be a time-of-flight (TOF) sensor module, which includes a laser projector 22 and an infrared camera 24. The laser projector 22 projects uniform light onto the scene; the infrared camera 24 receives the reflected light and records the time at which the light was emitted and the time at which it was received. The processor 40 calculates the depth pixel value corresponding to each object in the scene from the speed of light and the time difference between emission and reception, and combines multiple depth pixel values into a depth image. When the image capturing device 100 includes the TOF sensor module, it also includes the visible light camera 30, which is configured to acquire a visible light image of the scene containing the color information of the objects in the scene.
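As a minimal sketch of the time-of-flight calculation just described, assuming emission and reception timestamps in seconds (a detail the text does not specify), the depth of one sample follows from the round-trip travel time:

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_depth(t_emitted: float, t_received: float) -> float:
    """One TOF sample: the light travels out and back, so the depth is
    half of (speed of light x elapsed time)."""
    return SPEED_OF_LIGHT * (t_received - t_emitted) / 2.0

# A round trip of about 3.34 nanoseconds corresponds to a depth of ~0.5 m.
print(tof_depth(0.0, 3.34e-9))
```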
Referring to FIG. 4, the overlapping region between the field of view region of the depth image and the field of view region of the visible light image at the current depth is the region where the field of view of the infrared camera 24 and the field of view of the visible light camera 30 overlap at that depth. The non-overlapping regions include the non-overlapping region of the visible light image's field of view region and the non-overlapping region of the depth image's field of view region: the former contains only scenery that the visible light camera 30 can capture and the infrared camera 24 cannot, while the latter contains only scenery that the infrared camera 24 can capture and the visible light camera 30 cannot. The current coincidence degree is the ratio of the overlapping region between the two field of view regions at the current depth to the entire field of view region of the visible light image. For example, at depth h1, the current coincidence degree W1 is the ratio of the overlapping region C1 between the depth image's field of view region S1 and the visible light image's field of view region R1 to R1, i.e., W1 = C1/R1; at depth h2, the current coincidence degree W2 is the ratio of the overlapping region C2 between S2 and R2 to R2, i.e., W2 = C2/R2.
Referring to FIG. 5, the image capturing device 100 of the embodiments of the present application may be applied in the computer device 1000 of the embodiments of the present application; that is, the computer device 1000 may include the image capturing device 100, and the image acquisition apparatus 10 (shown in FIG. 2) may be provided in the computer device 1000. The computer device 1000 includes mobile phones, tablet computers, notebook computers, smart bands, smart watches, smart helmets, smart glasses, and the like. In the embodiments of the present application, the computer device 1000 is described by taking a mobile phone as an example; it should be understood that the specific form of the computer device 1000 is not limited to a mobile phone.
When the image capturing device 100 is a front-facing image capturing device 100, the image acquisition method of the present application can be applied to face-recognition scenarios such as self-portraits, face unlocking, face encryption, and face payment, in which the target object is the user's face. When a user captures a face with the depth camera (for example, for a selfie or face recognition), the installation positions of the visible light camera 30 and the infrared (IR) camera 24 are a certain distance apart, so the field of view of the visible light camera 30 and the field of view of the infrared camera 24 do not fully coincide. In particular, when the user is too close to the depth camera, the face extends beyond the overlapping region of the two fields of view, so complete face depth data cannot be acquired. In one example, when the current coincidence degree is smaller than the minimum value of the preset range, a prompt message is issued for the target object to increase its depth; and/or, when the current coincidence degree is larger than the maximum value of the preset range, a prompt message is issued for the target object to decrease its depth. In this case, the prompting module 16 is further configured to issue the corresponding prompt message in each situation. For example, suppose the preset range is [80%, 90%]. Within this range (say the depth between the face and the depth camera, i.e., the image acquisition apparatus 10, the image capturing device 100, or the computer device 1000, is 40 cm and the current coincidence degree is 85%), the mobile phone (depth camera) can obtain relatively complete and accurate depth data; the current distance between the face and the phone is appropriate, the phone need not issue a prompt, and the user need not adjust the depth. When the current coincidence degree is below 80%, the face is too close to the phone: for example, at a depth of 20 cm the current coincidence degree is 65%, below the minimum value 80% of the preset range, so the depth camera covers only part of the face and, at this distance, the phone can collect depth data for only part of the face. The phone therefore issues a prompt asking the user to increase the current distance between the user and the phone. When the current coincidence degree is above 90%, the face is too far from the phone: for example, at a depth of 100 cm the current coincidence degree is 95%, above the maximum value 90% of the preset range, and the density of the laser pattern projected by the depth camera is low. At this distance the phone can collect complete face depth data, but the depth camera must increase its projection power to raise the density of the laser pattern, making the phone consume more power. The phone therefore issues a prompt asking the user to decrease the current distance between the user and the phone. Accordingly, the processor 40 is further configured to perform the following steps: when the current coincidence degree is smaller than the minimum value of the preset range, issue a prompt message for the target object to increase its depth; and/or, when the current coincidence degree is larger than the maximum value of the preset range, issue a prompt message for the target object to decrease its depth.
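The prompt logic of this example can be sketched as follows; this is a non-authoritative illustration, and the [80%, 90%] range together with the sample coincidence values of 65%, 85%, and 95% are simply the figures quoted above:

```python
from typing import Optional

PRESET_RANGE = (0.80, 0.90)  # the example preset range [80%, 90%]

def prompt_for(coincidence: float) -> Optional[str]:
    lo, hi = PRESET_RANGE
    if coincidence < lo:   # e.g. 65% at ~20 cm: only part of the face is covered
        return "Too close: please increase your distance to the device."
    if coincidence > hi:   # e.g. 95% at ~100 cm: complete data, but more power used
        return "Too far: please decrease your distance to the device."
    return None            # e.g. 85% at ~40 cm: the distance is appropriate

print(prompt_for(0.65))  # too close
print(prompt_for(0.85))  # None -> no prompt needed
print(prompt_for(0.95))  # too far
```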
In summary, the image acquisition method, the image acquisition apparatus 10, the image capturing device 100, and the computer device 1000 of the embodiments of the present application determine, from the current depth of the target object, the current coincidence degree of the overlapping region between the field of view region of the depth image and that of the visible light image relative to the entire field of view region of the visible light image; determine whether the current coincidence degree is within the preset range; and, when it exceeds the preset range, issue a prompt message for the target object to adjust (that is, increase or decrease) its depth. In this way, the distance between the target object and the image capturing device 100 remains appropriate: it will not be too short, so the image capturing device 100 can acquire complete depth data, and it will not be too long, so the image capturing device 100 can acquire relatively accurate depth data even at low power.
Referring to FIG. 6, in some embodiments, step 014 includes the following sub-steps:
0141: obtaining the overlapping region and the non-overlapping regions of the field of view region of the depth image and the field of view region of the visible light image according to the current depth, the field of view of the visible light camera 30, the field of view of the infrared camera 24, and the preset distance L (see FIG. 8) between the visible light camera 30 and the infrared camera 24; and
0142: calculating the ratio of the overlapping region of the field of view region of the visible light image to the entire field of view region of the visible light image to obtain the current coincidence degree.
Referring also to FIG. 7, in some embodiments the fourth acquiring module 14 includes an acquiring unit 141 and a calculating unit 142. The acquiring unit 141 is configured to obtain the overlapping region and the non-overlapping regions of the field of view region of the depth image and the field of view region of the visible light image according to the field of view of the visible light camera 30, the field of view of the infrared camera 24, and the preset distance L between the visible light camera 30 and the infrared camera 24. The calculating unit 142 is configured to calculate the ratio of the overlapping region of the field of view region of the visible light image to the entire field of view region of the visible light image to obtain the current coincidence degree.
Referring also to FIG. 5, the processor 40 may further be configured to: obtain the overlapping region and the non-overlapping regions of the field of view region of the depth image and the field of view region of the visible light image according to the current depth, the field of view of the visible light camera 30, the field of view of the infrared camera 24, and the preset distance L between the two cameras; and calculate the ratio of the overlapping region of the field of view region of the visible light image to the entire field of view region of the visible light image to obtain the current coincidence degree. In other words, steps 0141 and 0142 may be implemented by the processor 40.
Referring to FIG. 8, specifically, the field of view angle includes a horizontal field of view angle α and a vertical field of view angle β, which together determine the field of view. The embodiments of the present application are described for the case where the infrared camera 24 and the visible light camera 30 have the same vertical field of view angle β but different horizontal field of view angles α. The principle is similar when the two cameras have the same horizontal field of view angle α but different vertical field of view angles β, or when both their horizontal and vertical field of view angles differ, and is not repeated here.
Referring also to FIG. 5, the field of view of the visible light camera 30, the field of view of the infrared camera 24, and the preset distance L between them are already fixed when the image capturing device 100 or the computer device 1000 leaves the factory, and the sizes of the overlapping and non-overlapping regions of the depth image's field of view region and the visible light image's field of view region correspond to the current depth of the target object, the two cameras' fields of view, and the preset distance L. For example, with the current depth fixed and both cameras' fields of view unchanged, as the preset distance L between the visible light camera 30 and the infrared camera 24 grows, the overlapping region of the two field of view regions shrinks while the non-overlapping regions grow. As another example, with the current depth, the field of view of the visible light camera 30, and the preset distance L fixed, as the field of view of the infrared camera 24 widens, the overlapping region grows while the non-overlapping regions shrink. As yet another example, with the current depth, the field of view of the infrared camera 24, and the preset distance L fixed, as the field of view of the visible light camera 30 widens, the overlapping region grows while the non-overlapping regions shrink.
In this way, the overlapping region of the two field of view regions, the non-overlapping region of the visible light image's field of view region, and the non-overlapping region of the depth image's field of view region can be determined from the current depth of the target object 200 and the preset factory parameters of the visible light camera 30 and the infrared camera 24. The algorithm is simple, so the sizes of the overlapping and non-overlapping regions of the visible light image's field of view region can be determined quickly, and the current coincidence degree is then calculated according to the formula above, as sketched below.
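The sketch below works through this geometry under simplifying assumptions that the text leaves implicit: parallel optical axes, identical vertical field of view angles β, and a purely horizontal baseline L, so that the area ratio of step 0142 reduces to a ratio of horizontal strip widths. All numeric values in the demo are invented for illustration.

```python
import math

def coincidence_degree(depth: float, fov_visible_deg: float,
                       fov_ir_deg: float, baseline: float) -> float:
    """Steps 0141/0142 under the stated assumptions (depth and baseline in
    metres). At a given depth each camera sees a horizontal strip of width
    2*depth*tan(alpha/2); the two strips' centres are offset by the baseline."""
    w_vis = 2.0 * depth * math.tan(math.radians(fov_visible_deg) / 2.0)
    w_ir = 2.0 * depth * math.tan(math.radians(fov_ir_deg) / 2.0)
    # Step 0141: overlap of the two intervals, centred at 0 and at `baseline`.
    overlap = max(0.0, min(w_vis / 2.0, baseline + w_ir / 2.0)
                       - max(-w_vis / 2.0, baseline - w_ir / 2.0))
    # Step 0142: ratio of the overlap to the visible image's field of view.
    return overlap / w_vis

# The coincidence degree grows with depth at a fixed baseline ...
for h in (0.2, 0.4, 1.0):
    print(h, round(coincidence_degree(h, 70.0, 60.0, 0.03), 3))
# ... and shrinks as the baseline grows at a fixed depth.
for L in (0.05, 0.10, 0.15):
    print(L, round(coincidence_degree(0.4, 70.0, 60.0, L), 3))
```

The two loops reproduce the monotonic relations stated in the text: the first prints values that increase with depth, the second prints values that decrease as the preset distance increases.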
Referring again to FIG. 4, in some embodiments, when the preset distance between the visible light camera 30 and the infrared camera 24 is the same, the current coincidence degree increases as the depth of the target object 200 increases; and/or, when the current depth of the target object 200 is the same, the current coincidence degree decreases as the preset distance between the visible light camera 30 and the infrared camera 24 increases. For example, the current coincidence degree at depth h1 is smaller than the current coincidence degree at depth h2.
In some embodiments, the preset range can be customized.
Specifically, referring to FIG. 3, when a user acquires a three-dimensional image with the image capturing device 100 (for example, when photographing a building) and wants a three-dimensional image covering a larger field of view, the user can manually set the preset range to 100%. The field of view of the infrared camera 24 then completely covers that of the visible light camera 30, depth information can be obtained for every region of the visible light image, and the synthesized three-dimensional image contains the entire scene of the visible light image. For face recognition, however, the face occupies only a small part of the scene, so there is no need to set the preset range to 100%; setting it to [80%, 90%] is enough to capture the whole face in the synthesized three-dimensional image. In this way, the user can customize the preset range to meet different shooting needs.
Referring to FIG. 9, an embodiment of the present application further provides a computer-readable storage medium 300, which may be applied in the image capturing device 100. One or more non-volatile computer-readable storage media 300 contain computer-executable instructions 302 that, when executed by one or more processors 40, cause the processor 40 to perform the image acquisition method of any of the foregoing embodiments, for example: 011: acquiring a depth image of the current scene; 012: acquiring a visible light image of the current scene; 013: acquiring the current depth of the target object in the scene according to the depth image and the visible light image; 014: acquiring the current coincidence degree, at the current depth, of the overlapping region between the field of view region of the depth image and the field of view region of the visible light image relative to the entire field of view region of the visible light image; 015: determining whether the current coincidence degree is within the preset range; and 016: when the current coincidence degree exceeds the preset range, issuing a prompt message for the target object to adjust its depth.
The non-volatile computer-readable storage medium 300 of the embodiments of the present application determines, from the current depth of the target object, the current coincidence degree of the overlapping region between the field of view region of the depth image and that of the visible light image relative to the entire field of view region of the visible light image; determines whether the current coincidence degree is within the preset range; and, when it exceeds the preset range, issues a prompt message for the target object to adjust (that is, increase or decrease) its depth. In this way, the distance between the target object and the image capturing device 100 remains appropriate: it will not be too short, so the image capturing device 100 can acquire complete depth data, and it will not be too long, so the image capturing device 100 can acquire relatively accurate depth data even at low power.
Referring to FIG. 10, an embodiment of the present application provides a computer device 1000. The computer device 1000 includes a structured light projector 22, an infrared camera 24, a visible light camera 30, a processor 40, an infrared fill light 70, a display screen 80, a speaker 90, and a memory 110, where the processor 40 includes a microprocessor 42 and an application processor 44.
A visible light image of the target object may be captured by the visible light camera 30, which may be connected to the application processor 44 via an inter-integrated circuit (I2C) bus 60 and a mobile industry processor interface (MIPI) 32. The application processor 44 may be used to enable the visible light camera 30, power it down, or reset it. The visible light camera 30 may be used to capture color images; the application processor 44 obtains a color image from the visible light camera 30 through the mobile industry processor interface 32 and stores the color image in the rich execution environment 444.
An infrared image of the target object may be captured by the infrared camera 24, which may be connected to the application processor 44; the application processor 44 may be used to switch the power of the infrared camera 24 on and off, power it down (pwdn), or reset it. The infrared camera 24 may also be connected to the microprocessor 42 via the inter-integrated circuit (I2C) bus 60; the microprocessor 42 may provide the infrared camera 24 with the clock information for capturing infrared images, and the infrared images captured by the infrared camera 24 may be transferred to the microprocessor 42 through a mobile industry processor interface (MIPI) 422. The infrared fill light 70 may be used to emit infrared light outward, which is reflected by the user and received by the infrared camera 24. The infrared fill light 70 may be connected to the application processor 44 via the I2C bus 60, and the application processor 44 may be used to enable the infrared fill light 70; the infrared fill light 70 may also be connected to the microprocessor 42, specifically to a pulse width modulation (PWM) interface 424 of the microprocessor 42.
The structured light projector 22 can project laser light onto the target object. The structured light projector 22 may be connected to the application processor 44 via the I2C bus 60, and the application processor 44 may be used to enable the structured light projector 22; the structured light projector 22 may also be connected to the microprocessor 42, specifically to the pulse width modulation interface 424 of the microprocessor 42.
The microprocessor 42 may be a processing chip connected to the application processor 44. Specifically, the application processor 44 may be used to reset, wake, and debug the microprocessor 42, and the microprocessor 42 may be connected to the application processor 44 via the mobile industry processor interface 422. Specifically, the microprocessor 42 is connected via the mobile industry processor interface 422 to the trusted execution environment 442 of the application processor 44, so that data in the microprocessor 42 can be transferred directly into the trusted execution environment 442 for storage. The code and memory regions in the trusted execution environment 442 are controlled by an access control unit and cannot be accessed by programs in the rich execution environment (REE) 444; both the trusted execution environment 442 and the rich execution environment 444 may be formed in the application processor 44.
The microprocessor 42 may obtain an infrared image by receiving the infrared image captured by the infrared camera 24, and may transfer the infrared image to the trusted execution environment 442 through the mobile industry processor interface 422. An infrared image output from the microprocessor 42 does not enter the rich execution environment 444 of the application processor 44, so the infrared image cannot be obtained by other programs, which improves the information security of the computer device 1000. An infrared image stored in the trusted execution environment 442 may serve as an infrared template.
After controlling the structured light projector 22 to project laser light onto the target object, the microprocessor 42 may also control the infrared camera 24 to capture the laser pattern modulated by the target object, and then obtains that laser pattern through the mobile industry processor interface 422. The microprocessor 42 processes the laser pattern to obtain a depth image: specifically, calibration information of the laser projected by the structured light projector 22 may be stored in the microprocessor 42, and the microprocessor 42 obtains the depths of different positions on the target object by processing the laser pattern together with the calibration information, thereby forming the depth image. Once obtained, the depth image is transferred to the trusted execution environment 442 through the mobile industry processor interface 422. A depth image stored in the trusted execution environment 442 may serve as a depth template.
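The description does not disclose the matching algorithm itself. As an assumption-labelled illustration only, a common structured-light formulation recovers depth by triangulation from the pixel disparity between the captured laser pattern and the stored calibration pattern:

```python
def triangulated_depth(disparity_px: float, focal_length_px: float,
                       baseline_m: float) -> float:
    """Textbook triangulation (an assumed model, not the patent's algorithm):
    depth is inversely proportional to the pixel disparity between the
    observed laser pattern and the calibrated reference pattern."""
    if disparity_px <= 0.0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Hypothetical numbers: f = 600 px, projector-camera baseline = 5 cm,
# measured disparity = 60 px  ->  depth = 0.5 m.
print(triangulated_depth(60.0, 600.0, 0.05))
```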
In the computer device 1000, both the acquired infrared template and the acquired depth template are stored in the trusted execution environment 442, where the verification templates are difficult to tamper with or misappropriate, so the information in the computer device 1000 enjoys a high level of security.
In one example, the microprocessor 42 and the application processor 44 may be two separate, mutually independent structures; in another example, the microprocessor 42 and the application processor 44 may be integrated into a single structure to form one processor 40.
The display screen 80 may be a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display. When the current coincidence degree exceeds the preset range, the display screen 80 may be used to present a graphic or text prompt, which is stored in the computer device 1000. In one example, the prompt is text only, for example: "You are currently too close to the computer device 1000; please increase the distance between the computer device 1000 and yourself." or "You are currently too far from the computer device 1000; please decrease the distance between the computer device 1000 and yourself." In another example, the display screen 80 shows a box or circle corresponding to the preset range; for example, if the preset range is [80%, 90%], the box or circle occupies 85% of the entire display screen 80, accompanied by the text "Please change the distance between the computer device 1000 and yourself until your face stays within the box or circle."
The speaker 90 may be provided on the computer device 1000, or may be a peripheral device connected to the computer device 1000, such as a loudspeaker. When the current coincidence degree exceeds the preset range, the speaker 90 may be used to issue a voice prompt, which is stored in the computer device 1000. In one example, the voice prompt may be "You are currently too close to the computer device 1000; please increase the distance between the computer device 1000 and yourself." or "You are currently too far from the computer device 1000; please decrease the distance between the computer device 1000 and yourself." In this embodiment, the prompt message may be graphic/text only, voice only, or both graphic/text and voice.
The image acquisition method of any of the foregoing embodiments can be implemented with the processor 40 of FIG. 10. For example, the processor 40 may be used to perform the following steps: 011: acquiring a depth image of the current scene; 012: acquiring a visible light image of the current scene; 013: acquiring the current depth of the target object in the scene according to the depth image and the visible light image; 014: acquiring the current coincidence degree, at the current depth, of the overlapping region between the field of view region of the depth image and the field of view region of the visible light image relative to the entire field of view region of the visible light image; 015: determining whether the current coincidence degree is within the preset range; and 016: when the current coincidence degree exceeds the preset range, issuing a prompt message for the target object to adjust its depth. As another example, the processor 40 of FIG. 10 may also implement: 0141: obtaining the overlapping and non-overlapping regions of the field of view regions of the depth image and the visible light image according to the current depth, the field of view of the visible light camera 30, the field of view of the infrared camera 24, and the preset distance between the two cameras; and 0142: calculating the ratio of the overlapping region of the field of view region of the visible light image to the entire field of view region of the visible light image to obtain the current coincidence degree.
The memory 110 is connected to both the microprocessor 42 and the application processor 44. The memory 110 stores computer-readable instructions 111; when the computer-readable instructions 111 are executed by the processor 40, the processor 40 performs the image acquisition method of any of the foregoing embodiments. Specifically, the microprocessor 42 may be used to execute the method in step 011, and the application processor 44 may be used to execute the methods in steps 011, 012, 013, 014, 015, 016, 0141, and 0142. Alternatively, the microprocessor 42 may be used to execute the methods in steps 011, 012, 013, 014, 015, 016, 0141, and 0142. Alternatively, the microprocessor 42 may be used to execute the method of at least one of steps 011, 012, 013, 014, 015, 016, 0141, and 0142, and the application processor 44 may be used to execute the methods of the remaining steps.
Although the embodiments of the present application have been shown and described above, it should be understood that they are exemplary and should not be construed as limiting the present application. Those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application, which is defined by the claims and their equivalents.
Claims (20)
- An image acquisition method, characterized in that the image acquisition method comprises: acquiring a depth image of a current scene; acquiring a visible light image of the current scene; acquiring a current depth of a target object in the scene according to the depth image and the visible light image; acquiring a current coincidence degree, at the current depth, of an overlapping region between a field of view region of the depth image and a field of view region of the visible light image relative to the entire field of view region of the visible light image; determining whether the current coincidence degree is within a preset range; and, when the current coincidence degree exceeds the preset range, issuing a prompt message for the target object to adjust its depth.
- The image acquisition method according to claim 1, characterized in that the image acquisition method is applied to an image capturing device comprising a visible light camera and an infrared camera, and the step of acquiring the current coincidence degree, at the current depth, of the overlapping region between the field of view region of the depth image and the field of view region of the visible light image relative to the entire field of view region of the visible light image comprises: obtaining the overlapping region and the non-overlapping regions of the field of view region of the depth image and the field of view region of the visible light image according to the current depth, the field of view of the visible light camera, the field of view of the infrared camera, and a preset distance between the visible light camera and the infrared camera; and calculating the ratio of the overlapping region of the field of view region of the visible light image to the entire field of view region of the visible light image to obtain the current coincidence degree.
- The image acquisition method according to claim 2, characterized in that, when the preset distance is the same, the current coincidence degree increases as the depth of the target object increases; or/and, when the current depth is the same, the current coincidence degree decreases as the preset distance increases.
- The image acquisition method according to claim 1, characterized in that the image acquisition method further comprises: when the current coincidence degree is smaller than a minimum value of the preset range, issuing a prompt message for the target object to increase its depth; and/or, when the current coincidence degree is larger than a maximum value of the preset range, issuing a prompt message for the target object to decrease its depth.
- An image acquisition apparatus, characterized in that the image acquisition apparatus comprises: a first acquiring module configured to acquire a depth image of a current scene; a second acquiring module configured to acquire a visible light image of the current scene; a third acquiring module configured to acquire a current depth of a target object in the scene according to the depth image and the visible light image; a fourth acquiring module configured to acquire a current coincidence degree, at the current depth, of an overlapping region between a field of view region of the depth image and a field of view region of the visible light image relative to the entire field of view region of the visible light image; a determining module configured to determine whether the current coincidence degree is within a preset range; and a prompting module configured to issue, when the current coincidence degree exceeds the preset range, a prompt message for the target object to adjust its depth.
- The image acquisition apparatus according to claim 5, characterized in that the image acquisition apparatus is applied to an image capturing device comprising a visible light camera and an infrared camera, and the fourth acquiring module comprises: an acquiring unit configured to obtain the overlapping region and the non-overlapping regions of the field of view region of the depth image and the field of view region of the visible light image according to the current depth, the field of view of the visible light camera, the field of view of the infrared camera, and a preset distance between the visible light camera and the infrared camera; and a calculating unit configured to calculate the ratio of the overlapping region of the field of view region of the visible light image to the entire field of view region of the visible light image to obtain the current coincidence degree.
- The image acquisition apparatus according to claim 6, characterized in that, when the preset distance is the same, the current coincidence degree increases as the depth of the target object increases; or/and, when the current depth is the same, the current coincidence degree decreases as the preset distance increases.
- The image acquisition apparatus according to claim 5, characterized in that the prompting module is further configured to issue, when the current coincidence degree is smaller than a minimum value of the preset range, a prompt message for the target object to increase its depth; and/or to issue, when the current coincidence degree is larger than a maximum value of the preset range, a prompt message for the target object to decrease its depth.
- An image capturing device, characterized in that the image capturing device comprises: a depth camera module configured to acquire a depth image of a current scene; a visible light camera configured to acquire a visible light image of the current scene; and a processor configured to: acquire a current depth of a target object in the scene according to the depth image and the visible light image; acquire a current coincidence degree, at the current depth, of an overlapping region between a field of view region of the depth image and a field of view region of the visible light image relative to the entire field of view region of the visible light image; determine whether the current coincidence degree is within a preset range; and, when the current coincidence degree exceeds the preset range, issue a prompt message for the target object to adjust its depth.
- The image capturing device according to claim 9, characterized in that the depth camera module comprises an infrared camera, and the processor is further configured to: obtain the overlapping region and the non-overlapping regions of the field of view region of the depth image and the field of view region of the visible light image according to the current depth, the field of view of the visible light camera, the field of view of the infrared camera, and a preset distance between the visible light camera and the infrared camera; and calculate the ratio of the overlapping region of the field of view region of the visible light image to the entire field of view region of the visible light image to obtain the current coincidence degree.
- The image capturing device according to claim 10, characterized in that, when the preset distance is the same, the current coincidence degree increases as the depth of the target object increases; or/and, when the current depth is the same, the current coincidence degree decreases as the preset distance increases.
- The image capturing device according to claim 9, characterized in that the processor is further configured to: when the current coincidence degree is smaller than a minimum value of the preset range, issue a prompt message for the target object to increase its depth; and/or, when the current coincidence degree is larger than a maximum value of the preset range, issue a prompt message for the target object to decrease its depth.
- One or more non-volatile computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processor to perform the following image acquisition steps: acquiring a depth image of a current scene; acquiring a visible light image of the current scene; acquiring a current depth of a target object in the scene according to the depth image and the visible light image; acquiring a current coincidence degree, at the current depth, of an overlapping region between a field of view region of the depth image and a field of view region of the visible light image relative to the entire field of view region of the visible light image; determining whether the current coincidence degree is within a preset range; and, when the current coincidence degree exceeds the preset range, issuing a prompt message for the target object to adjust its depth.
- The non-volatile computer-readable storage medium according to claim 13, characterized in that the computer-readable storage medium is applied to an image capturing device comprising a visible light camera and an infrared camera, and the computer-executable instructions, when executed by the one or more processors, further cause the processor to perform the following steps: obtaining the overlapping region and the non-overlapping regions of the field of view region of the depth image and the field of view region of the visible light image according to the current depth, the field of view of the visible light camera, the field of view of the infrared camera, and a preset distance between the visible light camera and the infrared camera; and calculating the ratio of the overlapping region of the field of view region of the visible light image to the entire field of view region of the visible light image to obtain the current coincidence degree.
- The non-volatile computer-readable storage medium according to claim 14, characterized in that, when the preset distance is the same, the current coincidence degree increases as the depth of the target object increases; or/and, when the current depth is the same, the current coincidence degree decreases as the preset distance increases.
- The non-volatile computer-readable storage medium according to claim 13, characterized in that the computer-executable instructions, when executed by the one or more processors, further cause the processor to perform the following steps: when the current coincidence degree is smaller than a minimum value of the preset range, issuing a prompt message for the target object to increase its depth; and/or, when the current coincidence degree is larger than a maximum value of the preset range, issuing a prompt message for the target object to decrease its depth.
- A computer device, comprising a memory and a processor, the memory storing computer-readable instructions that, when executed by the processor, cause the processor to perform the following image acquisition steps: acquiring a depth image of a current scene; acquiring a visible light image of the current scene; acquiring a current depth of a target object in the scene according to the depth image and the visible light image; acquiring a current coincidence degree, at the current depth, of an overlapping region between a field of view region of the depth image and a field of view region of the visible light image relative to the entire field of view region of the visible light image; determining whether the current coincidence degree is within a preset range; and, when the current coincidence degree exceeds the preset range, issuing a prompt message for the target object to adjust its depth.
- The computer device according to claim 17, characterized in that the computer device comprises a visible light camera and an infrared camera, and the computer-readable instructions, when executed by the processor, further cause the processor to perform the following steps: obtaining the overlapping region and the non-overlapping regions of the field of view region of the depth image and the field of view region of the visible light image according to the current depth, the field of view of the visible light camera, the field of view of the infrared camera, and a preset distance between the visible light camera and the infrared camera; and calculating the ratio of the overlapping region of the field of view region of the visible light image to the entire field of view region of the visible light image to obtain the current coincidence degree.
- The computer device according to claim 18, characterized in that, when the preset distance is the same, the current coincidence degree increases as the depth of the target object increases; or/and, when the current depth is the same, the current coincidence degree decreases as the preset distance increases.
- The computer device according to claim 17, characterized in that the computer-readable instructions, when executed by the processor, further cause the processor to perform the following steps: when the current coincidence degree is smaller than a minimum value of the preset range, issuing a prompt message for the target object to increase its depth; and/or, when the current coincidence degree is larger than a maximum value of the preset range, issuing a prompt message for the target object to decrease its depth.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP19815232.4A EP3796634A4 (en) | 2018-06-06 | 2019-01-08 | IMAGE CAPTURE METHOD AND DEVICE, IMAGE RECORDING DEVICE, COMPUTER DEVICE AND READABLE STORAGE MEDIUM |
US17/104,775 US20210084280A1 (en) | 2018-06-06 | 2020-11-25 | Image-Acquisition Method and Image-Capturing Device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810574253.8A CN108769476B (zh) | 2018-06-06 | 2018-06-06 | 图像获取方法及装置、图像采集装置、计算机设备和可读存储介质 |
CN201810574253.8 | 2018-06-06 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/104,775 Continuation US20210084280A1 (en) | 2018-06-06 | 2020-11-25 | Image-Acquisition Method and Image-Capturing Device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019233106A1 true WO2019233106A1 (zh) | 2019-12-12 |
Family
ID=63999031
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/070853 WO2019233106A1 (zh) | 2018-06-06 | 2019-01-08 | 图像获取方法及装置、图像采集装置、计算机设备和可读存储介质 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20210084280A1 (zh) |
EP (1) | EP3796634A4 (zh) |
CN (1) | CN108769476B (zh) |
WO (1) | WO2019233106A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115661668A (zh) * | 2022-12-13 | 2023-01-31 | 山东大学 | 一种辣椒花待授粉花朵识别方法、装置、介质及设备 |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108769476B (zh) * | 2018-06-06 | 2019-07-19 | Oppo广东移动通信有限公司 | 图像获取方法及装置、图像采集装置、计算机设备和可读存储介质 |
CN109241955B (zh) * | 2018-11-08 | 2022-04-19 | 联想(北京)有限公司 | 识别方法和电子设备 |
CN110415287B (zh) * | 2019-07-11 | 2021-08-13 | Oppo广东移动通信有限公司 | 深度图的滤波方法、装置、电子设备和可读存储介质 |
CN113126072B (zh) * | 2019-12-30 | 2023-12-29 | 浙江舜宇智能光学技术有限公司 | 深度相机及控制方法 |
CN111722240B (zh) * | 2020-06-29 | 2023-07-21 | 维沃移动通信有限公司 | 电子设备、对象跟踪方法及装置 |
CN115022661B (zh) * | 2022-06-02 | 2023-04-25 | 山西鑫博睿科技有限公司 | 一种视频直播环境监测分析调控方法、设备及计算机存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140152658A1 (en) * | 2012-12-05 | 2014-06-05 | Samsung Electronics Co., Ltd. | Image processing apparatus and method for generating 3d image thereof |
CN204481940U (zh) * | 2015-04-07 | 2015-07-15 | 北京市商汤科技开发有限公司 | 双目摄像头拍照移动终端 |
CN105530503A (zh) * | 2014-09-30 | 2016-04-27 | 光宝科技股份有限公司 | 深度图建立方法与多镜头相机系统 |
CN107862259A (zh) * | 2017-10-24 | 2018-03-30 | 重庆虚拟实境科技有限公司 | 人像采集方法及装置、终端装置和计算机可读存储介质 |
CN108769476A (zh) * | 2018-06-06 | 2018-11-06 | Oppo广东移动通信有限公司 | 图像获取方法及装置、图像采集装置、计算机设备和可读存储介质 |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4812510B2 (ja) * | 2006-05-17 | 2011-11-09 | アルパイン株式会社 | 車両周辺画像生成装置および撮像装置の測光調整方法 |
US10477184B2 (en) * | 2012-04-04 | 2019-11-12 | Lifetouch Inc. | Photography system with depth and position detection |
CN103390164B (zh) * | 2012-05-10 | 2017-03-29 | 南京理工大学 | 基于深度图像的对象检测方法及其实现装置 |
EP3134850B1 (en) * | 2014-04-22 | 2023-06-14 | Snap-Aid Patents Ltd. | Method for controlling a camera based on processing an image captured by other camera |
CN105303128B (zh) * | 2014-07-31 | 2018-09-11 | 中国电信股份有限公司 | 一种防止未经授权使用移动终端的方法和移动终端 |
TWI538508B (zh) * | 2014-08-15 | 2016-06-11 | 光寶科技股份有限公司 | 一種可獲得深度資訊的影像擷取系統與對焦方法 |
KR101539038B1 (ko) * | 2014-09-02 | 2015-07-24 | 동국대학교 산학협력단 | 복수의 깊이 카메라로부터 취득한 깊이 맵의 홀 필링 방법 |
CN104346816B (zh) * | 2014-10-11 | 2017-04-19 | 京东方科技集团股份有限公司 | 一种深度确定方法、装置及电子设备 |
CN106161910B (zh) * | 2015-03-24 | 2019-12-27 | 北京智谷睿拓技术服务有限公司 | 成像控制方法和装置、成像设备 |
US9857167B2 (en) * | 2015-06-23 | 2018-01-02 | Hand Held Products, Inc. | Dual-projector three-dimensional scanner |
CN106683071B (zh) * | 2015-11-06 | 2020-10-30 | 杭州海康威视数字技术股份有限公司 | 图像的拼接方法和装置 |
CN207248115U (zh) * | 2017-08-01 | 2018-04-17 | 深圳市易尚展示股份有限公司 | 彩色三维扫描仪 |
-
2018
- 2018-06-06 CN CN201810574253.8A patent/CN108769476B/zh active Active
-
2019
- 2019-01-08 EP EP19815232.4A patent/EP3796634A4/en active Pending
- 2019-01-08 WO PCT/CN2019/070853 patent/WO2019233106A1/zh unknown
-
2020
- 2020-11-25 US US17/104,775 patent/US20210084280A1/en not_active Abandoned
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140152658A1 (en) * | 2012-12-05 | 2014-06-05 | Samsung Electronics Co., Ltd. | Image processing apparatus and method for generating 3d image thereof |
CN105530503A (zh) * | 2014-09-30 | 2016-04-27 | 光宝科技股份有限公司 | 深度图建立方法与多镜头相机系统 |
CN204481940U (zh) * | 2015-04-07 | 2015-07-15 | 北京市商汤科技开发有限公司 | 双目摄像头拍照移动终端 |
CN107862259A (zh) * | 2017-10-24 | 2018-03-30 | 重庆虚拟实境科技有限公司 | 人像采集方法及装置、终端装置和计算机可读存储介质 |
CN108769476A (zh) * | 2018-06-06 | 2018-11-06 | Oppo广东移动通信有限公司 | 图像获取方法及装置、图像采集装置、计算机设备和可读存储介质 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3796634A4 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115661668A (zh) * | 2022-12-13 | 2023-01-31 | 山东大学 | 一种辣椒花待授粉花朵识别方法、装置、介质及设备 |
Also Published As
Publication number | Publication date |
---|---|
EP3796634A1 (en) | 2021-03-24 |
CN108769476B (zh) | 2019-07-19 |
CN108769476A (zh) | 2018-11-06 |
EP3796634A4 (en) | 2021-06-23 |
US20210084280A1 (en) | 2021-03-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019233106A1 (zh) | 图像获取方法及装置、图像采集装置、计算机设备和可读存储介质 | |
TWI714131B (zh) | 控制方法、微處理器、電腦可讀記錄媒體及電腦設備 | |
KR102524498B1 (ko) | 듀얼 카메라를 포함하는 전자 장치 및 듀얼 카메라의 제어 방법 | |
US11567333B2 (en) | Head-mounted display, head-mounted display linking system, and method for same | |
US11335028B2 (en) | Control method based on facial image, related control device, terminal and computer device | |
TWI725431B (zh) | 影像處理方法、電腦設備和可讀儲存媒體 | |
US20150208019A1 (en) | Image correction | |
WO2020017327A1 (ja) | ヘッドマウントディスプレイ、およびヘッドマウントディスプレイの制御方法、情報処理装置、表示装置、並びに、プログラム | |
WO2021037157A1 (zh) | 图像识别方法及电子设备 | |
US20200084387A1 (en) | Low power mode for one or more cameras of a multiple camera system | |
TWI739096B (zh) | 資料處理方法和電子設備 | |
US11610558B2 (en) | Method of acquiring outside luminance using camera sensor and electronic device applying the method | |
WO2020248896A1 (zh) | 调节方法、终端及计算机可读存储介质 | |
CN106507005A (zh) | 背光亮度的调节方法及装置 | |
US20170034403A1 (en) | Method of imaging moving object and imaging device | |
KR20170009089A (ko) | 사용자의 제스쳐를 이용하여 기능을 제어하는 방법 및 촬영 장치. | |
US8670038B2 (en) | Projection display device and method of controlling the same | |
EP3621294B1 (en) | Method and device for image capture, computer readable storage medium and electronic device | |
US20160337598A1 (en) | Usage of first camera to determine parameter for action associated with second camera | |
WO2024139877A1 (zh) | 一种环境光校准方法及装置 | |
US9560326B2 (en) | Technologies for projecting a proportionally corrected image | |
WO2020248097A1 (zh) | 图像获取方法、终端及计算机可读存储介质 | |
US20240314450A1 (en) | Image light supplementation method and electronic device | |
US20230283891A1 (en) | Boot sequence in cold temperatures | |
JP2017009916A (ja) | 映像表示装置、映像表示方法、およびプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19815232 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2019815232 Country of ref document: EP Effective date: 20201215 |