CN113298694B - Multi-camera system with flash for depth map generation - Google Patents

Multi-camera system with flash for depth map generation

Info

Publication number
CN113298694B
Authority
CN
China
Prior art keywords
flash
camera image
camera
depth map
main
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110572369.XA
Other languages
Chinese (zh)
Other versions
CN113298694A (en)
Inventor
王超
吴东晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Black Sesame Intelligent Technology Chongqing Co Ltd
Original Assignee
Black Sesame Intelligent Technology Chongqing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Black Sesame Intelligent Technology Chongqing Co Ltd filed Critical Black Sesame Intelligent Technology Chongqing Co Ltd
Publication of CN113298694A publication Critical patent/CN113298694A/en
Application granted granted Critical
Publication of CN113298694B publication Critical patent/CN113298694B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/20Processor architectures; Processor configuration, e.g. pipelining
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/60Memory management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/122Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/254Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/271Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/296Synchronisation thereof; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00Indexing scheme for image data processing or generation, in general
    • G06T2200/28Indexing scheme for image data processing or generation, in general involving image processing hardware
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10141Special mode during image acquisition
    • G06T2207/10152Varying illumination
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/257Colour aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Image Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

An example depth map generation operation includes one or more of: simultaneously capturing a main off camera image and an auxiliary off camera image while the flash is unpowered; sparse depth mapping an object based on the primary and secondary off camera images; capturing a main on camera image with the flash powered; foreground probability mapping the object based on the primary off camera image and the primary on camera image; and densely depth mapping the object based on the sparse depth map and the foreground probability map.

Description

Multi-camera system with flash for depth map generation
Technical Field
The present disclosure relates to image signal processing, and in particular provides a multi-camera system with flash for depth map generation.
Background
Currently, more and more consumer and robotic systems utilize depth maps of the surrounding environment. Current methods obtain a depth map using a stereo camera, a structured light module, or a time-of-flight module. These systems are deficient when the image contains low texture differences, reflections, transparency, or occlusions.
A method is sought that allows depth mapping under suboptimal conditions.
Disclosure of Invention
An exemplary embodiment provides a depth map (depth map) generating method, including at least one of the following steps: simultaneously capturing a main off camera image and an auxiliary off camera image by using an unpowered flash lamp; sparse depth mapping an object based on the primary and secondary off camera images; capturing a main-on camera image using a powered flash; mapping an object based on the primary off camera image and the primary on camera image foreground probability; and densely mapping objects based on the sparse depth map and the foreground probability map.
Another example embodiment provides a non-transitory computer-readable medium comprising instructions that, when read by a processor, cause the processor to perform at least one of: simultaneously capturing a main off camera image and an auxiliary off camera image by using an unpowered flash lamp; sparse depth mapping an object based on the primary and secondary off camera images; capturing a main-on camera image using a powered flash; mapping an object based on the primary off camera image and the primary on camera image foreground probability; and densely mapping objects based on the sparse depth map and the foreground probability map.
Drawings
In the drawings:
FIG. 1 is a first example system diagram according to one embodiment of this disclosure;
FIG. 2 is a second example system diagram according to one embodiment of this disclosure;
FIG. 3 is an example stereoscopic depth system according to one embodiment of the disclosure;
FIG. 4 is an example with foreground occlusion according to one embodiment of the present disclosure;
FIG. 5 is an example with a foreground peeking occlusion according to one embodiment of the present disclosure;
FIG. 6 is an example two-step image capture method according to one embodiment of this disclosure;
FIG. 7 is an example of a first stage flash off image capture with weak edges according to one embodiment of the present disclosure;
FIG. 8 is an image of an example flash illumination according to one embodiment of the present disclosure;
FIG. 9 is an example method flow according to one embodiment of the present disclosure;
FIG. 10 is an exemplary RGBIr system according to one embodiment of the disclosure; and
FIG. 11 is an example method according to one embodiment of this disclosure.
Detailed Description
The examples listed below are written only to illustrate the application of the apparatus and method and are not limiting in scope. Equivalent modifications of the apparatus and method should be regarded as falling within the scope of the claims.
Certain terms are used throughout the following description and claims to refer to particular system components. As will be appreciated by one of skill in the art, different companies may refer to a component and/or a method by different names. This document does not intend to distinguish between components and/or methods that differ in name but not function.
In the following discussion and claims, the terms "include" and "comprise" are used in an open-ended fashion, and thus should be interpreted to mean "including, but not limited to". Also, the terms "couple" or "couples" are intended to mean either an indirect or direct connection. Thus, if a first device couples to a second device, that connection may be through a direct connection or through an indirect connection via other devices and connections.
FIG. 1 depicts an example hybrid computing system 100 that may be used to implement a neural network associated with the operation of one or more portions or steps of process 600. In this example, the processors associated with the hybrid system include a field programmable gate array (FPGA) 122, a graphics processor unit (GPU) 120, and a central processing unit (CPU) 118.
The CPU 118, GPU 120, and FPGA 122 each have the capability to provide a neural network. A CPU is a general purpose processor that can perform many different functions; this generality gives it the ability to perform many different tasks. However, its processing of multiple data streams is limited, and so is its functionality with respect to neural networks. A GPU is a graphics processor with many small processing cores capable of processing parallel tasks in sequence. An FPGA is a field programmable device that can be reconfigured and whose hard-wired circuitry can perform any function that could be programmed into a CPU or GPU. Since the FPGA is programmed in circuit form, it is many times faster than a CPU and significantly faster than a GPU.
There are other types of processors that the system may include, for example, an accelerated processing unit (APU), which comprises on-chip CPU and GPU elements, and a digital signal processor (DSP) designed to perform high-speed digital data processing. An application specific integrated circuit (ASIC) may also perform the hardwired functions of an FPGA; however, the lead time to design and produce an ASIC is on the order of several quarters of a year, rather than the fast turnaround available when programming an FPGA.
The graphics processor unit 120, central processing unit 118, and field programmable gate array 122 are connected to one another and are coupled to a memory interface and controller 112. The FPGA is connected to the memory interface through a programmable logic circuit to memory interconnect 130. This additional device is used because the FPGA operates with a very large bandwidth, and it minimizes the electronic circuitry of the FPGA that is used to perform memory tasks. The memory interface and controller 112 is further connected to a persistent storage disk 110, a system memory 114, and a read only memory (ROM) 116.
The system of fig. 1 can be used to program and train the FPGA. The GPU works well with unstructured data and can be used for training; once the data has been trained and a deterministic inference model has been found, the CPU can program the FPGA with the model data determined by the GPU.
The memory interface and controller is connected to a central interconnect 124, which is additionally connected to the GPU 120, the CPU 118, and the FPGA 122. The central interconnect 124 is also connected to the input and output interface 128 and the network interface 126.
FIG. 2 depicts a second example hybrid computing system 200 that may be used to implement a neural network associated with the operation of one or more portions or steps of process 1100. In this example, the processors associated with the hybrid system include a field programmable gate array (FPGA) 210 and a central processing unit (CPU) 220.
The FPGA is electrically connected to an FPGA controller 212 that interfaces with a direct memory access (DMA) 218. The DMA is connected to an input buffer 214 and an output buffer 216, which are coupled to the FPGA to buffer data into and out of the FPGA, respectively. The DMA 218 includes two first-in-first-out (FIFO) buffers, one for the host CPU and the other for the FPGA; the DMA allows data to be written to and read from the appropriate buffer.
On the CPU side of the DMA is a main switch 228 that shuttles data and commands to the DMA. The DMA is also connected to an SDRAM controller 224, which allows data to be shuttled between the FPGA and the CPU 220 as well as to the external SDRAM 226 and the CPU 220. The main switch 228 is connected to a peripheral interface 230. A flash controller 222 controls persistent memory and is connected to the CPU 220.
Multiple cameras placed at different locations convey information about their surroundings captured in overlapping fields of view. The simplest case is biomimetic binocular vision. A computer stereoscopic vision system places two cameras horizontally offset by a known distance between their optical centers. The two cameras may capture two slightly different views of the same scene. When the scene contains moving objects, the two cameras capture images in a synchronized manner. As shown in fig. 3, light from the object point (A) passes through the entry points of the two pinhole cameras and has two projections (P1 and P2) on the image planes. By triangle similarity, the ratio of the parallax d = (P1O1 + O2P2) to the focal length (f) is equal to the ratio of the optical center distance (D = C1C2) to the depth (Z) of point A:
d / f = D / Z, that is, Z = f · D / d.
the two cameras may not be identical coplanar pinhole cameras. In this case, a correction is applied to the image to simulate images captured by two identical coplanar pinhole cameras. This step includes both linear and nonlinear transformations. These transformed parameters are typically calibrated in an off-line calibration step, where the controlled scene is to be captured by the system. In order to recover depth from parallax (disparity), a focal length (f) and a camera distance (D) are used, which may also be calibrated in an off-line calibration step.
To calculate the parallax, pixel pairs are identified as coming from the same object point by comparing pixel-pair image similarities. For a pixel in the left image, multiple pixels with approximately the same image similarity may be found in the right image, which can lead to mismatches.
Disparity is currently calculated in a sparse manner: distinctive pixels are matched first, and then some inference algorithm is used to spread this sparse matching information into a dense match. There are at least three basic problems in current stereoscopic systems: texture-less objects, transparency and reflection, and occlusion.
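To make the 1-to-N ambiguity concrete, the sketch below searches the corresponding row of the rectified right image for one left-image patch using a sum-of-absolute-differences cost, and rejects the match when the best cost is not clearly better than the second best. The window size, disparity range, and ratio threshold are illustrative assumptions, not values from the patent:

```python
import numpy as np

def match_pixel(left, right, y, x, half=3, max_disp=64, ratio=0.8):
    """Match the pixel left[y, x] along the same row of the rectified right image using a
    sum-of-absolute-differences (SAD) cost. Returns the disparity in pixels, or None when
    the best candidate is not clearly better than the runner-up (the ambiguous 1-to-N case).
    Assumes grayscale images and an interior pixel (y, x)."""
    patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
    costs = []
    for d in range(max_disp):
        xr = x - d
        if xr - half < 0:
            break
        cand = right[y - half:y + half + 1, xr - half:xr + half + 1].astype(np.float32)
        costs.append(float(np.abs(patch - cand).sum()))
    if len(costs) < 2:
        return None
    order = np.argsort(costs)
    best, second = costs[order[0]], costs[order[1]]
    # In practice the runner-up is taken outside the immediate neighborhood of the best
    # disparity; this simplified ratio test only illustrates rejecting ambiguous matches.
    if best > ratio * second:
        return None
    return int(order[0])
```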
The non-textured background causes difficulties in determining what the "same object point" is. This may result in many false 1-to-N matches. This may also cause "weak edges" separating different objects of similar color and/or brightness.
A transparent object changes the direction in which light is transmitted at its surface, which is particularly severe for curved surfaces. Therefore, the triangular relationship shown in fig. 3 does not hold for transparent objects.
A reflective object likewise changes the direction in which light leaves its surface, again particularly severely for curved surfaces. For reflective objects, the triangular relationship shown in fig. 3 also does not hold.
Occlusion is shown in fig. 4, where a portion of the background (AB) visible in the left image is blocked by the foreground object 410 in the right image. Another portion of the background (CD) visible in the right image is blocked by the object in the left image. Because of occlusion, these pixels will not find a correct match in the other image, so their disparity is undetermined.
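Occluded pixels such as those in segments AB and CD are commonly flagged with a left-right consistency check: a pixel whose left-to-right disparity is not confirmed by the right-to-left disparity at the matched location gets no reliable depth. A minimal sketch, assuming both disparity maps have already been computed:

```python
import numpy as np

def lr_consistency_mask(disp_left, disp_right, tol=1.0):
    """Return True where the left-image disparity is confirmed by the right-image disparity
    at the matched column (likely visible in both views), False where it is likely occluded."""
    h, w = disp_left.shape
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    xr = np.clip(np.round(xs - disp_left).astype(int), 0, w - 1)   # column hit in the right image
    back = disp_right[ys, xr]                                      # disparity reported by the right image
    return np.abs(disp_left - back) <= tol
```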
Sharp edges may provide useful texture/feature points for finding pixel-pair matches. However, edges parallel to the baseline increase the difficulty of distinguishing matches.
The case called "peeking" refers to an image represented by a background area surrounded by a foreground. In this case, there is a blocking area in the peeping. Small peeping areas are more difficult to match.
In the case where the region is not occluded, matching difficulty increases if there is no texture/feature in the region. Peeking may be a combination of occlusion and non-textured objects. For the example in fig. 5, there is a foreground 510 with a hole through which the left camera can see the Background (BD), which is peeping. Within this peeping, the left view CD cannot be seen by the right camera because it is in the occluded area. BC can be seen by the right camera, but if there is no texture/feature in BC, it is difficult to determine whether it is background or foreground.
When capturing images in darkness, such as in a darkroom or outdoors at night, a camera cannot capture the scene with acceptable brightness without a flash, even if the camera ISO and exposure time are increased. In this case, a flash is turned on to illuminate the scene and allow the camera to capture the scene at an acceptable brightness.
The flash changes the illumination conditions and thus the exposure/ISO control of the camera. Furthermore, because the flash adds illumination to the scene, the color temperature also changes.
The flashing process can be divided into two phases: pre-flash and co-flash. During the pre-flash phase, the flash is repeatedly energized at pre-flash intensity and the data is analyzed to determine the most appropriate exposure/ISO control parameters, color temperature parameters, and co-flash parameters. The iterative parameter estimation process in the pre-flash results in an inefficient pre-flash process.
The pre-flash phase is followed by a co-flash phase in which the flash is energized (turned on) at a predetermined co-flash intensity and the data is processed with determined parameters to produce the final image.
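For context, the conventional pre-flash/co-flash control described above might be organized as in the sketch below; the camera and flash interface (fire, capture, estimate_exposure, the intensity attributes) is hypothetical, and only the two-phase control flow follows the description:

```python
def conventional_flash_capture(camera, flash, max_preflash_iters=5):
    """Illustrative two-phase flash control: iterate at the pre-flash intensity to settle
    exposure/ISO and color-temperature parameters, then capture once at the co-flash intensity.
    Every method and attribute used here is a hypothetical interface."""
    params = camera.current_parameters()
    for _ in range(max_preflash_iters):                 # pre-flash phase: iterative, hence slow
        flash.fire(intensity=flash.preflash_intensity)
        frame = camera.capture()
        params, converged = camera.estimate_exposure(frame, params)
        if converged:
            break
    flash.fire(intensity=flash.coflash_intensity)       # co-flash phase with the settled parameters
    return camera.capture(parameters=params)
```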
The stereoscopic photographing method avoids the use of a flash for the following reasons.
The stereo photography method relies on synchronization of brightness and color, and the stereo matching algorithm relies on a similar appearance of brightness and color between the different cameras. Synchronization between cameras is difficult when the flash is energized (on). The flash intensity, as well as the exposure control and color temperature of the main camera and of the auxiliary camera, need to be determined during the pre-flash, and the iterative pre-flash process is time inefficient. In addition, if the flash is energized (on), smooth surfaces of nearby objects may exhibit reflections that cause problems in stereo matching.
In the disclosed method, a camera system includes a flash, a primary camera, and at least one secondary camera. The main camera and the flash are placed close to each other. The secondary camera(s) are placed in different positions with overlapping fields of view with the primary camera. The camera system also includes control logic that controls the primary camera, the secondary camera, and the flash.
Stereoscopic vision may provide depth information based on high-confidence feature point matching. Parallax can be translated into depth, i.e., distance along the optical axis. If there are no matching feature points due to weak edges, occlusion, peeking, etc., the depth map will degrade.
The flash acts like a point light source of intensity L; the direct irradiation E received from the flash at a surface point P is
E = L · ρ(ω_i, ω_0) · r^(-2) · cos θ    (1)
where ρ(ω_i, ω_0) is the surface bidirectional reflectance distribution function (BRDF), ω_i and ω_0 are the flash and viewing directions relative to the local coordinate system at P, r is the distance from the flash, and θ is the angle between the flash direction and the surface normal at P. The inverse square law explains why the flash intensity drops rapidly with distance r; thus, the foreground is typically illuminated more than the background, and the illumination is also affected by the angle. In stereoscopic vision, the most difficult problems, caused by weak edges, occlusion and peeking, come down to assigning neighboring pixels to depth layers; the flash provides a new perspective for dividing a pixel area into different layers, because it increases the contrast at depth edges, which facilitates determining a more accurate depth layer.
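A direct numerical reading of equation (1), with an illustrative constant BRDF value, shows how the inverse-square falloff separates foreground from background:

```python
import numpy as np

def flash_irradiation(L, rho, r, theta_rad):
    """Equation (1): E = L * rho(w_i, w_0) * r**-2 * cos(theta), with rho taken as a constant here."""
    return L * rho * r ** -2.0 * np.cos(theta_rad)

# Same reflectance and angle, different distances from the flash.
L, rho, theta = 100.0, 0.5, np.deg2rad(20.0)
print(flash_irradiation(L, rho, 0.8, theta))   # foreground at 0.8 m: strongly lit
print(flash_irradiation(L, rho, 4.0, theta))   # background at 4.0 m: 25x dimmer
```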
In the disclosed method, the flash provides different contrast over the depth edge. Careful determination of exposure control and color related parameters is not required. For this reason, the pre-flash process is not used. The reflection problem can be ignored because the effective illumination shows which parts of the frame are foreground.
For simplicity of illustration, this example shows two cameras, a primary camera and a secondary camera; an actual system may provide a plurality of secondary cameras. The method is divided into three parts: flash-off stereoscopic capture, flash-on single-camera capture, and depth determination. With the flash powered down (off), the dual cameras capture stereoscopic images under control of the back-end system, including timing synchronization and brightness/color/AF synchronization. The main camera then captures a second image with the flash powered on, while the other control parameters are unchanged, at which point a depth map can be determined.
The primary and secondary cameras capture the scene in a synchronized manner in terms of timing and content (color, brightness, focus, etc.). At this stage, the flash is powered off. The two captured images are called I_main_off and I_aux_off.
At this stage, the back-end system energizes (turns on) the flash at a preset flash intensity. The back-end system keeps the imaging parameters the same as in the non-flash process, including exposure control, white balance gain, color matrix, focus distance, etc. The main camera captures an image with the flash powered on, denoted as I_main_on.
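The three-step capture sequence might be organized as in the sketch below; the back-end interface (set_power, sync_capture_with, lock_parameters, capture) is hypothetical, but the ordering and the locked imaging parameters follow the description:

```python
def capture_for_depth(main_cam, aux_cam, flash, preset_intensity):
    """Capture I_main_off / I_aux_off with the flash off, then I_main_on with the flash on,
    keeping exposure, white balance, color matrix, and focus unchanged. The camera and flash
    methods used here are a hypothetical interface."""
    flash.set_power(False)
    i_main_off, i_aux_off = main_cam.sync_capture_with(aux_cam)   # timing + brightness/color/AF sync
    params = main_cam.lock_parameters()                           # freeze exposure/WB/CCM/focus distance
    flash.set_power(True, intensity=preset_intensity)
    i_main_on = main_cam.capture(parameters=params)
    flash.set_power(False)
    return i_main_off, i_aux_off, i_main_on
```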
An example system is shown in fig. 6. In the first phase, with the flash powered down (off), I_main_off and I_aux_off are captured, as shown in fig. 7. The weak edges are marked with a dashed circle, meaning that it is difficult to distinguish the person's head from a texture-less background of similar brightness/color. In addition, there is a peeking area between the legs that is free of texture. Moreover, a small area to the left of the man's head in the main image is not visible in the auxiliary image because it is occluded.
When the flash is on, I_main_on is captured, as shown in fig. 8. Comparing I_main_on (fig. 8) with I_main_off (fig. 7), the foreground person is effectively illuminated. The weak edge around the head is now easily located. The peeking area between the legs is not significantly illuminated, while the legs are, so the peeking area is not at a depth similar to the foreground. The occluded patch is not illuminated and is part of the background.
According to equation (1), the foreground in the scene is not uniformly illuminated by the flash; the illumination depends on surface material, angle, distance, and so on. Therefore, I_main_on relative to I_main_off cannot be used to determine the absolute depth. In many applications, absolute depth is needed to decide subsequent operations. Stereo matching is suitable for this case because of the multi-view geometry shown in fig. 3.
An example method of recovering depth from the stereo cameras and flash is depicted in fig. 9. In this example, the high-confidence corner matches may come from sum of absolute differences matching, local binary pattern matching, Harris corner detection and matching, scale-invariant feature transform corner detection and matching, and the like.
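A minimal sketch of one such option, combining OpenCV's Harris-based corner detector with a sum-of-absolute-differences check along the epipolar row of the auxiliary flash-off image; the corner count, window size, disparity range, and cost threshold are illustrative assumptions:

```python
import cv2
import numpy as np

def sparse_corner_matches(i_main_off, i_aux_off, max_disp=96, half=4, sad_thresh=800.0):
    """Detect Harris corners in the main flash-off image (grayscale uint8) and match each one
    along the same row of the auxiliary flash-off image with a SAD search. Assumes a rectified
    pair with the auxiliary camera to the right of the main camera."""
    corners = cv2.goodFeaturesToTrack(i_main_off, maxCorners=500, qualityLevel=0.01,
                                      minDistance=8, useHarrisDetector=True)
    matches = []                                   # list of (x, y, disparity)
    if corners is None:
        return matches
    h, w = i_main_off.shape
    for cx, cy in corners.reshape(-1, 2):
        x, y = int(round(cx)), int(round(cy))
        if y < half or y >= h - half or x < half + max_disp or x >= w - half:
            continue
        patch = i_main_off[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
        costs = [np.abs(patch - i_aux_off[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1].astype(np.float32)).sum()
                 for d in range(max_disp)]
        d_best = int(np.argmin(costs))
        if costs[d_best] < sad_thresh:             # keep only low-cost, high-confidence matches
            matches.append((x, y, d_best))
    return matches
```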
The illumination confidence extraction process analyzes to what extent each pixel of I_main_off is illuminated in I_main_on. It may take the form of any differential operation, e.g., I_main_on − I_main_off, I_main_on / I_main_off, or the same operations applied after G^(-1)(·), etc., where G^(-1)(·) represents an inverse gamma operation or the like that converts the data back into a linear domain.
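A minimal sketch of one such differential operation, using an inverse gamma to move both exposures into the linear domain before differencing and then squashing the relative gain into a [0, 1] foreground probability; the gamma value and the normalization are illustrative assumptions, not the patent's formula:

```python
import numpy as np

def foreground_probability(i_main_on, i_main_off, gamma=2.2, eps=1e-6):
    """Per-pixel foreground probability from how strongly the flash brightened each pixel,
    compared in the linear domain (inputs are uint8 grayscale images)."""
    def inv_gamma(img):                                  # G^(-1): approximate inverse gamma
        return (img.astype(np.float32) / 255.0) ** gamma
    lin_on, lin_off = inv_gamma(i_main_on), inv_gamma(i_main_off)
    boost = np.clip(lin_on - lin_off, 0.0, None) / (lin_off + eps)   # relative illumination gain
    return boost / (boost + 1.0)                         # squash the gain into [0, 1]
```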
The propagation process from a sparse to a dense depth map is usually guided by information from the image itself (e.g., luminance similarity, edges, and smoothness). The example method of fig. 9 attaches the foreground probability map as an additional data term that constrains whether a pixel propagates from the background or the foreground. The flash of the disclosed method illuminates the foreground and provides additional information about the depth edges.
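One way such a data term could be attached is sketched below: a simple iterative filling that only propagates a sparse depth value to a neighbor whose brightness and foreground probability are both similar, so depth does not leak across a flash-revealed depth edge. The weights, thresholds, and iteration count are illustrative assumptions, not the patent's propagation algorithm:

```python
import numpy as np

def propagate_depth(sparse_depth, image_gray, fg_prob, iters=50, tau_img=10.0, tau_fg=0.15):
    """Spread sparse depth values (np.nan where unknown) to 4-neighbors whose brightness AND
    foreground probability are both similar, so depth does not cross a flash-revealed edge.
    Edges of the array wrap around with np.roll; a real implementation would handle borders."""
    depth = sparse_depth.astype(np.float32).copy()
    img = image_gray.astype(np.float32)
    known = ~np.isnan(depth)
    for _ in range(iters):
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            src = np.roll(depth, (dy, dx), axis=(0, 1))
            src_known = np.roll(known, (dy, dx), axis=(0, 1))
            img_ok = np.abs(img - np.roll(img, (dy, dx), axis=(0, 1))) < tau_img
            fg_ok = np.abs(fg_prob - np.roll(fg_prob, (dy, dx), axis=(0, 1))) < tau_fg
            fill = ~known & src_known & img_ok & fg_ok
            depth[fill] = src[fill]
            known = known | fill
    return depth
```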
In another example embodiment, an infrared-enabled RGBIr camera may capture the scene in both the visible and near-infrared spectral bands. An infrared flash may illuminate the scene in the infrared band, which can be captured by the RGBIr camera but is not visible to the human eye. This possible extension is shown in fig. 10. Such a system may enhance data capture in bright environments.
FIG. 11 depicts an example depth map generation method, comprising: simultaneously capturing (1110) a primary off camera image and a secondary off camera image with an unpowered flash; sparse depth mapping (1112) an object based on the primary and secondary off camera images; capturing (1114) a main on camera image with a powered flash; foreground probability mapping (1116) the object based on the primary off camera image and the primary on camera image; and densely depth mapping (1118) the object based on the sparse depth map and the foreground probability map.
In the example of fig. 11, sparse depth map may be based on corner map, foreground probability map may be extracted based on confidence, and dense depth map may be propagated based on high confidence corners.
The main on camera image may be captured with a similar set of control parameters as the simultaneous capture, and the powered flash may be set to a preset flash intensity, and the powered flash may be one of a visible light flash and an infrared flash.
The foreground probability map may be based on a set of pixel differences between the primary off camera image and the primary on camera image, and the primary off camera image and the secondary off camera image may be captured by the RGB camera.
In another embodiment, the primary and secondary off camera images may be captured by an RGBIr camera.
In another embodiment, a non-transitory computer-readable medium is provided that includes instructions that, when read by a processor, cause the processor to perform at least one of: simultaneously capturing a main off camera image and an auxiliary off camera image by using an unpowered flash lamp; sparse depth mapping an object based on the primary and secondary off camera images; capturing a main-on camera image using a powered flash; mapping an object based on the primary off camera image and the primary on camera image foreground probability; and densely mapping objects based on the sparse depth map and the foreground probability map.
Those of skill in the art will appreciate that the various illustrative blocks, modules, elements, components, methods, and algorithms described herein may be implemented as electronic hardware, computer software, or combinations of both. To illustrate this interchangeability of hardware and software, various illustrative blocks, modules, elements, components, methods, and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. The various components and blocks may be arranged differently (e.g., arranged in a different order, or divided in a different manner) without departing from the scope of the subject technology.
It should be understood that the specific order or hierarchy of steps in the processes disclosed is an illustration of example approaches. Based on design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged. Some steps may be performed simultaneously. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. The foregoing description provides various examples of the subject technology, and the subject technology is not limited to these examples. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more". The term "some" refers to one or more unless specifically stated otherwise. Male pronouns (e.g., his) include female and neuter pronouns (e.g., her and its), and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the invention. The predicate words "configured", "operable" and "programmed" do not imply any particular tangible or intangible modification to a subject, but are intended to be used interchangeably. For example, a processor being configured to monitor and control an operation or a component may also mean that the processor is programmed to monitor and control the operation, or that the processor is operable to monitor and control the operation. Likewise, a processor being configured to execute code may be interpreted as the processor being programmed to execute code or operable to execute code.
A phrase such as an "aspect" does not imply that such aspect is essential to the subject technology, or that such aspect applies to all configurations of the subject technology. The disclosure relating to one aspect may apply to all configurations, or one or more configurations. One aspect may provide one or more examples. A phrase such as an aspect may refer to one or more aspects and vice versa. Phrases such as "an embodiment" do not imply that such an embodiment is essential to the subject technology, or that such an embodiment applies to all configurations of the subject technology. The disclosure relating to one embodiment may apply to all embodiments, or one or more embodiments. Embodiments may provide one or more examples. A phrase such as an "embodiment" may refer to one or more embodiments and vice versa. Phrases such as "configuration" do not imply that such a configuration is necessary for the subject technology, or that such a configuration applies to all configurations of the subject technology. The disclosure relating to one configuration may apply to all configurations, or one or more configurations. The configuration may provide one or more examples. A phrase such as "configured" may refer to one or more configurations and vice versa.
The word "example" is used herein to mean "serving as an example or illustration. Any aspect or design described herein as "example" is not necessarily to be construed as preferred or advantageous over other aspects or designs.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. Furthermore, to the extent that the terms "includes," "including," "has," and the like are used in either the detailed description or the claims, such terms are intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim.
References to "one embodiment," "an embodiment," "some embodiments," "various embodiments," etc., indicate that a particular element or feature is included in at least one embodiment of the invention. Although these phrases may appear in various places, they do not necessarily refer to the same embodiment. Those of skill in the art will be able to devise and incorporate any of the various mechanisms adapted to carry out the above-described functions in conjunction with the present disclosure.
It should be understood that this disclosure teaches only one example of an exemplary embodiment, and that many variations of the invention can be readily devised by those skilled in the art after reading this disclosure, and the scope of the invention is determined by the claims that follow.

Claims (18)

1. A depth map generation method, comprising:
simultaneously capturing a main off camera image and an auxiliary off camera image by using an unpowered flash lamp;
sparse depth mapping an object based on the primary and secondary off camera images;
capturing a main-on camera image using a powered flash;
mapping the object based on the primary off camera image and the primary on camera image foreground probability; and
the object is densely depth mapped based on the sparse depth map and the foreground probability map.
2. The depth map generating method according to claim 1, wherein the sparse depth map is based on corner mapping.
3. The depth map generating method according to claim 1, wherein the foreground probability map is extracted based on a confidence level.
4. The depth map generation method of claim 1, wherein the dense depth map is propagated based on high confidence corner points.
5. The depth map generating method according to claim 1, wherein the powered flash is set to a preset flash intensity.
6. The depth map generating method of claim 1, wherein the foreground probability map is based on a set of pixel differences between the main off camera image and the main on camera image.
7. The depth map generating method according to claim 1, wherein the main off camera image and the auxiliary off camera image are captured by an RGB camera.
8. The depth map generating method according to claim 1, wherein the main off camera image and the auxiliary off camera image are captured by an RGBIr camera.
9. The depth map generating method of claim 1, wherein the powered flash is one of a visible light flash and an infrared flash.
10. A non-transitory computer-readable medium comprising instructions that, when read by a processor, cause the processor to perform:
simultaneously capturing a main off camera image and an auxiliary off camera image by using an unpowered flash lamp;
sparse depth mapping an object based on the primary and secondary off camera images;
capturing a main-on camera image using a powered flash;
mapping the object based on the primary off camera image and the primary on camera image foreground probability; and
the object is densely depth mapped based on the sparse depth map and the foreground probability map.
11. The non-transitory computer-readable medium of claim 10, wherein the sparse depth map is based on a corner map.
12. The non-transitory computer-readable medium of claim 10, wherein the foreground probability map is based on a confidence extraction.
13. The non-transitory computer-readable medium of claim 10, wherein the dense depth map is based on high confidence corner propagation.
14. The non-transitory computer readable medium of claim 10, wherein the powered flash is set to a preset flash intensity.
15. The non-transitory computer-readable medium of claim 10, wherein the foreground probability map is based on a set of pixel differences between the main off camera image and the main on camera image.
16. The non-transitory computer-readable medium of claim 10, wherein the primary and secondary off camera images are captured by an RGB camera.
17. The non-transitory computer-readable medium of claim 10, wherein the primary and secondary off camera images are captured by an RGBIr camera.
18. The non-transitory computer readable medium of claim 10, wherein the powered flash is one of a visible light flash and an infrared flash.
CN202110572369.XA 2020-10-12 2021-05-25 Multi-camera system with flash for depth map generation Active CN113298694B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/068,180 2020-10-12
US17/068,180 US11657529B2 (en) 2020-10-12 2020-10-12 Multiple camera system with flash for depth map generation

Publications (2)

Publication Number Publication Date
CN113298694A CN113298694A (en) 2021-08-24
CN113298694B (en) 2023-08-08

Family

ID=77324766

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110572369.XA Active CN113298694B (en) 2020-10-12 2021-05-25 Multi-camera system with flash for depth map generation

Country Status (2)

Country Link
US (1) US11657529B2 (en)
CN (1) CN113298694B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101918893A (en) * 2007-12-27 2010-12-15 高通股份有限公司 Method and apparatus with depth map generation
CN103748893A (en) * 2011-08-15 2014-04-23 微软公司 Display as lighting for photos or video
CN105247859A (en) * 2013-04-15 2016-01-13 微软技术许可有限责任公司 Active stereo with satellite device or devices
CN105869167A (en) * 2016-03-30 2016-08-17 天津大学 High-resolution depth map acquisition method based on active and passive fusion
CN105959581A (en) * 2015-03-08 2016-09-21 联发科技股份有限公司 Electronic device having dynamically controlled flashlight for image capturing and related control method
CN106688012A (en) * 2014-09-05 2017-05-17 微软技术许可有限责任公司 Depth map enhancement
CN110493587A (en) * 2019-08-02 2019-11-22 深圳市灵明光子科技有限公司 Image acquiring device and method, electronic equipment, computer readable storage medium
CN111062981A (en) * 2019-12-13 2020-04-24 腾讯科技(深圳)有限公司 Image processing method, device and storage medium

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE60325536D1 (en) * 2002-09-20 2009-02-12 Nippon Telegraph & Telephone Apparatus for generating a pseudo-three-dimensional image
US7606417B2 (en) * 2004-08-16 2009-10-20 Fotonation Vision Limited Foreground/background segmentation in digital images with differential exposure calculations
JP2009276294A (en) * 2008-05-16 2009-11-26 Toshiba Corp Image processing method
US8693731B2 (en) * 2012-01-17 2014-04-08 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US9571818B2 (en) * 2012-06-07 2017-02-14 Nvidia Corporation Techniques for generating robust stereo images from a pair of corresponding stereo images captured with and without the use of a flash device
US9519972B2 (en) * 2013-03-13 2016-12-13 Kip Peli P1 Lp Systems and methods for synthesizing images from image data captured by an array camera using restricted depth of field depth maps in which depth estimation precision varies
AU2013206597A1 (en) * 2013-06-28 2015-01-22 Canon Kabushiki Kaisha Depth constrained superpixel-based depth map refinement
IN2013CH05374A (en) * 2013-11-21 2015-05-29 Nokia Corp
US20150235408A1 (en) * 2014-02-14 2015-08-20 Apple Inc. Parallax Depth Rendering
US9679387B2 (en) * 2015-02-12 2017-06-13 Mitsubishi Electric Research Laboratories, Inc. Depth-weighted group-wise principal component analysis for video foreground/background separation
US9712809B2 (en) * 2015-05-22 2017-07-18 Intel Corporation Integrated digital camera platform with NIR apodization filter for enhanced depth sensing and image processing
US10419741B2 (en) * 2017-02-24 2019-09-17 Analog Devices Global Unlimited Company Systems and methods for compression of three dimensional depth sensing
US11218626B2 (en) 2017-07-28 2022-01-04 Black Sesame International Holding Limited Fast focus using dual cameras
US10362296B2 (en) * 2017-08-17 2019-07-23 Microsoft Technology Licensing, Llc Localized depth map generation
US10375378B2 (en) 2017-12-12 2019-08-06 Black Sesame International Holding Limited Dual camera system for real-time depth map generation
CN108648225B (en) * 2018-03-31 2022-08-02 奥比中光科技集团股份有限公司 Target image acquisition system and method
US10742892B1 (en) * 2019-02-18 2020-08-11 Samsung Electronics Co., Ltd. Apparatus and method for capturing and blending multiple images for high-quality flash photography using mobile electronic device


Also Published As

Publication number Publication date
CN113298694A (en) 2021-08-24
US20220114745A1 (en) 2022-04-14
US11657529B2 (en) 2023-05-23


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant