US10482620B2 - Method and device for producing depth information - Google Patents
- Publication number: US10482620B2
- Application number: US16/225,297
- Authority: United States (US)
- Prior art keywords: monochromatic, images, image, resolution, region
- Prior art date
- Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G02B5/201: Optical elements other than lenses; filters in the form of arrays
- G02B5/23: Absorbing filters; photochromic filters
- G06T3/4053: Scaling the whole image or part thereof; super resolution, i.e., output image resolution higher than sensor resolution
- G06T5/003: Image restoration; deblurring, sharpening
- G06T5/50: Image enhancement or restoration by the use of more than one image, e.g., averaging, subtraction
- G06T5/60; G06T5/73
- G06T7/11: Image analysis; region-based segmentation
- G06T7/557: Depth or shape recovery from light fields, e.g., from plenoptic cameras
- G06T7/593: Depth or shape recovery from stereo images
- G06T2207/10024: Image acquisition modality; color image
- G06T2207/10052: Images from lightfield camera
- G06T2207/20081: Training; learning
- G06T2207/20084: Artificial neural networks [ANN]
- H04N13/128: Processing image signals; adjusting depth or disparity
- H04N13/15: Processing image signals for colour aspects of image signals
- H04N13/243: Image signal generators using stereoscopic image cameras using three or more 2D image sensors
- H04N13/257: Image signal generators; colour aspects
- H04N2013/0081: Depth or disparity estimation from stereoscopic image signals
Definitions
- Embodiments of the disclosure relate to methods and devices for producing depth information.
- The depth of an object may be obtained actively, by adopting a separate light source, or passively, using the ambient light. The two schemes have their own pros and cons.
- Embodiments of the disclosure regard technology for producing depth information using a passive scheme, and particularly methods and devices for producing depth information using a stereo camera. Stereo images are generated using a stereo camera module.
- FIGS. 1 a and 1 b are views illustrating a stereo camera module according to the prior art.
- the stereo camera module includes image sensors I (e.g., charge-coupled devices (CCDs) or complementary metal-oxide-semiconductor (CMOS) sensors), color filters 13 mounted on the image sensors I, and two lens modules 11 and 12 positioned over the color filters 13 to focus light beams from an object O on the image sensors I.
- Each color filter 13 is partitioned into multiple red (R), green (G), and blue (B) portions which transmit their respective corresponding wavelengths of light.
- Each pixel of a captured image represents a color obtained by interpolating and combining the R, G, and B values of the adjacent pixels of the image sensor I.
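To illustrate this interpolation, here is a toy bilinear demosaic for an assumed RGGB Bayer layout; the pattern, window size, and function name are illustrative assumptions rather than the patent's method:

```python
import numpy as np

def demosaic_bilinear(raw):
    """Toy bilinear demosaic for an RGGB Bayer mosaic: every output pixel
    averages the raw samples of each colour found in its 3x3 neighbourhood."""
    h, w = raw.shape
    out = np.zeros((h, w, 3))
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # red sites
    masks[0::2, 1::2, 1] = True   # green sites (even rows)
    masks[1::2, 0::2, 1] = True   # green sites (odd rows)
    masks[1::2, 1::2, 2] = True   # blue sites
    pad_raw = np.pad(raw.astype(float), 1, mode="edge")
    pad_mask = np.pad(masks, ((1, 1), (1, 1), (0, 0)), mode="edge")
    for c in range(3):
        acc = np.zeros((h, w))
        cnt = np.zeros((h, w))
        for dy in range(3):
            for dx in range(3):
                m = pad_mask[dy:dy + h, dx:dx + w, c]
                acc += pad_raw[dy:dy + h, dx:dx + w] * m
                cnt += m
        out[..., c] = acc / np.maximum(cnt, 1)
    return out
```

On a uniform scene every interpolated channel reproduces the input value, which is a quick sanity check for the weighting.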
- the prior art geometrically calculates the depth of an object by measuring the distance (disparity) between two matching pixels in a stereo image captured of the object using the two lens modules 11 and 12 .
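The geometric calculation referred to here reduces to triangulation: depth Z = f·B/d for focal length f (in pixels), stereo baseline B, and pixel disparity d. A minimal sketch with illustrative numbers:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth of one matched pixel pair (values are examples only)."""
    if disparity_px <= 0:
        raise ValueError("matched points must have positive disparity")
    return focal_px * baseline_m / disparity_px

# e.g., 1400 px focal length, 12 mm baseline, 28 px measured disparity
print(depth_from_disparity(1400.0, 0.012, 28.0))  # 0.6 (metres)
```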
- a method for producing depth information comprises: obtaining a plurality of images of an object from a plurality of lens modules, the plurality of images including at least two monochromatic images forming a monochromatic stereo image; producing two complemented monochromatic images by performing a complementary image enhancing process on at least part of the two monochromatic images, the process including comparing the plurality of images with each of the two monochromatic images and increasing the resolution of each of the two monochromatic images using an image having a higher resolution in the at least part; selecting a region of interest of the object from the two complemented monochromatic images; and calculating a depth of the region of interest by stereo-matching the two complemented monochromatic images.
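The four claimed steps can be sketched end to end. Every helper below is a toy stand-in chosen for brevity (gradient magnitude as a detail proxy, a 1-D shift search as stereo matching); none of it is the claimed implementation:

```python
import numpy as np

def local_contrast(img):
    """Gradient magnitude as a crude proxy for local resolution/sharpness."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def complement(img, others):
    """Step 2 (toy): per pixel, adopt whichever capture shows more local detail."""
    best = img.astype(float)
    for o in others:
        best = np.where(local_contrast(o) > local_contrast(best), o, best)
    return best

def select_roi(img, half=12):
    """Step 3 (toy): a window around the most detailed pixel."""
    y, x = np.unravel_index(np.argmax(local_contrast(img)), img.shape)
    return (slice(max(0, y - half), y + half), slice(max(0, x - half), x + half))

def match_disparity(a, b, roi, max_d=64):
    """Step 4 (toy): horizontal shift minimizing mean absolute difference."""
    errs = [np.abs(a[roi] - np.roll(b, d, axis=1)[roi]).mean()
            for d in range(1, max_d)]
    return 1 + int(np.argmin(errs))
```

The disparity found in step 4 converts to metric depth by triangulation once the focal length and baseline of the lens modules are known.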
- the plurality of images include at least one color image of the object.
- the color image includes a blurred region different from a blurred region of at least one of the two monochromatic images or includes more pixels per unit area than each of the two monochromatic images.
- the complementary image enhancing process includes increasing a resolution of the blurred region of the at least one of the two monochromatic images using the color image.
- the two monochromatic images may at least partially include different blurred regions.
- the complementary image enhancing process may include increasing the resolution of the blurred region of one of the two monochromatic images using a corresponding region of the other of the two monochromatic images.
- At least some of the plurality of lens modules may have different f-numbers.
- two lens modules configured to form the two monochromatic images may be spaced apart from each other on an image sensor.
- the complementary image enhancing process may include further using a super resolution method to increase the resolution of the region of interest.
- the super resolution method may include using artificial intelligence (AI) data obtained by AI-learning a plurality of low-resolution images and a plurality of high-resolution images corresponding to the plurality of low-resolution images.
- a device configured to produce depth information comprises a monochromatic stereo camera module configured to capture at least one monochromatic stereo image from an object, a data storage unit configured to store the monochromatic stereo image captured by the monochromatic stereo camera module, and a data processor configured to process the monochromatic stereo image stored in the data storage unit.
- the monochromatic stereo camera module includes an image sensor, a plurality of lens modules spaced apart from each other on the image sensor, and monochromatic filters disposed to allow light paths formed by each of the plurality of lens modules and the image sensor to pass therethrough.
- the monochromatic filters may be disposed ahead of the lens modules or between the lens modules and the image sensor.
- the monochromatic filters, the lens modules, and the image sensor may be aligned along light paths in the order of the monochromatic filters, the lens modules, and the image sensor or in the order of the lens modules, the monochromatic filters, and the image sensor.
- At least two of the plurality of lens modules have different depths of field (DOFs).
- the data processor is configured to calculate a depth of a region of interest by comparing at least two monochromatic images forming the monochromatic stereo image, performing a complementing process to raise a resolution of one monochromatic image containing a blurred region among the at least two monochromatic images using another among the at least two monochromatic images, and performing stereo matching.
- the plurality of lens modules include four lens modules.
- the monochromatic filters include a first monochromatic filter and a second monochromatic filter having a different color from the first monochromatic filter, wherein the first monochromatic filter is disposed to allow light paths formed by the image sensor and each of two of the four lens modules to pass therethrough, and the second monochromatic filter is disposed to allow light paths formed by the image sensor and each of the rest of the four lens modules to pass therethrough.
- the monochromatic filters may be disposed ahead of the lens modules or between the lens modules and the image sensor.
- the monochromatic filters, the lens modules, and the image sensor may be aligned along light paths in the order of the monochromatic filters, the lens modules, and the image sensor or in the order of the lens modules, the monochromatic filters, and the image sensor.
- At least two of the plurality of lens modules may have different f-numbers.
- a device for producing depth information comprises a monochromatic stereo camera module configured to capture at least one monochromatic stereo image from an object, a color camera module configured to capture a color image from the object, a data storage unit configured to store the images captured by the monochromatic stereo camera and the color camera module, and a data processor configured to process the images stored in the data storage unit.
- the monochromatic stereo camera module may include a first image sensor, a plurality of first lens modules spaced apart from each other on the first image sensor, and a plurality of monochromatic filters disposed to allow light paths formed by the first image sensor and each of at least two of the plurality of first lens modules to pass therethrough.
- the monochromatic filter may be disposed ahead of each lens module or between each lens module and the first image sensor.
- the monochromatic filter, each lens module, and the first image sensor may be aligned along the light paths in the order of the monochromatic filter, the lens module, and the first image sensor or in the order of the lens module, the monochromatic filter, and the first image sensor.
- the color camera module includes a second image sensor, a second lens module disposed on the second image sensor, and a color filter disposed to allow a light path formed by the second lens module and the second image sensor to pass therethrough.
- the color filter may be disposed between the second lens module and the second image sensor.
- the color filter, the second lens module, and the second image sensor may be aligned along the light path in the order of the second lens module, the color filter, and the second image sensor.
- the data processor is configured to calculate a depth of a region of interest by performing a complementing process that raises a resolution of at least one of at least two monochromatic images forming the monochromatic stereo image using the color image, and performing stereo matching.
- the color image is configured to be a complementary image to the monochromatic image by including a blurred region different from a blurred region of at least one of the at least two monochromatic images or including more pixels per unit area than the monochromatic images.
- the monochromatic stereo camera module may further include a lens formed of a metamaterial.
- FIGS. 1 a and 1 b are views illustrating a stereo camera module according to the prior art;
- FIG. 2 is a view illustrating the concept of complementary image enhancing according to an embodiment;
- FIG. 3 is a flowchart illustrating an AI-applied super-resolution method according to an embodiment;
- FIG. 4 is a view illustrating a configuration of a depth information producing device according to an embodiment;
- FIGS. 5 a and 5 b are views illustrating a monochromatic stereo camera module of a depth information producing device according to an embodiment;
- FIGS. 6 and 7 are views illustrating examples of a monochromatic stereo camera module with a plurality of lens modules in a depth information producing device according to embodiments;
- FIG. 8 is a view illustrating a configuration of a depth information producing device according to an embodiment; and
- FIG. 9 is a flowchart illustrating a depth information producing method according to an embodiment.
- denotations such as “first,” “second,” “A,” “B,” “(a),” and “(b)” may be used in describing the components of the present invention. These denotations are provided merely to distinguish one component from another, and the essence of the components is not limited by the denotations in terms of order or sequence.
- the term “near limit of acceptable sharpness” is defined as the distance from a camera at which objects start to appear acceptably sharp in an image.
- the term “far limit of acceptable sharpness” is defined as the distance from the camera beyond which objects stop appearing acceptably sharp in an image.
- “near limit of acceptable sharpness” and “far limit of acceptable sharpness” may thus refer, respectively, to the shortest and longest distances at which objects in an image appear sharp.
- the term “depth of field” is defined as the distance between the near limit of acceptable sharpness and the far limit of acceptable sharpness, within which objects in an image appear sharp.
- the term “object plane” is defined as the plane or position that brings about the best focusing in designing lens modules. Thus, the object plane lies between the near limit of acceptable sharpness and the far limit of acceptable sharpness.
- FIG. 2 is a view illustrating the concept of complementary image enhancing according to an embodiment.
- lens module may encompass a single lens or multiple lenses
- camera module may encompass a lens module, an optical filter (e.g., a monochromatic filter or color filter), and an image sensor.
- An object positioned out of the DOF of a camera module may be blurred, exhibiting a reduced resolution.
- a blurred region of an image may be enhanced using two camera modules with different DOFs. This may also apply where two lens modules share one image sensor.
- suppose a first lens module L 1 has a first near limit of acceptable sharpness N 1 and a first far limit of acceptable sharpness F 1 , and a second lens module L 2 has a second near limit of acceptable sharpness N 2 and a second far limit of acceptable sharpness F 2 (where F 2 may be infinite), as shown in FIG. 2 .
- an object positioned between the first far limit of acceptable sharpness F 1 and the second far limit of acceptable sharpness F 2 may be blurred where captured by the first lens module L 1 but appear sharp where captured by the second lens module L 2 .
- conversely, an object positioned between the first near limit of acceptable sharpness N 1 and the second near limit of acceptable sharpness N 2 may appear sharp where captured by the first lens module L 1 but blurred where captured by the second lens module L 2 .
- images captured by the first lens module L 1 may be enhanced using the pixels of images captured by the second lens module L 2
- images captured by the second lens module L 2 may be enhanced by the pixels of images captured by the first lens module L 1 .
- a whole sharp image may be obtained by using both an image with a blurred region and another with a sharp region.
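This fusion of two mutually complementary images can be sketched with a local-variance focus measure; the measure, window size, and function names are common illustrative choices, not the embodiment's algorithm:

```python
import numpy as np

def focus_measure(img, k=3):
    """Local variance in a (2k+1)^2 window as a simple sharpness proxy."""
    pad = np.pad(img.astype(float), k, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = pad[y:y + 2 * k + 1, x:x + 2 * k + 1].var()
    return out

def fuse(img_a, img_b):
    """Per pixel, keep the image that is locally sharper, so two half-blurred
    mutually complementary images yield one (mostly) whole sharp image."""
    return np.where(focus_measure(img_a) >= focus_measure(img_b), img_a, img_b)
```

With one image blurred on the left and the other on the right, the fused result tracks the true scene better than either input, apart from a narrow strip at the blur boundary.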
- suppose there are two images, e.g., a first image and a second image.
- the first image may be referred to as a ‘complementary image’ to the second image, and the second image as a ‘complementary image’ to the first image.
- together, the first image and the second image may be referred to as ‘mutually complementary images.’
- the first camera module providing the first image and the second camera module providing the second image may be referred to as ‘mutually complementary camera modules.’
- the lens modules used to capture the first and second images may be referred to as ‘mutually complementary lens modules.’
- the f-number (f/#) is defined as f/D, where f is the focal length of the lens module and D is the diameter of the entrance pupil (aperture).
- as the f-number decreases, objects positioned closer or farther than the object plane become out of focus and further blurred, causing the DOF to decrease. In other words, as the f-number increases, the DOF generally increases. Although the DOF is reduced by decreasing the f-number, D may be designed to be larger to raise the resolution of the lens module.
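This trade-off can be made concrete with the standard hyperfocal-distance approximation for the near and far limits of acceptable sharpness; the focal length, focus distance, and circle-of-confusion diameter below are arbitrary example values:

```python
def dof_limits(f_mm, f_number, s_mm, coc_mm=0.03):
    """Near/far limits of acceptable sharpness (all distances in millimetres).
    H is the hyperfocal distance for circle-of-confusion diameter coc_mm."""
    H = f_mm ** 2 / (f_number * coc_mm) + f_mm
    near = H * s_mm / (H + (s_mm - f_mm))
    far = H * s_mm / (H - (s_mm - f_mm)) if s_mm < H else float("inf")
    return near, far

# A 50 mm lens focused at 2 m: the DOF widens as the f-number increases.
for n in (2.0, 8.0):
    near, far = dof_limits(50.0, n, 2000.0)
    print(f"f/{n}: DOF = {far - near:.0f} mm")  # prints a wider DOF for f/8
```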
- FIG. 3 is a flowchart illustrating an AI-applied super-resolution method according to an embodiment.
- an image captured by a camera module may be blurred as set forth above.
- an in-depth view of the image may reveal blur due to, e.g., optical limitations (e.g., the point spread function (PSF) or diffraction) or other limitations of the lens module and image sensor (e.g., CCD or CMOS size limits; shrunken sensor pixels may have difficulty receiving enough light, causing shot noise).
- An example may be seen in a photo taken by a smartphone: the photo may appear blurred when magnified to the maximum.
- Embodiments of the disclosure may enhance images using the above-described complementary image enhancing and further enhance the images by performing a super resolution method on blurred pixels that may remain despite the complementary image enhancing.
- the super resolution method is a method to obtain a high-resolution (or high-definition) image from a low-resolution image containing noise or blurs, and this method may be realized by a wavelet method, a frequency domain method, or a statistical method.
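As one concrete non-AI instance, shift-and-add multi-frame super resolution interleaves several low-resolution frames, captured at known sub-pixel offsets, onto a finer grid; the scale factor and shift values below are illustrative:

```python
import numpy as np

def shift_and_add(frames, shifts, scale=2):
    """Toy multi-frame super resolution: each low-resolution frame samples the
    scene at a known sub-pixel offset; interleave them onto a finer grid."""
    h, w = frames[0].shape
    hr = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(hr)
    for img, (dy, dx) in zip(frames, shifts):
        oy = int(round(dy * scale))          # offset on the fine grid
        ox = int(round(dx * scale))
        hr[oy::scale, ox::scale] += img
        cnt[oy::scale, ox::scale] += 1
    return hr / np.maximum(cnt, 1)           # average where samples overlap
```

When the offsets exactly tile the fine grid, the high-resolution scene is recovered; real captures additionally need sub-pixel registration and deblurring.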
- the super resolution method may also be implemented using artificial intelligence (AI).
- an AI-applied super resolution method may be used to raise the resolution of blur-containing stereo images.
- FIG. 3 illustrates an example of such an AI-applied super resolution method.
- AI data may be obtained by applying a cost function, represented as Equation 1 (e.g., an L2 cost of the form L(F) = (1/N) Σi ||F(Yi) - Xi||^2), to N blur-containing images Yi and N corresponding blur-free images Xi (where N is a positive integer) and learning using a learning algorithm, e.g., a convolutional neural network (CNN) (refer to S 101 , S 102 , and S 103 of FIG. 3 ).
- here, F is the AI transformation function mapping a blur-containing image Yi toward its blur-free counterpart Xi, and L is the cost function minimized during learning.
- Although a CNN is used herein as the AI scheme, this is merely an example, and embodiments of the disclosure are not limited thereto. Other various AI schemes, such as a generative adversarial network (GAN), may be applied as well.
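Whatever network is chosen, the learning step amounts to minimizing the Equation-1-style cost over the training pairs. A numpy sketch with a deliberately tiny stand-in for the network (a learnable 1-D restoration filter, and a finite-difference gradient instead of backpropagation; all sizes and rates are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
X = rng.normal(size=(N, 32))                  # blur-free signals Xi
blur = np.array([0.25, 0.5, 0.25])
Y = np.stack([np.convolve(x, blur, mode="same") for x in X])  # blurred inputs Yi

w = np.zeros(5); w[2] = 1.0                   # learnable restoration filter F

def apply_F(y, w):
    return np.convolve(y, w, mode="same")

def cost(w):
    # L(F) = (1/N) * sum_i ||F(Yi) - Xi||^2
    return np.mean([np.sum((apply_F(y, w) - x) ** 2) for x, y in zip(X, Y)])

def grad(w, eps=1e-5):
    # central-difference gradient (a real CNN would use backprop instead)
    g = np.zeros_like(w)
    for j in range(w.size):
        e = np.zeros_like(w); e[j] = eps
        g[j] = (cost(w + e) - cost(w - e)) / (2 * eps)
    return g

before = cost(w)
for _ in range(200):
    w -= 1e-3 * grad(w)
print(cost(w) < before)  # True: the filter learns to partially undo the blur
```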
- FIG. 4 is a view illustrating a configuration of a depth information producing device according to an embodiment.
- FIGS. 5 a and 5 b are views illustrating a monochromatic stereo camera module of a depth information producing device according to an embodiment.
- a depth information producing device 500 may include a monochromatic stereo camera module 100 for capturing at least one monochromatic stereo image, a data storage unit 200 for storing the monochromatic stereo image captured by the monochromatic stereo camera module 100 , and a data processor 300 for processing the monochromatic stereo image stored in the data storage unit 200 .
- the monochromatic stereo camera module 100 may include an image sensor I, a plurality of lens modules 111 and 112 spaced apart from each other on the image sensor I, and a monochromatic filter 120 disposed on the image sensor I to meet light paths formed by the image sensor I and each of the plurality of lens modules 111 and 112 .
- the monochromatic filter 120 may be disposed ahead of each of the lens modules 111 and 112 or between each of the lens modules 111 and 112 and the image sensor I.
- the monochromatic filter 120 , each lens module 111 or 112 , and the image sensor I may be aligned along light paths in the order of the monochromatic filter 120 , the lens module 111 or 112 , and the image sensor I or in the order of the lens module 111 or 112 , the monochromatic filter 120 , and the image sensor I.
- the monochromatic filter 120 may be a single-color filter, e.g., a green (G) filter as shown in FIGS. 5 a and 5 b .
- the monochromatic filter 120 may be a red (R) filter, a blue (B) filter, or any one of other various filters to filter a single color.
- the monochromatic filter 120 may be implemented as a combination of single-color monochromatic filters each of which may correspond to a respective one of a plurality of lens modules, e.g., 111 and 112 .
- the plurality of lens modules 111 and 112 may include mutually complementary lens modules.
- at least two of the plurality of lens modules 111 and 112 may have different f-numbers and thus different DOFs.
- the first lens module 111 may present a better (e.g., higher) resolution for shorter distances
- the second lens module 112 may have a larger f-number than the first lens module 111 , thus presenting a better (e.g., higher) resolution for longer distances.
- the data storage unit 200 may store data including the results of AI learning.
- the data storage unit 200 described herein throughout the specification may be implemented as various types of devices to store data, including, but not limited to, hard-disk drives or memories, such as RAMs or ROMs.
- the data processor 300 may compare the at least two monochromatic images forming the monochromatic stereo image, perform a complementing process that raises the resolution of the blurred region of one of the two monochromatic images using the corresponding region of the other, and perform stereo matching, thereby obtaining the depth of a region of interest.
- the region of interest may be selected by previously designating a particular pixel, such as a pixel where the maximum image pixel data value (e.g., raw data) is locally obtained from the pixels of the image sensor I or where a noticeable variation in pixel data value occurs, or may be selected via a display (not shown) of the depth information producing device.
- the region of interest may be chosen in any other various manners.
- the region of interest may be the whole image.
- the use of the monochromatic filter 120 may significantly reduce errors in the mutually complementary lens modules 111 and 112 .
- unlike prior-art lens modules (e.g., the lens modules 11 and 12 of FIG. 1 ), which must handle the full visible spectrum, the mutually complementary lens modules 111 and 112 need take only a single color (e.g., green in the above-described example) into consideration. They are thus free from chromatic aberration errors and allow for little or no distortion, hence a reduction in optical errors and an increase in resolution even with a smaller number of lenses. This may significantly reduce calibration errors that may arise from the lens modules upon depth calculation.
- the lenses of the monochromatic stereo camera modules to capture monochromatic stereo images may be formed of a metamaterial.
- a metamaterial is a material formed in a predetermined nanoscale pattern, using a metal or dielectric (e.g., titanium dioxide (TiO 2 )), with features similar to or smaller than the wavelength of light, so as to have a property not found in naturally occurring materials.
- the metamaterial may mitigate resolution deterioration that other lens materials cannot, and may present a subwavelength resolution.
- forming the lens modules 111 and 112 of the monochromatic stereo camera module 100 using the metamaterial may considerably reduce calibration errors, allowing for more accurate calculation.
- the structure in which the mutually complementary lens modules 111 and 112 are installed on one image sensor I may further reduce pixel mismatches, upon calculating the distance between positions on the image sensor I corresponding to regions of interest of the object, compared with the conventional structure in which an image sensor is provided for each lens module.
- the use of the mutually complementary lens modules 111 and 112 may enable easier image enhancement on the blurred region of the image captured by each lens module 111 and 112 regardless of whether the object is positioned close or far away, thereby leading to an increased resolution.
- a low-resolution region may be rendered to have a higher resolution by using the lens modules with different object plane positions.
- an AI-adopted or AI-based super resolution method may be used to further enhance the resolution of the region of interest.
- the depth of the region of interest may be precisely calculated and obtained.
- FIGS. 6 and 7 are views illustrating examples of a monochromatic stereo camera module with a plurality of lens modules in a depth information producing device according to embodiments.
- a monochromatic stereo camera module 100 of a depth information producing device may include four lens modules 111 , 112 , 113 , and 114 .
- the monochromatic stereo camera module 100 may further include first monochromatic filters 121 and 122 and second monochromatic filters 123 and 124 which are different in color from the first monochromatic filters 121 and 122 .
- the first monochromatic filters 121 and 122 are disposed to allow light paths formed by two ( 111 and 112 ) of the four lens modules 111 , 112 , 113 , and 114 and the image sensor I to pass therethrough.
- the first monochromatic filters 121 and 122 may be disposed ahead of the lens modules 111 and 112 or between the lens modules 111 and 112 and the image sensor I.
- the first monochromatic filters 121 and 122 , the lens modules 111 and 112 , and the image sensor I may be aligned along light paths in the order of the first monochromatic filters 121 and 122 , the lens modules 111 and 112 , and the image sensor I or in the order of the lens modules 111 and 112 , the first monochromatic filters 121 and 122 , and the image sensor I.
- the second monochromatic filters 123 and 124 are disposed to allow light paths formed by the other two lens modules ( 113 and 114 ) and the image sensor I to pass therethrough.
- the second monochromatic filters 123 and 124 may be disposed ahead of the lens modules 113 and 114 or between the lens modules 113 and 114 and the image sensor I.
- the second monochromatic filters 123 and 124 , the lens modules 113 and 114 , and the image sensor I may be aligned along light paths in the order of the second monochromatic filters 123 and 124 , the lens modules 113 and 114 , and the image sensor I or in the order of the lens modules 113 and 114 , the second monochromatic filters 123 and 124 , and the image sensor I.
- the monochromatic stereo camera module 100 is structured so that the monochromatic filters 121 , 122 , 123 , and 124 are mounted apart at a predetermined distance over the single image sensor I (e.g., CCD or CMOS), and the four lens modules 111 , 112 , 113 , and 114 configured to focus on the single image sensor are mounted apart at a predetermined distance from the monochromatic filters 121 , 122 , 123 , and 124 .
- the monochromatic filters 121 , 122 , 123 , and 124 may be configured with two green filters diagonally arrayed and two blue filters diagonally arrayed.
- the monochromatic filters 121 , 122 , 123 , and 124 may be configured with two red filters and two green filters or two red filters and two blue filters, or with two filters in a first color and the other two in a second color different from the first color.
- the first and second lens modules 111 and 112 may be mutually complementary lens modules
- the third and fourth lens modules 113 and 114 may be mutually complementary lens modules.
- a first group V of the first and third lens modules 111 and 113 may present a better resolution for short distances
- a second group X of the second and fourth lens modules 112 and 114 may be larger in f-number than the first group V and thus present a better resolution for long distances.
- a low-resolution region may be rendered to have a higher resolution by using the lens modules with different object plane positions. Further, the resolution of the region of interest may be further enhanced by a super resolution method.
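A minimal sketch of how two registered images with different object plane positions (and therefore different blurred regions) might be merged is shown below. This is an assumption about how such a complementary enhancing process could work, not the patent's actual algorithm; `local_sharpness` and `complement` are illustrative names.

```python
import numpy as np

def local_sharpness(img):
    """Squared response of a 4-neighbor Laplacian as a per-pixel sharpness proxy."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap ** 2

def complement(img_a, img_b):
    """Keep, per pixel, the sample from whichever image is locally sharper.

    Crude stand-in for a complementary image enhancing process; a real
    implementation would first register and photometrically match the images.
    """
    return np.where(local_sharpness(img_a) >= local_sharpness(img_b), img_a, img_b)
```

Because the two lens modules focus at different distances, each image is sharp where the other is blurred, so this per-pixel selection recovers detail across the whole depth range.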
- the monochromatic stereo camera module 100 is structured so that monochromatic filters 121 , 122 , 123 , 124 , 125 , and 126 are mounted apart at a predetermined distance over a single image sensor I (e.g., CCD or CMOS), and six lens modules 111 , 112 , 113 , 114 , 115 , and 116 configured to focus on the single image sensor are mounted apart at a predetermined distance from the monochromatic filters 121 , 122 , 123 , 124 , 125 , and 126 .
- the monochromatic filters 121 , 122 , 123 , 124 , 125 , and 126 may be configured in three pairs each of which consists of two filters of the same color, e.g., two red (R) filters, two green (G) filters, and two blue (B) filters which are arrayed as shown in FIG. 7 .
- the first and second lens modules 111 and 112 may be mutually complementary lens modules
- the third and fourth lens modules 113 and 114 may be mutually complementary lens modules
- the fifth and sixth lens modules 115 and 116 may be mutually complementary lens modules.
- a third group W of the first, third, and fifth lens modules 111 , 113 , and 115 may present a better resolution for short distances
- a fourth group Z of the second, fourth, and sixth lens modules 112 , 114 , and 116 may be larger in f-number than the third group W and thus present a better resolution for long distances.
- a low-resolution region may be rendered to have a higher resolution by using the lens modules with different object plane positions.
- an AI-adopted or AI-based super resolution method may be used to further enhance the resolution of the region of interest.
- FIG. 8 is a view illustrating a configuration of a depth information producing device according to an embodiment.
- a depth information producing device 1000 may include a monochromatic stereo camera module 100 for capturing at least one monochromatic stereo image, a color camera module 400 for capturing a color image from the object, a data storage unit 200 for storing the images captured by the monochromatic stereo camera module 100 and the color camera module 400 , and a data processor 300 for processing the images stored in the data storage unit 200 .
- the same or substantially the same description given above in connection with FIG. 4 may apply to the monochromatic stereo camera module 100, the data storage unit 200, and the data processor 300.
- the monochromatic stereo camera module 100 may include a first image sensor, a plurality of first lens modules spaced apart from each other on the first image sensor, and a monochromatic filter disposed to allow light paths formed by at least two of the plurality of first lens modules and the first image sensor to pass therethrough.
- the monochromatic filter may be disposed ahead of the first lens modules or between the first lens modules and the first image sensor.
- the monochromatic filter, the first lens modules, and the image sensor may be aligned along light paths in the order of the monochromatic filter, the first lens modules, and the image sensor or in the order of the first lens module, the monochromatic filter, and the image sensor.
- the color camera module 400 may include a second image sensor, a second lens module disposed on the second image sensor, and a color filter disposed to allow a light path formed by the second lens module and the second image sensor to pass therethrough.
- the color filter may be disposed between the second lens module and the second image sensor.
- the color filter, the second lens module, and the second image sensor may be aligned along the light path in the order of the second lens module, the color filter, and the image sensor.
- the color camera module 400 may be a complementary camera module to the monochromatic stereo camera module 100 .
- a blurred region of a color image may differ from a blurred region of at least one of monochromatic images, or the color image may have more pixels per unit area than the monochromatic image.
- the color image may be a complementary image to the monochromatic image.
- the data processor 300 may perform a complementing process that raises the resolution of at least one of the at least two monochromatic images forming a monochromatic stereo image using a color image, and may then perform stereo matching, thereby obtaining the depth of a region of interest.
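The stereo matching step the data processor performs could, for instance, be restricted to the region of interest, as in the sum-of-absolute-differences block-matching sketch below. The patent does not specify the matching algorithm; real matchers add rectification, subpixel refinement, and left-right consistency checks.

```python
import numpy as np

def roi_disparity(left, right, roi, max_disp=16, block=5):
    """SAD block matching limited to a region of interest (y0, y1, x0, x1).

    Returns an integer disparity (in pixels) for each pixel of the ROI.
    """
    y0, y1, x0, x1 = roi
    h = block // 2
    disp = np.zeros((y1 - y0, x1 - x0), dtype=int)
    for y in range(y0, y1):
        for x in range(x0, x1):
            patch = left[y - h:y + h + 1, x - h:x + h + 1]
            best_cost, best_d = np.inf, 0
            # keep the candidate window inside the right image
            for d in range(min(max_disp, x - h) + 1):
                cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y - y0, x - x0] = best_d
    return disp
```

Limiting the search to the ROI (as the patent suggests for the complementing process too) keeps the cost of the exhaustive disparity search manageable.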
- the image complementing process may be limited to the region of interest.
- the monochromatic stereo camera module 100 may, e.g., use a GG configuration (two green filters).
- the color camera module 400 is smaller in f-number than the monochromatic stereo camera module 100 to allow the object plane to be positioned at a short distance, allowing the color camera module 400 to serve as a complementary camera module to the monochromatic stereo camera module 100 .
- conversely, it is possible to allow the color camera module 400 to be a complementary camera module to the monochromatic stereo camera module 100 by positioning the object plane of the monochromatic stereo camera module 100 at a shorter distance while positioning the object plane of the color camera module 400 farther away.
- a monochromatic stereo camera module 100 whose object plane is positioned at a short distance may be configured while allowing one of the color camera modules 400 to have the object plane positioned at a shorter distance than the monochromatic stereo camera module 100 and the other to have the object plane positioned at a longer distance than the monochromatic stereo camera module 100 .
- the two color camera modules 400 may serve as complementary camera modules to the monochromatic stereo camera module 100 .
- the first group of monochromatic stereo camera modules 100 may be rendered to have their object plane positioned at a short distance, and the second group of monochromatic stereo camera modules 100 to have their object plane positioned at a longer distance.
- one color camera module 400 may be rendered to have its object plane at a shorter distance and the other color camera module 400 to have its object plane at a longer distance, so that the two color camera modules 400 may play a role as complementary camera modules to the two groups of monochromatic stereo camera modules 100.
- described below is a method for producing depth information according to an embodiment, with reference to FIG. 9.
- FIG. 9 is a flowchart illustrating a depth information producing method according to an embodiment.
- a method for producing depth information may include obtaining a plurality of images from an object O through a plurality of lens modules 111 and 112 (S 100 ), producing two complemented monochromatic images by performing a complementary image enhancing process on at least part of the two monochromatic images to thereby increase the resolution (S 200 ), selecting a region of interest of the object O from the two complemented monochromatic images (S 300 ), and calculating the depth of the region of interest by performing stereo matching on the two complemented monochromatic images (S 400 ).
- the plurality of images may include at least two monochromatic images forming a monochromatic stereo image.
- the complementary image enhancing process may compare the plurality of images with each of the two monochromatic images and raise the resolution of each of the two monochromatic images using whichever image is at least partially higher in resolution than the others.
- a modified method for producing depth information may include obtaining a plurality of images from an object O through a plurality of lens modules 111 and 112, wherein the plurality of images may include at least two monochromatic images forming a monochromatic stereo image (e.g., S 100), selecting a region of interest of the object O from the two monochromatic images (e.g., S 300), producing two complemented monochromatic images by performing a complementary image enhancing process on the region of interest of the two monochromatic images to thereby increase the resolution (e.g., S 200), and calculating the depth of the region of interest by performing stereo matching on the two complemented monochromatic images (e.g., S 400).
- the complementary image enhancing process may compare the plurality of images with each of the two monochromatic images and raise the resolution of the region of interest of each of the two monochromatic images using the image with a higher resolution in the region of interest than the other images.
- the method according to the modification may have operations S 300 and S 200 performed in the opposite order as compared with the method described in connection with FIG. 9.
- operation S 200 of FIG. 9 may be performed earlier than operation S 300
- operation S 300 may be performed earlier than operation S 200 .
- the two monochromatic images may at least partially include different blurred regions, and the complementary image enhancing process may increase the resolution of the blurred region of one of the two monochromatic images using a corresponding region of the other of the two monochromatic images.
- the two monochromatic images may be mutually complementary images.
- at least some 111 and 112 of the plurality of lens modules may have different f-number values.
- two lens modules 111 and 112 forming the two monochromatic images may be spaced apart from each other on the image sensor I. In this case, pixel mismatches may be significantly reduced upon calculating the distance between the positions on the image sensor I.
- the plurality of images may include at least one color image of the object O.
- a blurred region of a color image may differ from a blurred region of at least one of the two monochromatic images, or the color image may have more pixels per unit area than each of the monochromatic images.
- the complementary image enhancing process may raise the resolution of the blurred region of at least one of the two monochromatic images using the color image.
- the complementary image enhancing process may additionally use a super resolution method to increase the resolution of the region of interest.
- the super resolution method may use artificial intelligence (AI) data obtained by AI-learning a plurality of low-resolution images and a plurality of high-resolution images corresponding to the plurality of low-resolution images.
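One common way to obtain such paired training data is to synthesize low-resolution images from high-resolution ones. The sketch below is an assumption for illustration; the patent specifies neither the degradation model nor the learning architecture, and `make_training_pairs` is a hypothetical name.

```python
import numpy as np

def make_training_pairs(high_res_images, scale=2):
    """Build (low-res, high-res) pairs by box-downsampling each high-res image.

    This is the kind of paired data an AI-based super resolution model
    would be trained on: it learns to invert the downsampling.
    """
    pairs = []
    for hr in high_res_images:
        h, w = hr.shape
        h, w = h - h % scale, w - w % scale  # crop to a multiple of the scale
        hr = hr[:h, :w]
        # average each scale x scale block to produce the low-res image
        lr = hr.reshape(h // scale, scale, w // scale, scale).mean(axis=(1, 3))
        pairs.append((lr, hr))
    return pairs
```

At inference time, the trained model is applied only to the region of interest, consistent with the patent's aim of enhancing resolution where the depth is to be computed.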
- there may be provided a depth information producing method and device capable of reducing blur, raising resolution, and mitigating calibration errors and pixel mismatches.
- according to the embodiments of the disclosure, it is possible to precisely calculate the depth of a region of interest, making the embodiments applicable to various industrial sectors or fields, such as gesture or gaze tracking or face or object recognition, with raised accuracy.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020170178417A KR101889886B1 (en) | 2017-12-22 | 2017-12-22 | Depth information generating method and apparatus |
KR10-2017-0178417 | 2017-12-22 |
Publications (2)
Publication Number | Publication Date |
---|---|
US20190197717A1 (en) | 2019-06-27 |
US10482620B2 (en) | 2019-11-19 |
Family
ID=63453885
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/225,297 (US10482620B2, Expired - Fee Related) | Method and device for producing depth information | 2017-12-22 | 2018-12-19 |
Country Status (2)
Country | Link |
---|---|
US (1) | US10482620B2 (en) |
KR (1) | KR101889886B1 (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114365482A (en) * | 2019-03-25 | 2022-04-15 | 华为技术有限公司 | Large aperture blurring method based on Dual Camera + TOF |
PH12019050076A1 (en) * | 2019-05-06 | 2020-12-02 | Samsung Electronics Co Ltd | Enhancing device geolocation using 3d map data |
CA3155551C (en) * | 2019-10-26 | 2023-09-26 | Louis-Antoine Blais-Morin | Automated license plate recognition system and related method |
CN114862923B (en) * | 2022-07-06 | 2022-09-09 | 武汉市聚芯微电子有限责任公司 | Image registration method and device and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009284188A (en) | 2008-05-22 | 2009-12-03 | Panasonic Corp | Color imaging apparatus |
US20110069377A1 (en) * | 2009-09-18 | 2011-03-24 | Toyota Motor Engineering & Manufacturing North America, Inc. | Planar gradient index optical metamaterials |
US20120189293A1 (en) * | 2011-01-25 | 2012-07-26 | Dongqing Cao | Imaging devices having arrays of image sensors and lenses with multiple aperture sizes |
US20140009646A1 (en) * | 2010-10-24 | 2014-01-09 | Opera Imaging B.V. | Spatially differentiated luminance in a multi-lens camera |
KR20140054797A (en) | 2012-10-29 | 2014-05-09 | 삼성전기주식회사 | Electronic device and image modification method of stereo camera image using thereof |
KR20140120527A (en) | 2013-04-03 | 2014-10-14 | 삼성전기주식회사 | Apparatus and method for matching stereo image |
- 2017-12-22: KR application KR1020170178417A filed; patent KR101889886B1 (active, IP Right Grant)
- 2018-12-19: US application US16/225,297 filed; patent US10482620B2 (not active, Expired - Fee Related)
Non-Patent Citations (3)
Title |
---|
English Specification of 10-2014-0054797. |
English Specification of 10-2014-0120527. |
English Specification of 2009-284188. |
Also Published As
Publication number | Publication date |
---|---|
KR101889886B1 (en) | 2018-08-21 |
US20190197717A1 (en) | 2019-06-27 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AS | Assignment | Owner name: LIGHT AND MATH INC., KOREA, REPUBLIC OF. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNOR: CHO, SUNGGOO; REEL/FRAME: 047815/0964. Effective date: 20181108 |
| FEPP | Fee payment procedure | ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: MICROENTITY |
| FEPP | Fee payment procedure | ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS SET TO MICRO (ORIGINAL EVENT CODE: MICR); ENTITY STATUS OF PATENT OWNER: MICROENTITY |
| STPP | Information on status: patent application and granting procedure in general | NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS |
| STCF | Information on status: patent grant | PATENTED CASE |
| FEPP | Fee payment procedure | MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: MICROENTITY |
| LAPS | Lapse for failure to pay maintenance fees | PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: MICROENTITY |
| STCH | Information on status: patent discontinuation | PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
| FP | Lapsed due to failure to pay maintenance fee | Effective date: 20231119 |