WO2016206004A1 - Photographing device and method for acquiring depth information - Google Patents
Photographing device and method for acquiring depth information
- Publication number
- WO2016206004A1 (PCT application PCT/CN2015/082134)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target object
- depth information
- distance
- mode
- binocular
Classifications
- H04N13/239 — Image signal generators using two 2D image sensors at a relative position equal or related to the interocular distance
- H04N13/271 — Image signal generators producing depth maps or disparity maps
- H04N13/128 — Adjusting depth or disparity
- H04N13/25 — Stereoscopic image cameras using two or more image sensors with different characteristics
- H04N13/254 — Stereoscopic image cameras combined with electromagnetic radiation sources for illuminating objects
- H04N13/257 — Colour aspects
- H04N23/11 — Image signals generated from visible and infrared light wavelengths
- H04N23/45 — Image signals generated from two or more image sensors of different type or operating in different modes
- H04N5/2226 — Determination of depth image, e.g. for foreground/background separation
- G01B11/2513 — Contour measurement by projecting a pattern with lines in more than one direction, e.g. grids
- G06T7/521 — Depth or shape recovery from the projection of structured light or laser ranging
- G06T7/593 — Depth or shape recovery from stereo images
- G06T2207/10004 — Still image; Photographic image
- G06T2207/10012 — Stereo images
- G06T2207/10024 — Color image
- G06T2207/10048 — Infrared image
Definitions
- the present invention relates to the field of image processing, and in particular, to a photographing apparatus and method for acquiring depth information.
- A depth map is an image that represents, as a grayscale map, the distance from the focal plane to each point in the scene.
- FIG. 1A shows a three-dimensional view of a character
- FIG. 1B shows a depth map of the person based on FIG. 1A.
- Objects rendered in the same gray level lie in the same focal plane; the lighter the gray, the closer the object is to the focal plane (i.e. the more in focus it is), and the darker the gray, the farther the object is from the focal plane.
- the binocular mode uses two or more cameras to simultaneously acquire images, and uses a triangulation algorithm to calculate the distance of each point on the image from the focal plane, thereby obtaining depth information.
- the binocular mode cannot obtain the depth information of the target object when the distance between the target object and the focal plane is less than a certain fixed value; this fixed value is called the blind distance.
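As a hedged illustration of why the blind distance exists (the numbers below are assumptions, not values from the patent): for a rectified stereo pair, disparity grows as 1/Z, so the closest distance the stereo matcher can resolve is bounded by its maximum disparity search range.

```python
# Illustrative sketch: blind distance of a stereo pair.
# focal_px, baseline_m and max_disparity_px are assumed example values.
def min_binocular_range(focal_px: float, baseline_m: float,
                        max_disparity_px: float) -> float:
    """Closest measurable distance: Z_min = f * B / d_max."""
    return focal_px * baseline_m / max_disparity_px

print(min_binocular_range(1000.0, 0.1, 128.0))  # 0.78125 (metres)
```

Objects closer than this Z_min produce disparities larger than the matcher can search, which is exactly the close-range blind zone the structured light mode covers in this scheme.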
- the structured light method usually uses an infrared light source to illuminate the target object, and by projecting a specific pattern onto the target object, the depth information of each pixel point is calculated by the offset of the pattern.
- the structured light mode requires an infrared camera and an infrared light source; it demands high fidelity of the projected pattern, is susceptible to interference from outdoor light sources, and its measurement distance is limited by the illumination source, so it is restricted to indoor and close-range scenes, e.g., within 3 meters (unit: m). Once the target object is beyond a certain distance from the focal plane, the depth information cannot be obtained.
- the ToF method modulates the phase of the infrared light source and measures the phase shift of the received signal, from which the depth information is derived.
- the ToF method, like the structured light method, also requires an infrared camera and an infrared light source, and is limited to indoor and close-range scenes.
- the embodiment of the invention provides a photographing device and a method for acquiring depth information, which are used to solve the problem that the depth information of the target object cannot be acquired in the prior art.
- the present invention provides a photographing apparatus, including:
- a first image sensor, a second image sensor, an infrared light source, and a processor, wherein
- a first image sensor configured to collect an infrared light image and a visible light image
- the first image sensor includes M infrared light sensing pixels and N visible light sensing pixels;
- a second image sensor for collecting visible light images
- the infrared light source for projecting a specific pattern onto the target object
- the processor is configured to: when the distance between the target object and the photographing device is greater than the effective working distance of the structured light mode, acquire the depth information of the target object using the binocular mode; and when the distance between the target object and the photographing device is less than the effective working distance of the binocular mode, acquire the depth information of the target object using the structured light mode.
- optionally, acquiring the depth information of the target object using the binocular mode when the distance between the target object and the photographing device is greater than the effective working distance of the structured light mode, and acquiring it using the structured light mode when that distance is less than the effective working distance of the binocular mode, includes:
- the depth information of the target object cannot be obtained by using the binocular mode
- the depth information of the target object is acquired by using the structured light mode.
- optionally, acquiring the depth information of the target object using the binocular mode when the distance between the target object and the photographing device is greater than the effective working distance of the structured light mode, and acquiring it using the structured light mode when that distance is less than the effective working distance of the binocular mode, includes:
- the depth information of the target object cannot be acquired by using the structured light mode
- the depth information of the target object is acquired by using the binocular mode.
- optionally, acquiring the depth information of the target object using the binocular mode when the distance between the target object and the photographing device is greater than the effective working distance of the structured light mode, and acquiring it using the structured light mode when that distance is less than the effective working distance of the binocular mode, includes:
- the distance between the target object and the photographing device is greater than a first preset value, acquiring the depth information of the target object by using the binocular method;
- if the distance between the target object and the photographing device is less than a second preset value, the depth information of the target object is acquired by using the structured light manner.
- optionally, acquiring the depth information of the target object using the binocular mode includes:
- the pixel value of the visible light image is equal to X
- the infrared light image pixel value is equal to Y
- the second reference image is a visible light image having a pixel value equal to X+Y;
- optionally, acquiring the depth information of the target object using the structured light mode includes:
- optionally, the M is equal to 1/3 of the N.
- the effective distance of the binocular mode is one meter.
- the present invention provides a method for acquiring depth information, including:
- the binocular mode is used to acquire the depth information of the target object
- the depth information of the target object is acquired by using the structured light mode.
- optionally, acquiring the depth information of the target object using the binocular mode when the distance between the target object and the photographing device is greater than the effective working distance of the structured light mode, and acquiring it using the structured light mode when that distance is less than the effective working distance of the binocular mode, includes:
- if the depth information of the target object cannot be obtained using the binocular mode, the structured light mode is used to acquire the depth information of the target object.
- optionally, acquiring the depth information of the target object using the binocular mode when the distance between the target object and the photographing device is greater than the effective working distance of the structured light mode, and acquiring it using the structured light mode when that distance is less than the effective working distance of the binocular mode, includes:
- the depth information of the target object cannot be acquired by using the structured light mode
- the depth information of the target object is acquired by using the binocular mode.
- optionally, acquiring the depth information of the target object using the binocular mode when the distance between the target object and the photographing device is greater than the effective working distance of the structured light mode, and acquiring it using the structured light mode when that distance is less than the effective working distance of the binocular mode, includes:
- the distance between the target object and the photographing device is greater than a first preset value, acquiring the depth information of the target object by using the binocular method;
- if the distance between the target object and the photographing device is less than a second preset value, the depth information of the target object is acquired by using the structured light manner.
- optionally, obtaining the depth information of the target object using the binocular mode includes:
- the pixel value of the visible light image is equal to X
- the pixel value of the infrared light image is equal to Y
- optionally, acquiring the depth information of the target object by using the structured light manner includes:
- the first image sensor includes M infrared light sensing pixels and N visible light sensing pixels, where M is equal to 1/3 of N.
- the effective distance of the binocular mode is one meter.
- a camera device including:
- a first acquiring unit configured to acquire depth information of the target object by using a binocular mode when a distance between the target object and the photographing device is greater than an effective working distance of the structured light mode
- a second acquiring unit configured to acquire depth information of the target object by using the structured light manner when a distance between the target object and the photographing device is less than an effective working distance of the binocular mode.
- optionally, the first acquiring unit first attempts to acquire the depth information of the target object by using the binocular mode;
- if the first acquiring unit cannot acquire the depth information of the target object by using the binocular mode, the second acquiring unit acquires the depth information of the target object by using the structured light mode.
- optionally, the second acquiring unit first attempts to acquire the depth information of the target object by using the structured light manner;
- if the second acquiring unit cannot acquire the depth information of the target object by using the structured light manner, the first acquiring unit acquires the depth information of the target object by using the binocular mode.
- the photographing device further includes:
- a measuring unit configured to measure a distance between the target object and the photographing device
- if the distance between the target object and the photographing device is greater than a first preset value, the first acquiring unit acquires the depth information of the target object by using the binocular mode;
- if the distance between the target object and the photographing device is less than a second preset value, the second acquiring unit acquires the depth information of the target object by using the structured light manner.
- the first acquiring unit is specifically configured to:
- the pixel value of the visible light image is equal to X
- the pixel value of the infrared light image is equal to Y
- the second acquiring unit is specifically configured to:
- the first image sensor includes M infrared light sensing pixels and N visible light sensing pixels, where M is equal to 1/3 of N.
- the effective distance of the binocular mode is one meter.
- by combining the binocular mode and the structured light mode, the solution provided by the invention avoids the defect that the structured light algorithm cannot obtain the depth information of a distant target object under interference from outdoor sunlight or natural light sources, and avoids the defect that the binocular mode cannot acquire the depth information of a target object within its close-range blind zone, so that depth information over the full depth range of the target object can be acquired.
- FIG. 1A is a schematic diagram of a three-dimensional view in the prior art;
- FIG. 1B is a schematic diagram of a depth map in the prior art;
- FIG. 2 is a schematic diagram of a camera device according to an embodiment of the present invention.
- FIG. 3 is a schematic diagram of a novel sensor according to an embodiment of the present invention.
- FIG. 4 is a schematic diagram of a sensor of an RGB color mode according to an embodiment of the present invention.
- FIG. 5 is a flowchart of acquiring depth information according to an embodiment of the present invention.
- FIG. 6 is a schematic structural diagram of a photographing apparatus according to an embodiment of the present invention.
- the embodiment of the invention provides a photographing device and a method for acquiring depth information. By combining the binocular mode and the structured light mode, the scheme avoids the defect that the structured light algorithm cannot acquire the depth information of a distant target object under interference from outdoor sunlight or natural light sources, avoids the defect that the binocular mode cannot obtain depth information within its close-range blind zone, and can acquire depth information over the full depth range of the target object.
- an embodiment of the present invention provides a photographing apparatus, which includes a first image sensor 21, a second image sensor 22, an infrared light source 23, and a processor 24, wherein the components communicate through one or more communication buses or signal lines 25.
- the photographing apparatus provided in this embodiment is described in detail below.
- the first image sensor 21 is a novel sensor for collecting infrared light images and visible light images according to an embodiment of the present invention; the first image sensor includes M infrared light sensing pixels and N visible light sensing pixels.
- the infrared light sensing pixels and the visible light sensing pixels need to be evenly distributed. For example, each area composed of 4 pixels may contain one infrared light sensing pixel; the position of the infrared light sensing pixel within the four pixels is not limited in this embodiment of the present invention. One possible pixel arrangement is shown in FIG. 3: in each area composed of 4 pixels, the dark block in the lower left corner represents the infrared light sensing pixel, and the white blocks in the remaining positions represent visible light sensing pixels.
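The layout described above can be sketched as a boolean mask over the sensor; the 2×2 tile with the infrared pixel in the lower-left corner is one of the arrangements the text permits, not the only one, and the 4×4 sensor size is an illustrative assumption.

```python
import numpy as np

# Sketch of the mixed RGB/IR pixel layout (2x2 tile, IR in the lower left).
def sensor_mask(rows: int, cols: int) -> np.ndarray:
    """Boolean mask: True = infrared sensing pixel, False = visible light pixel."""
    tile = np.array([[False, False],
                     [True,  False]])  # IR pixel in the lower-left of each 2x2 block
    return np.tile(tile, (rows // 2, cols // 2))

mask = sensor_mask(4, 4)
m, n = int(mask.sum()), int((~mask).sum())
print(m, n)  # 4 12  -> M infrared pixels equal to 1/3 of N visible pixels
```

With one infrared pixel per 2×2 block, the ratio M = N/3 recommended later in the text falls out automatically.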
- the infrared light sensing pixel M and the visible light sensing pixel N in the first image sensor 21 proposed by the embodiment of the present invention may be any ratio.
- through experiments, the embodiment of the present invention finds that when the number M of infrared light sensing pixels is equal to 1/3 of the number N of visible light sensing pixels, the accuracy of the depth information obtained by the binocular mode and the structured light mode is best balanced.
- the second image sensor 22 is configured to collect a visible light image.
- FIG. 4 shows an example of the RGB color image sensing pixels of the second image sensor 22.
- through multiple simulation experiments, the embodiment of the present invention finds that by adjusting the layout distance of the above two sensors, the effective working distance of the binocular mode can be set to one meter.
- the infrared light source 23 is configured to project a specific pattern onto the target object.
- the processor 24 is configured to perform the following operations:
- when the distance between the target object and the photographing device is greater than the effective working distance of the structured light mode, the binocular mode is used to acquire the depth information of the target object; when the distance between the target object and the photographing device is less than the effective working distance of the binocular mode, the structured light mode is used to acquire the depth information of the target object.
- the effective working distance of the structured light mode refers to the maximum effective range of the structured light mode
- the effective working distance of the so-called binocular mode refers to the minimum effective range of the binocular mode.
- the processor may first obtain the depth information of the target object by using the binocular mode; if the depth information of the target object cannot be obtained by using the binocular mode, the structured light mode is used. Obtaining depth information of the target object.
- the processor may first obtain the depth information of the target object by using the structured light mode; if the depth information of the target object cannot be obtained by using the structured light mode, the binocular mode is used. Obtaining depth information of the target object.
- optionally, the processor may measure the distance between the target object and the photographing device; if the distance between the target object and the photographing device is greater than a first preset value, the depth information of the target object is acquired in the binocular mode; if the distance between the target object and the photographing device is less than a second preset value, the depth information of the target object is acquired by using the structured light manner.
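The distance-based selection above can be sketched as a simple dispatch; the two threshold values are illustrative assumptions (the text only says the structured light range ends and the binocular range begins around the same distance, e.g. one meter).

```python
# Hedged sketch of the processor's mode selection. Thresholds are assumptions.
STRUCTURED_LIGHT_MAX_M = 1.0  # assumed maximum effective range of structured light
BINOCULAR_MIN_M = 1.0         # assumed blind distance of the binocular mode

def choose_mode(distance_m: float) -> str:
    """Pick the depth-acquisition mode from the measured object distance."""
    if distance_m > STRUCTURED_LIGHT_MAX_M:
        return "binocular"        # beyond structured light's effective range
    return "structured_light"     # inside the binocular blind zone

print(choose_mode(5.0))   # binocular
print(choose_mode(0.3))   # structured_light
```

In practice the two thresholds need not be equal; the text later notes the first and second preset values "may or may not be equal".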
- specifically, the processor 24 may acquire, by using the first image sensor 21, a visible light image and an infrared light image of the target object, where the pixel value of the visible light image is equal to X and the pixel value of the infrared light image is equal to Y; obtain a first reference image whose pixel value is equal to X+Y according to the visible light image and the infrared light image; acquire, by using the second image sensor 22, a second reference image of the target object, the second reference image being a visible light image with a pixel value equal to X+Y; and finally calculate the depth information of the target object by a triangulation algorithm according to the first reference image and the second reference image.
- since the first image sensor 21 includes infrared light sensing pixels, it can work together with the infrared light source 23 in the structured light mode and is responsible for collecting the depth information of the target object in the close range supported by the structured light mode; and since the first image sensor 21 also includes visible light sensing pixels, it can work together with the second image sensor 22 in the binocular mode and is responsible for collecting the depth information of the target object in the long range supported by the binocular mode.
- the embodiment of the present invention can realize the acquisition of the depth information of the target object in the range of as short as several millimeters and as long as several tens of meters by the above one infrared light source and two sensors.
- an embodiment of the present invention provides an implementation process of acquiring depth information by the camera device:
- Step 501 When the distance between the target object and the photographing device is greater than the effective working distance of the structured light mode, the depth information of the target object is obtained by using a binocular mode.
- the specific manner in which the photographing device acquires the depth information of the target object by using a binocular method is as follows:
- the photographing device acquires a visible light image and an infrared light image of the target object through a first image sensor, the pixel value of the visible light image is equal to X, and the pixel value of the infrared light image is equal to Y;
- the photographing device obtains a first reference image whose pixel value is equal to X+Y according to the visible light image and the infrared light image.
- the photographing apparatus may obtain the first reference image by an interpolation compensation method. Since the image captured by the first sensor contains infrared light pixels that are meaningless to the binocular mode, those pixels are removed, and the brightness at each position originally occupied by an infrared light pixel is estimated from the surrounding visible light image (whose pixel value is equal to X) by interpolation, thereby obtaining a visible light image with a pixel value equal to X+Y.
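The interpolation compensation step above can be sketched as follows. Simple 4-neighbour averaging is an assumption for illustration; the patent does not fix a specific interpolation kernel.

```python
import numpy as np

# Sketch of interpolation compensation: estimate brightness at positions
# occupied by infrared pixels from their visible-light neighbours.
def fill_ir_pixels(raw: np.ndarray, ir_mask: np.ndarray) -> np.ndarray:
    """Replace each infrared pixel with the mean of its valid 4-neighbours."""
    out = raw.astype(float).copy()
    rows, cols = raw.shape
    for r, c in zip(*np.nonzero(ir_mask)):
        neighbours = [raw[nr, nc]
                      for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                      if 0 <= nr < rows and 0 <= nc < cols and not ir_mask[nr, nc]]
        out[r, c] = sum(neighbours) / len(neighbours)
    return out

raw = np.array([[10.0, 10.0],
                [ 0.0, 10.0]])            # 0.0 sits where the IR pixel was
ir = np.array([[False, False],
               [True,  False]])
filled = fill_ir_pixels(raw, ir)
print(filled[1, 0])  # 10.0
```

The result is a full-resolution visible light image (pixel value X+Y) usable as the first reference image for stereo matching.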
- the photographing device acquires a second reference image of the target object by using a second image sensor, where the second reference image is a visible light image with a pixel value equal to X+Y.
- the photographing device calculates the depth information of the target object by using a triangulation algorithm according to the first reference image and the second reference image.
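The triangulation step can be sketched for a rectified stereo pair, where depth follows Z = f·B/d from per-pixel disparity d; the focal length and baseline below are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Sketch of triangulation for a rectified stereo pair: Z = f * B / d.
def depth_map(disparity: np.ndarray, focal_px: float, baseline_m: float) -> np.ndarray:
    """Convert a disparity map (pixels) into a depth map (metres)."""
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full_like(disparity, np.inf)   # zero disparity -> infinitely far
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

d = np.array([[50.0, 100.0],
              [ 0.0,  25.0]])
z = depth_map(d, focal_px=1000.0, baseline_m=0.1)
print(z)  # [[2.0, 1.0], [inf, 4.0]]
```

This is the standard pinhole-stereo relation; the patent's triangulation algorithm is not spelled out, so any calibrated variant of this formula applies.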
- Step 502 When the distance between the target object and the photographing device is less than the effective working distance of the binocular mode, the depth information of the target object is acquired by using a structured light manner.
- the specific manner in which the photographing device acquires the depth information of the target object by using a structured light manner is as follows:
- the photographing device projects a specific pattern onto the target object through an infrared light source.
- the photographing device acquires, by the first image sensor, a reference pattern formed by depth modulation of the target object after the specific pattern is projected onto the target object.
- the photographing device calculates depth information of the target object according to the reference pattern and the initial specific pattern.
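The structured light step is also a triangulation: the depth at each point is recovered from the lateral offset between the projected pattern and the observed reference pattern. The small-angle relation and all numeric parameters below are assumptions for illustration; the patent does not specify the reconstruction formula.

```python
# Hedged sketch of structured light depth recovery from pattern offset.
# Like stereo, the camera-projector pair triangulates: Z ~ f * b / offset,
# where b is the (assumed) camera-projector baseline.
def structured_light_depth(offset_px: float, focal_px: float,
                           projector_baseline_m: float) -> float:
    """Triangulate depth from the observed offset of the projected pattern."""
    return focal_px * projector_baseline_m / offset_px

print(structured_light_depth(200.0, 1000.0, 0.05))  # 0.25 (metres)
```

Large offsets map to small depths, which is why this mode excels exactly in the close range where stereo disparity exceeds the matcher's limit.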
- combining the binocular mode and the structured light mode would, in theory, require at least three sensors. However, the embodiment of the present invention integrates infrared light sensing pixels and visible light sensing pixels into one novel sensor, so that this sensor has the functions of both a traditional sensor and an infrared sensor; thus, with only two sensors, both the binocular mode and the structured light mode can be enabled to obtain depth information over the full depth range of the target object.
- optionally, the camera device may first attempt to acquire the depth information of the target object by using the binocular mode; if the depth information of the target object cannot be obtained by using the binocular mode, the structured light mode is used to acquire the depth information of the target object.
- optionally, the photographing device may first attempt to acquire the depth information of the target object by using the structured light mode; if the depth information of the target object cannot be obtained by using the structured light mode, the binocular mode is used to acquire the depth information of the target object.
- optionally, the photographing device may measure the distance between the target object and the photographing device; if the distance between the target object and the photographing device is greater than a first preset value, the depth information of the target object is acquired in the binocular mode; if the distance between the target object and the photographing device is less than a second preset value, the depth information of the target object is acquired by using the structured light manner.
- the photographing device may measure a distance between the photographing device and the target object by laser ranging, radar ranging or other ranging methods.
- the first preset value and the second preset value may or may not be equal.
- the infrared light sensing pixel M and the visible light sensing pixel N in the novel sensor proposed by the embodiment of the present invention may be any ratio.
- the embodiment of the present invention finds that the binocular mode and the structured light mode can be better balanced when the pixel value M of the infrared light sensing pixel is equal to 1/3 of the pixel value N of the visible light sensing pixel. The accuracy of the obtained depth information.
- a maximum value of the effective range of the structured light mode may be set to a minimum value of the effective range of the binocular mode.
- for example, suppose the photographing device can obtain depth information of a target object in the range A to B (A < B) when using the structured light mode, and in the range C to D (C < D) when using the binocular mode. By adjusting the power of the infrared projector of the photographing device, B can be made equal to C. The photographing device can then obtain depth information of the scene over the whole range A to D by combining the structured light algorithm and the binocular algorithm, where A to B (= C) covers the blind zone of the binocular mode, and B (= C) to D covers the depth range that the structured light mode cannot detect.
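The range-stitching argument above can be sketched directly; the interval endpoints are illustrative assumptions (millimetres to tens of metres, per the earlier text).

```python
# Sketch of merging the two working ranges once B == C is achieved by
# tuning the IR projector power. Endpoint values are assumptions.
def combined_range(sl_range: tuple, bino_range: tuple) -> tuple:
    """Merge structured light range (A, B) with binocular range (C, D)."""
    (a, b), (c, d) = sl_range, bino_range
    assert b == c, "tune the IR projector power so the ranges meet (B == C)"
    return (a, d)

print(combined_range((0.005, 1.0), (1.0, 30.0)))  # (0.005, 30.0)
```

With B = C = 1 m (the effective working distance the embodiment sets via the sensor baseline), the device covers roughly 5 mm to 30 m in this illustration.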
- the embodiment of the present invention finds that the effective working distance of the binocular mode can be set to one meter by adjusting the layout distance of the two sensors.
- an embodiment of the present invention provides a photographing device, which is used to implement the method for acquiring depth information shown in FIG. 5.
- the photographing device includes:
- the first obtaining unit 601 is configured to acquire depth information of the target object by using a binocular mode when the distance between the target object and the photographing device is greater than the effective working distance of the structured light mode.
- the second obtaining unit 602 is configured to acquire depth information of the target object by using the structured light mode when the distance between the target object and the photographing device is less than the effective working distance of the binocular mode.
- the first acquiring unit 601 may first attempt to acquire the depth information of the target object by using the binocular mode; if the first acquiring unit 601 cannot acquire the depth information of the target object by using the binocular mode, the second acquiring unit 602 acquires the depth information of the target object by using the structured light mode.
- the second acquiring unit 602 may first attempt to acquire the depth information of the target object by using the structured light mode; if the second acquiring unit 602 cannot acquire the depth information of the target object by using the structured light mode, the first acquiring unit 601 acquires the depth information of the target object by using the binocular mode.
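The two fallback orders described above share one pattern. A minimal sketch, assuming a mode function returns `None` when it fails (an illustrative convention, not the patent's interface):

```python
def acquire_depth_with_fallback(primary, fallback, target):
    """Try the primary mode first; if it cannot produce depth information
    (modeled here as returning None), fall back to the other mode."""
    depth = primary(target)
    return depth if depth is not None else fallback(target)

# Binocular-first embodiment; the structured-light-first embodiment
# simply swaps the two arguments.
binocular_in_blind_zone = lambda target: None   # fails inside the blind zone
structured_light = lambda target: 0.5           # succeeds at close range
depth = acquire_depth_with_fallback(binocular_in_blind_zone,
                                    structured_light, "target")
```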
- the photographing device may further include:
- a measuring unit configured to measure a distance between the target object and the photographing device.
- if the measuring unit determines that the distance between the target object and the photographing device is greater than a first preset value, the first acquiring unit 601 acquires the depth information of the target object by using the binocular mode; if the measuring unit determines that the distance is less than a second preset value, the second acquiring unit 602 acquires the depth information of the target object by using the structured light mode.
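The measuring-unit dispatch can be sketched as below. The preset values are assumptions; both are set here to the one-meter boundary where the two effective ranges meet:

```python
def select_acquiring_unit(distance_m, first_preset=1.0, second_preset=1.0):
    """Decide which unit produces the depth information, mirroring the
    first/second preset comparison described for the measuring unit.
    Preset values are illustrative assumptions."""
    if distance_m > first_preset:
        return "first_unit_binocular"
    if distance_m < second_preset:
        return "second_unit_structured_light"
    return "either"  # exactly at the boundary, either mode applies
```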
- the first acquiring unit 601 is specifically configured to: acquire, by using the first image sensor, a visible light image and an infrared light image of the target object, where the pixel value of the visible light image is equal to X and the pixel value of the infrared light image is equal to Y; obtain, according to the visible light image and the infrared light image, a first reference image whose pixel value is equal to X+Y; acquire, by using the second image sensor, a second reference image of the target object, the second reference image being a visible light image whose pixel value is equal to X+Y; and calculate, according to the first reference image and the second reference image, the depth information of the target object by a triangulation algorithm.
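The triangulation step at the end of this procedure follows the standard stereo relationship Z = f·b/d (focal length times baseline divided by disparity). A sketch with illustrative numbers, not parameters from the specification:

```python
def triangulation_depth(focal_px, baseline_m, disparity_px):
    """Depth of a matched point from its disparity between the first
    (X+Y-pixel) reference image and the second reference image:
    Z = f * b / d, the standard stereo triangulation relation."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# f = 1000 px, baseline = 0.06 m, disparity = 60 px  ->  depth = 1.0 m
z = triangulation_depth(1000.0, 0.06, 60.0)
```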
- the second acquiring unit 602 is specifically configured to: project a specific pattern onto the target object by using the infrared light source; acquire, by using the first image sensor, a reference pattern formed after the specific pattern is projected onto the target object and modulated by the depth of the target object; and calculate the depth information of the target object according to the reference pattern.
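One common way such depth modulation is decoded is a reference-plane model: the captured pattern is compared with a reference pattern recorded at a known plane, and the pixel shift of the pattern maps to depth via projector-camera triangulation. This is a generic sketch, not necessarily the exact algorithm of the patent:

```python
def structured_light_depth(z_ref_m, focal_px, baseline_m, shift_px):
    """Reference-plane model (generic, hedged): a pixel shift of the
    projected pattern relative to the reference pattern at z_ref maps
    to depth via 1/Z = 1/z_ref + shift / (f * b).
    Sign conventions vary between implementations."""
    return 1.0 / (1.0 / z_ref_m + shift_px / (focal_px * baseline_m))

# Zero shift means the surface lies on the reference plane itself.
z = structured_light_depth(1.0, 1000.0, 0.06, 0.0)
```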
- the first image sensor includes M infrared light sensing pixels and N visible light sensing pixels, and the M is equal to 1/3 of the N.
- the effective working distance of the binocular mode is one meter.
- the technical solution provided by the embodiment of the present invention provides a novel sensor including infrared light sensing pixels and visible light sensing pixels; together with an infrared light source and another sensor, it realizes a combination of the binocular mode and the structured light mode. This avoids the defect that the structured light algorithm cannot obtain depth information of a distant target object under the interference of outdoor sunlight or natural light, and also avoids the defect that the binocular mode cannot obtain depth information of a target object inside its blind zone, so that depth information of a scene can be acquired over a range from as short as several millimeters to as long as several tens of meters.
- embodiments of the present invention can be provided as a method, system, or computer program product.
- the present invention can be implemented in the form of an entirely hardware embodiment, an entirely software embodiment, or a combination of software and hardware.
- the invention can be embodied in the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, optical storage, etc.) including computer usable program code.
- the computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
- these computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Optics & Photonics (AREA)
- Electromagnetism (AREA)
- Human Computer Interaction (AREA)
- Measurement Of Optical Distance (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (24)
- A photographing device, comprising: a first image sensor, a second image sensor, an infrared light source, and a processor, wherein the first image sensor is configured to capture infrared light images and visible light images, and includes M infrared light sensing pixels and N visible light sensing pixels; the second image sensor is configured to capture visible light images; the infrared light source is configured to project a specific pattern onto a target object; and the processor is configured to perform the following operations: when the distance between the target object and the photographing device is greater than the effective working distance of a structured light mode, acquiring depth information of the target object by using a binocular mode; and when the distance between the target object and the photographing device is less than the effective working distance of the binocular mode, acquiring the depth information of the target object by using the structured light mode.
- The photographing device according to claim 1, wherein acquiring the depth information of the target object by using the binocular mode when the distance between the target object and the photographing device is greater than the effective working distance of the structured light mode, and acquiring the depth information of the target object by using the structured light mode when the distance between the target object and the photographing device is less than the effective working distance of the binocular mode, comprises: acquiring the depth information of the target object by using the binocular mode; and if the depth information of the target object cannot be acquired by using the binocular mode, acquiring the depth information of the target object by using the structured light mode.
- The photographing device according to claim 1, wherein acquiring the depth information of the target object by using the binocular mode when the distance between the target object and the photographing device is greater than the effective working distance of the structured light mode, and acquiring the depth information of the target object by using the structured light mode when the distance between the target object and the photographing device is less than the effective working distance of the binocular mode, comprises: acquiring the depth information of the target object by using the structured light mode; and if the depth information of the target object cannot be acquired by using the structured light mode, acquiring the depth information of the target object by using the binocular mode.
- The photographing device according to any one of claims 1 to 3, wherein acquiring the depth information of the target object by using the binocular mode when the distance between the target object and the photographing device is greater than the effective working distance of the structured light mode, and acquiring the depth information of the target object by using the structured light mode when the distance between the target object and the photographing device is less than the effective working distance of the binocular mode, comprises: measuring the distance between the target object and the photographing device; if the distance between the target object and the photographing device is greater than a first preset value, acquiring the depth information of the target object by using the binocular mode; and if the distance between the target object and the photographing device is less than a second preset value, acquiring the depth information of the target object by using the structured light mode.
- The photographing device according to any one of claims 1 to 4, wherein acquiring the depth information of the target object by using the binocular mode comprises: acquiring, by the first image sensor, a visible light image and an infrared light image of the target object, the pixel value of the visible light image being equal to X and the pixel value of the infrared light image being equal to Y; obtaining, from the visible light image and the infrared light image, a first reference image whose pixel value is equal to X+Y; acquiring, by the second image sensor, a second reference image of the target object, the second reference image being a visible light image whose pixel value is equal to X+Y; and calculating, from the first reference image and the second reference image, the depth information of the target object by a triangulation algorithm.
- The photographing device according to any one of claims 1 to 5, wherein acquiring the depth information of the target object by using the structured light mode comprises: projecting, by the infrared light source, a specific pattern onto the target object; acquiring, by the first image sensor, a reference pattern formed after the specific pattern is projected onto the target object and modulated by the depth of the target object; and calculating the depth information of the target object from the reference pattern.
- The photographing device according to any one of claims 1 to 6, wherein M is equal to 1/3 of N.
- The photographing device according to any one of claims 1 to 7, wherein the effective working distance of the binocular mode is one meter.
- A method for acquiring depth information, comprising: when the distance between a target object and a photographing device is greater than the effective working distance of a structured light mode, acquiring depth information of the target object by using a binocular mode; and when the distance between the target object and the photographing device is less than the effective working distance of the binocular mode, acquiring the depth information of the target object by using the structured light mode.
- The method according to claim 9, wherein acquiring the depth information of the target object by using the binocular mode when the distance between the target object and the photographing device is greater than the effective working distance of the structured light mode, and acquiring the depth information of the target object by using the structured light mode when the distance between the target object and the photographing device is less than the effective working distance of the binocular mode, comprises: acquiring the depth information of the target object by using the binocular mode; and if the depth information of the target object cannot be acquired by using the binocular mode, acquiring the depth information of the target object by using the structured light mode.
- The method according to claim 9, wherein acquiring the depth information of the target object by using the binocular mode when the distance between the target object and the photographing device is greater than the effective working distance of the structured light mode, and acquiring the depth information of the target object by using the structured light mode when the distance between the target object and the photographing device is less than the effective working distance of the binocular mode, comprises: acquiring the depth information of the target object by using the structured light mode; and if the depth information of the target object cannot be acquired by using the structured light mode, acquiring the depth information of the target object by using the binocular mode.
- The method according to any one of claims 9 to 11, wherein acquiring the depth information of the target object by using the binocular mode when the distance between the target object and the photographing device is greater than the effective working distance of the structured light mode, and acquiring the depth information of the target object by using the structured light mode when the distance between the target object and the photographing device is less than the effective working distance of the binocular mode, comprises: measuring the distance between the target object and the photographing device; if the distance between the target object and the photographing device is greater than a first preset value, acquiring the depth information of the target object by using the binocular mode; and if the distance between the target object and the photographing device is less than a second preset value, acquiring the depth information of the target object by using the structured light mode.
- The method according to any one of claims 9 to 12, wherein acquiring the depth information of the target object by using the binocular mode comprises: acquiring, by a first image sensor, a visible light image and an infrared light image of the target object, the pixel value of the visible light image being equal to X and the pixel value of the infrared light image being equal to Y; obtaining, from the visible light image and the infrared light image, a first reference image whose pixel value is equal to X+Y; acquiring, by a second image sensor, a second reference image of the target object, the second reference image being a visible light image whose pixel value is equal to X+Y; and calculating, from the first reference image and the second reference image, the depth information of the target object by a triangulation algorithm.
- The method according to claim 13, wherein acquiring the depth information of the target object by using the structured light mode comprises: projecting, by an infrared light source, a specific pattern onto the target object; acquiring, by the first image sensor, a reference pattern formed after the specific pattern is projected onto the target object and modulated by the depth of the target object; and calculating the depth information of the target object from the reference pattern.
- The method according to claim 13 or 14, wherein the first image sensor includes M infrared light sensing pixels and N visible light sensing pixels, and M is equal to 1/3 of N.
- The method according to any one of claims 9 to 15, wherein the effective working distance of the binocular mode is one meter.
- A photographing device, comprising: a first acquiring unit configured to acquire depth information of a target object by using a binocular mode when the distance between the target object and the photographing device is greater than the effective working distance of a structured light mode; and a second acquiring unit configured to acquire the depth information of the target object by using the structured light mode when the distance between the target object and the photographing device is less than the effective working distance of the binocular mode.
- The photographing device according to claim 17, wherein the first acquiring unit acquires the depth information of the target object by using the binocular mode; and if the first acquiring unit cannot acquire the depth information of the target object by using the binocular mode, the second acquiring unit acquires the depth information of the target object by using the structured light mode.
- The photographing device according to claim 17, wherein the second acquiring unit acquires the depth information of the target object by using the structured light mode; and if the second acquiring unit cannot acquire the depth information of the target object by using the structured light mode, the first acquiring unit acquires the depth information of the target object by using the binocular mode.
- The photographing device according to any one of claims 17 to 19, further comprising: a measuring unit configured to measure the distance between the target object and the photographing device, wherein if the measuring unit determines that the distance between the target object and the photographing device is greater than a first preset value, the first acquiring unit acquires the depth information of the target object by using the binocular mode; and if the measuring unit determines that the distance between the target object and the photographing device is less than a second preset value, the second acquiring unit acquires the depth information of the target object by using the structured light mode.
- The photographing device according to any one of claims 17 to 20, wherein the first acquiring unit is specifically configured to: acquire, by a first image sensor, a visible light image and an infrared light image of the target object, the pixel value of the visible light image being equal to X and the pixel value of the infrared light image being equal to Y; obtain, from the visible light image and the infrared light image, a first reference image whose pixel value is equal to X+Y; acquire, by a second image sensor, a second reference image of the target object, the second reference image being a visible light image whose pixel value is equal to X+Y; and calculate, from the first reference image and the second reference image, the depth information of the target object by a triangulation algorithm.
- The photographing device according to claim 21, wherein the second acquiring unit is specifically configured to: project, by an infrared light source, a specific pattern onto the target object; acquire, by the first image sensor, a reference pattern formed after the specific pattern is projected onto the target object and modulated by the depth of the target object; and calculate the depth information of the target object from the reference pattern.
- The photographing device according to claim 21 or 22, wherein the first image sensor includes M infrared light sensing pixels and N visible light sensing pixels, and M is equal to 1/3 of N.
- The photographing device according to any one of claims 17 to 23, wherein the effective working distance of the binocular mode is one meter.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2017566654A JP2018522235A (ja) | 2015-06-23 | 2015-06-23 | 撮影デバイス及び奥行き情報を取得するための方法 |
US15/739,600 US10560686B2 (en) | 2015-06-23 | 2015-06-23 | Photographing device and method for obtaining depth information |
KR1020187001258A KR20180018736A (ko) | 2015-06-23 | 2015-06-23 | 촬영 디바이스 및 깊이 정보를 취득하는 방법 |
CN201580021737.4A CN106576159B (zh) | 2015-06-23 | 2015-06-23 | 一种获取深度信息的拍照设备和方法 |
PCT/CN2015/082134 WO2016206004A1 (zh) | 2015-06-23 | 2015-06-23 | 一种获取深度信息的拍照设备和方法 |
EP15895904.9A EP3301913A4 (en) | 2015-06-23 | 2015-06-23 | Photographing device and method for acquiring depth information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2015/082134 WO2016206004A1 (zh) | 2015-06-23 | 2015-06-23 | 一种获取深度信息的拍照设备和方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016206004A1 true WO2016206004A1 (zh) | 2016-12-29 |
Family
ID=57586138
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2015/082134 WO2016206004A1 (zh) | 2015-06-23 | 2015-06-23 | 一种获取深度信息的拍照设备和方法 |
Country Status (6)
Country | Link |
---|---|
US (1) | US10560686B2 (zh) |
EP (1) | EP3301913A4 (zh) |
JP (1) | JP2018522235A (zh) |
KR (1) | KR20180018736A (zh) |
CN (1) | CN106576159B (zh) |
WO (1) | WO2016206004A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108495113A (zh) * | 2018-03-27 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | 用于双目视觉系统的控制方法和装置 |
CN109194856A (zh) * | 2018-09-30 | 2019-01-11 | Oppo广东移动通信有限公司 | 电子装置的控制方法和电子装置 |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110136183B (zh) * | 2018-02-09 | 2021-05-18 | 华为技术有限公司 | 一种图像处理的方法、装置以及摄像装置 |
CN108769649B (zh) * | 2018-06-28 | 2019-08-23 | Oppo广东移动通信有限公司 | 深度处理器和三维图像设备 |
EP3751849A4 (en) | 2018-06-28 | 2021-03-31 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | DEPTH PROCESSOR AND THREE-DIMENSIONAL IMAGE DEVICE |
CN109190484A (zh) * | 2018-08-06 | 2019-01-11 | 北京旷视科技有限公司 | 图像处理方法、装置和图像处理设备 |
CN109005348A (zh) * | 2018-08-22 | 2018-12-14 | Oppo广东移动通信有限公司 | 电子装置和电子装置的控制方法 |
CN112066907B (zh) * | 2019-06-11 | 2022-12-23 | 深圳市光鉴科技有限公司 | 深度成像装置 |
CN112068144B (zh) * | 2019-06-11 | 2022-10-21 | 深圳市光鉴科技有限公司 | 光投射系统及3d成像装置 |
CN111866490A (zh) * | 2020-07-27 | 2020-10-30 | 支付宝(杭州)信息技术有限公司 | 深度图像成像系统和方法 |
CN114119696A (zh) * | 2021-11-30 | 2022-03-01 | 上海商汤临港智能科技有限公司 | 深度图像的获取方法及装置、系统、计算机可读存储介质 |
CN117560480A (zh) * | 2024-01-09 | 2024-02-13 | 荣耀终端有限公司 | 一种图像深度估计方法及电子设备 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102385237A (zh) * | 2010-09-08 | 2012-03-21 | 微软公司 | 基于结构化光和立体视觉的深度相机 |
CN103796004A (zh) * | 2014-02-13 | 2014-05-14 | 西安交通大学 | 一种主动结构光的双目深度感知方法 |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7663502B2 (en) * | 1992-05-05 | 2010-02-16 | Intelligent Technologies International, Inc. | Asset system control arrangement and method |
EP1751495A2 (en) * | 2004-01-28 | 2007-02-14 | Canesta, Inc. | Single chip red, green, blue, distance (rgb-z) sensor |
JP2008511080A (ja) * | 2004-08-23 | 2008-04-10 | サーノフ コーポレーション | 融合画像を形成するための方法および装置 |
WO2009046268A1 (en) * | 2007-10-04 | 2009-04-09 | Magna Electronics | Combined rgb and ir imaging sensor |
EP2678835B1 (en) | 2011-02-21 | 2017-08-09 | Stratech Systems Limited | A surveillance system and a method for detecting a foreign object, debris, or damage in an airfield |
CN103477186B (zh) * | 2011-04-07 | 2016-01-27 | 松下知识产权经营株式会社 | 立体摄像装置 |
US20130083008A1 (en) * | 2011-09-30 | 2013-04-04 | Kevin A. Geisner | Enriched experience using personal a/v system |
JP2013156109A (ja) * | 2012-01-30 | 2013-08-15 | Hitachi Ltd | 距離計測装置 |
EP2869263A1 (en) | 2013-10-29 | 2015-05-06 | Thomson Licensing | Method and apparatus for generating depth map of a scene |
CN103971405A (zh) * | 2014-05-06 | 2014-08-06 | 重庆大学 | 一种激光散斑结构光及深度信息的三维重建方法 |
CN204392458U (zh) * | 2015-02-05 | 2015-06-10 | 吉林纪元时空动漫游戏科技股份有限公司 | 采用双目立体视觉方法获取三维立体视觉的系统 |
2015
- 2015-06-23 CN CN201580021737.4A patent/CN106576159B/zh active Active
- 2015-06-23 JP JP2017566654A patent/JP2018522235A/ja active Pending
- 2015-06-23 WO PCT/CN2015/082134 patent/WO2016206004A1/zh active Application Filing
- 2015-06-23 KR KR1020187001258A patent/KR20180018736A/ko not_active Application Discontinuation
- 2015-06-23 US US15/739,600 patent/US10560686B2/en active Active
- 2015-06-23 EP EP15895904.9A patent/EP3301913A4/en not_active Ceased
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102385237A (zh) * | 2010-09-08 | 2012-03-21 | 微软公司 | 基于结构化光和立体视觉的深度相机 |
CN103796004A (zh) * | 2014-02-13 | 2014-05-14 | 西安交通大学 | 一种主动结构光的双目深度感知方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3301913A4 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108495113A (zh) * | 2018-03-27 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | 用于双目视觉系统的控制方法和装置 |
CN109194856A (zh) * | 2018-09-30 | 2019-01-11 | Oppo广东移动通信有限公司 | 电子装置的控制方法和电子装置 |
Also Published As
Publication number | Publication date |
---|---|
EP3301913A4 (en) | 2018-05-23 |
CN106576159A (zh) | 2017-04-19 |
JP2018522235A (ja) | 2018-08-09 |
US10560686B2 (en) | 2020-02-11 |
US20180184071A1 (en) | 2018-06-28 |
EP3301913A1 (en) | 2018-04-04 |
KR20180018736A (ko) | 2018-02-21 |
CN106576159B (zh) | 2018-12-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2016206004A1 (zh) | 一种获取深度信息的拍照设备和方法 | |
WO2018161877A1 (zh) | 处理方法、处理装置、电子装置和计算机可读存储介质 | |
CN109477710B (zh) | 基于点的结构化光系统的反射率图估计 | |
CN100442141C (zh) | 图像投影方法和设备 | |
US9392262B2 (en) | System and method for 3D reconstruction using multiple multi-channel cameras | |
US10475237B2 (en) | Image processing apparatus and control method thereof | |
US8988317B1 (en) | Depth determination for light field images | |
EP2848003B1 (en) | Method and apparatus for acquiring geometry of specular object based on depth sensor | |
US11375165B2 (en) | Image calibration for projected images | |
CN107734267B (zh) | 图像处理方法和装置 | |
US9064178B2 (en) | Edge detection apparatus, program and method for edge detection | |
CN107607957B (zh) | 一种深度信息获取系统及方法、摄像模组和电子设备 | |
CN108779978B (zh) | 深度感测系统和方法 | |
CN107734264B (zh) | 图像处理方法和装置 | |
EP3381015B1 (en) | Systems and methods for forming three-dimensional models of objects | |
CN107504917A (zh) | 一种三维尺寸测量方法及装置 | |
JP7140209B2 (ja) | 検出装置、情報処理装置、検出方法、及び情報処理プログラム | |
Beltran et al. | A comparison between active and passive 3d vision sensors: Bumblebeexb3 and Microsoft Kinect | |
JP2003185412A (ja) | 画像取得装置および画像取得方法 | |
CN107392955B (zh) | 一种基于亮度的景深估算装置及方法 | |
CN104457709B (zh) | 一种距离检测方法及电子设备 | |
WO2019198446A1 (ja) | 検出装置、検出方法、情報処理装置、及び情報処理プログラム | |
JP2020129187A (ja) | 外形認識装置、外形認識システム及び外形認識方法 | |
JP6565513B2 (ja) | 色補正装置、色補正方法及び色補正用コンピュータプログラム | |
JP2019129469A (ja) | 画像処理装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15895904 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017566654 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15739600 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2015895904 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20187001258 Country of ref document: KR Kind code of ref document: A |