WO2018161466A1 - Depth image acquisition system and method - Google Patents

Depth image acquisition system and method

Info

Publication number
WO2018161466A1
WO2018161466A1 (application PCT/CN2017/089036)
Authority
WO
WIPO (PCT)
Prior art keywords: depth, image, optical, depth image, image acquisition
Application number
PCT/CN2017/089036
Other languages
English (en)
French (fr)
Inventor
黄源浩
肖振中
刘龙
许星
Original Assignee
深圳奥比中光科技有限公司
Application filed by 深圳奥比中光科技有限公司
Publication of WO2018161466A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20: Image signal generators
    • H04N 13/275: Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50: Constructional details
    • H04N 23/55: Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H04N 23/60: Control of cameras or camera modules

Definitions

  • the present invention relates to the field of optical projection and measurement technologies, and in particular, to a depth image acquisition system and method.
  • the depth camera can be used to obtain depth images of objects, and can further perform 3D modeling, skeleton extraction, etc., and has a wide range of applications in 3D measurement and human-computer interaction.
  • as one type of depth camera, the structured light depth camera is currently the most widely used owing to its low cost and high imaging resolution; nevertheless, some problems remain.
  • the depth camera has a limited measurement range, and its measurement accuracy decreases exponentially with the measurement distance; at present, the depth image of a depth camera composed of a single projection module and a single imaging camera often contains shadow areas.
  • these problems in the depth images acquired by the depth camera negatively affect its applications, especially applications with high requirements on measurement range and measurement accuracy.
  • to solve the prior-art problems that depth information cannot be acquired in shadow regions and that measurement accuracy degrades sharply as the measurement distance increases, the present invention provides a depth image acquisition system and method.
  • the present invention adopts the following technical solutions:
  • a depth image acquisition system comprising:
  • an optical projection unit comprising at least two optical projectors, the at least two optical projectors being configured to emit structured light images of respective wavelengths;
  • an image acquisition unit comprising a filter and an image sensor, the filter comprising at least two filter units that respectively pass the light emitted by the at least two optical projectors, the image sensor being configured to receive the light passing through the filter, convert it into an optical image, and transmit the optical image to the processor unit;
  • a processor unit configured to receive the optical image and compute a depth image from it.
  • a storage unit is further included for storing the depth image.
  • the processor unit comprises: one or more processors; a memory; and one or more programs stored in the memory and configured to be executed by the one or more processors,
  • the programs including instructions for performing the following steps: receiving the optical image; computing from the optical image the structured light images corresponding to the at least two projectors; and computing the corresponding depth images from the at least two structured light images.
  • the processor unit is further configured to control the projection of the optical projection unit and/or the image acquisition of the image acquisition unit.
  • the at least two structured light images differ in at least one of wavelength, light intensity, and pattern density.
  • the at least two optical projectors are disposed in the same plane as the image acquisition unit; the distance between the at least two optical projectors and the image acquisition unit is different.
  • the light source of each optical projector is a VCSEL array laser.
  • a method of acquiring a depth image using any of the systems described above comprises the following steps:
  • S1: using the at least two optical projectors of the optical projection unit to respectively emit structured light images of respective wavelengths into the target space;
  • S2: using the image acquisition unit to acquire an optical image and transmit it to the processor unit;
  • S3: using the processor unit to receive the optical image and perform computation to acquire the depth image.
  • the method of acquiring the depth image in step S3 includes computing the depth value of each pixel by the triangulation principle.
  • acquiring the depth image in step S3 comprises the processor unit fusing the at least two depth images to obtain a merged depth image.
  • the merging comprises: taking any one of the at least two depth images as a reference depth image, and replacing the corresponding depth values in the reference depth image with effective depth values from the remaining depth images, an effective depth value being the depth value at a pixel that is a hole in the reference depth image but not a hole in the remaining depth images.
  • the merging comprises: weighting and averaging pixel values of corresponding pixel values in the at least two depth images as pixel values of the fused depth image.
  • the fusing comprises: calculating pixel values of sub-pixels by using the corresponding pixel values in the at least two depth images to improve resolution of the depth image.
  • a computer readable storage medium storing a computer program for use with a depth image acquisition device, the computer program being executed by a processor to implement any of the methods described above.
  • the invention has the beneficial effects of providing a depth image acquisition system in which the optical projection unit emits structured light images of at least two wavelengths, with the image acquisition unit achieving synchronous acquisition of images of different wavelengths.
  • the processor unit acquires the optical image and processes it into depth images without parallax; the depth images may correspond to depth images at different viewing angles, eliminating the shadow problem of a single depth image, or to depth images at different distances, achieving measurement over a larger depth range.
  • FIG. 1 is a schematic diagram of the image acquisition system of Embodiment 1 of the present invention placed in a mobile device.
  • FIG. 2 is a schematic diagram of a depth image acquisition system according to Embodiment 2 of the present invention.
  • Figure 3 is a schematic diagram of an image acquisition unit of Embodiments 1 and 2 of the present invention.
  • FIG. 4 is a schematic diagram of a filter unit of an image acquisition unit according to Embodiment 3 of the present invention.
  • FIG. 5 is a schematic diagram of a process of processing an image by a processor unit according to Embodiment 4 of the present invention.
  • FIG. 6 is a schematic diagram of a method of acquiring a depth image according to Embodiments 1, 2, 3, and 4 of the present invention.
  • 1 - first optical projector, 2 - image acquisition unit, 21 - filter unit, 22 - image sensor unit, 3 - second optical projector, 4 - mobile device, 5 - processor unit, 6 - light ray, 7 - lens.
  • as shown in FIG. 1, a schematic diagram of the image acquisition system of an embodiment of the present invention placed in a mobile device, this is a specific application of the depth image acquisition system of the present invention as a built-in unit of a mobile device.
  • the depth image acquisition system is embedded in the mobile device 4 as an embedded unit, including a first optical projector 1, an image acquisition unit 2, and a second optical projector 3; the processor used is the AP processor of the mobile device.
  • in this embodiment, the mobile device 4 is a mobile phone; the depth image acquisition system is embedded at the top of the mobile device 4, with the first optical projector 1, the image acquisition unit 2, and the second optical projector 3 disposed in the same plane; the distances between the at least two optical projectors and the image acquisition unit are different.
  • the mobile device 4 embedded in the image acquisition system can be used to acquire a depth image of the target, and can be further used for applications such as 3D scanning, 3D modeling, 3D recognition, and the like.
  • the mobile device 4 may also be a tablet (PAD), a computer, a smart TV, etc.; the embedded location may also be other parts, such as a side, the bottom, or the back.
  • the method for acquiring a depth image by the mobile device 4 embedded in the image acquisition system of the present embodiment includes the following steps:
  • the first optical projector 1 emits a first structured light image at a first wavelength, and the second optical projector 3 emits a second structured light image at a second wavelength;
  • the first wavelength and the second wavelength are different wavelengths of infrared light;
  • the first structured light image and the second structured light image have different light intensities;
  • the first structured light image and the second structured light image have different pattern densities.
  • the structured light image may be, for example, an infrared or ultraviolet image; there are also many types of structured light, such as speckles and stripes; the light sources of the first optical projector 1 and the second optical projector 3 may be VCSEL array lasers.
  • the first optical projector 1, the image capturing unit 2, and the second optical projector 3 are disposed on the same baseline, and the first optical projector 1 and the second optical projector 3 are respectively located on both sides of the image capturing unit 2, And the distance between the first optical projector 1 and the image acquisition unit 2 is greater than the distance between the second optical projector 3 and the image acquisition unit 2.
  • the relative positions of the first optical projector 1, the image acquisition unit 2, and the second optical projector 3 may be unrestricted; alternatively, the distances from the first optical projector 1 and the second optical projector 3 to the image acquisition unit 2 may be set differently in other ways.
  • the image acquisition unit 2 includes a filter unit 21 and an image sensor unit 22; the filter unit 21 includes a first filter unit and a second filter unit that pass light of the first and second wavelengths, respectively; the image sensor unit 22 acquires the optical image and transmits it to the processor unit.
  • a point in space is imaged onto a pixel of the image sensor by light ray 6 focused through lens 7; the image sensor converts light intensity into a corresponding digital signal.
  • the image sensor may be a CMOS or a CCD.
  • the processor unit used in this embodiment is the AP processor in the mobile device 4, which receives the optical image and processes and computes it into a depth image.
  • the processor unit may also include multiple processors, such as a dedicated SoC chip for depth acquisition together with the AP processor in the mobile device, where the dedicated SoC chip computes the depth images while the AP processor can be used for image fusion and other functions.
  • the processor unit includes: one or more processors; a memory; and one or more programs stored in the memory and configured to be executed by the one or more processors, the programs including instructions for performing the following steps: receiving the optical image; computing a first structured light image and a second structured light image from the optical image; computing a first depth image from the first-wavelength structured light image and a second depth image from the second-wavelength structured light image.
  • the processor unit is further configured to control the projection of the optical projection unit and the image acquisition of the image acquisition unit.
  • in a variant, the processor unit controls the projection of the optical projection unit or the image acquisition of the image acquisition unit.
  • the depth image is obtained by computing, for each pixel, the deviation between the structured light image and a reference structured light image, and then computing the depth value of each pixel from the deviation by the triangulation principle; the reference structured light image is a structured light image captured in advance on a plane at a known distance from the image acquisition unit.
  • the calculation program of the processor unit is further configured to fuse the first depth image and the second depth image to obtain a third depth image.
  • the merging includes: taking the first (or second) depth image as reference and replacing the corresponding depth values in it with effective depth values from the second (or first) depth image;
  • an effective depth value is the depth value at a pixel that is a hole in the first (or second) depth image but not a hole in the second (or first) depth image.
  • the merging includes: a pixel value obtained by weighting and averaging corresponding pixel values in the first depth image and the second depth image as a pixel value of the fused depth image.
  • the merging includes calculating a pixel value of the sub-pixel by using the corresponding pixel value in the first depth image and the second depth image to improve a resolution of the depth image.
  • the method for processing and calculating the depth image by the processor unit may be used in whole or in part according to actual needs.
  • FIG. 2 is a schematic diagram of the depth image acquisition system of this embodiment.
  • the depth image acquisition system is an independent device including a first optical projector 1, an image acquisition unit 2, a second optical projector 3, and a processor unit 5.
  • the method for acquiring a depth image by the depth image acquisition system includes the following steps:
  • the optical projection unit includes a first optical projector 1 and a second optical projector 3; the first optical projector 1 emits a first structured light image at a first wavelength, and the second optical projector 3 emits a second structured light image at a second wavelength;
  • the image pickup unit 2 includes a filter unit 21 and an image sensor unit 22;
  • the filter unit 21 includes a first filter unit and a second filter unit that pass light of the first wavelength and the second wavelength, respectively;
  • the image sensor unit 22 is configured to acquire an optical image and transmit the optical image to a processor unit;
  • the processor unit 5 is configured to receive the optical image, and process and calculate a depth image.
  • unlike Embodiment 1, the depth image acquisition system here is used as an independent device and is connected to other devices through an interface for outputting/inputting data; here the interface is a USB interface.
  • the depth image acquisition system further includes a storage unit for storing the acquired depth image.
  • the output/input data may also pass through other types of interfaces, Wi-Fi, and the like.
  • FIG. 4 is a schematic diagram of the filter unit of the image acquisition unit according to an embodiment of the present invention.
  • an ordinary RGB camera uses a Bayer filter, whose filter units are equal in number to, and in one-to-one correspondence with, the pixels of the image sensor.
  • the Bayer filter has filter units that pass red, green, and blue light, respectively; considering that the human eye is more sensitive to green light, the ratio of the three is R (25%) : G (50%) : B (25%).
  • the depth image acquisition system includes a first optical projector 1, an image acquisition unit 2, a second optical projector 3, and a processor unit 5.
  • the filter unit 21 of the image acquisition unit 2 consists of two parts, where IR1 and IR2 denote two kinds of infrared light of different wavelengths; pixels corresponding to IR1 can capture an infrared image at the IR1 wavelength, and pixels corresponding to IR2 capture an infrared image at the IR2 wavelength.
  • the first optical projector 1 emits IR1 infrared structured light and the second optical projector 3 emits IR2 infrared structured light, so the image sensor 22 simultaneously records the structured light information of the first optical projector 1 and the second optical projector 3.
  • since each kind of information occupies only part of the pixels (in this embodiment the ratio of the two is 1:1), the intensity of the other component at each pixel must be restored by interpolation, finally achieving synchronous acquisition of the complete first and second structured light images; the interpolation uses a weighted average.
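The interpolation step above can be sketched as follows. This is a minimal illustration rather than the patent's implementation: it assumes a 1:1 checkerboard layout of IR1/IR2 pixels (the patent does not fix the exact arrangement) and fills each missing component with a plain average of the available 4-neighbours.

```python
import numpy as np

def split_and_interpolate(raw):
    """Split a checkerboard-mosaiced frame into full-resolution IR1/IR2 images.

    Assumes IR1 occupies pixels where (row + col) is even and IR2 where it is
    odd; missing samples are filled by averaging the valid 4-neighbours.
    """
    h, w = raw.shape
    rows, cols = np.indices((h, w))
    ir1_mask = (rows + cols) % 2 == 0

    def fill(mask):
        img = np.where(mask, raw, 0.0).astype(float)
        weight = mask.astype(float)
        # Sum the 4-neighbours of every pixel (zero padding at the border).
        pad_i = np.pad(img, 1)
        pad_w = np.pad(weight, 1)
        neigh = (pad_i[:-2, 1:-1] + pad_i[2:, 1:-1] +
                 pad_i[1:-1, :-2] + pad_i[1:-1, 2:])
        count = (pad_w[:-2, 1:-1] + pad_w[2:, 1:-1] +
                 pad_w[1:-1, :-2] + pad_w[1:-1, 2:])
        interp = np.divide(neigh, count, out=np.zeros_like(neigh),
                           where=count > 0)
        # Keep measured samples; use interpolated values elsewhere.
        return np.where(mask, img, interp)

    return fill(ir1_mask), fill(~ir1_mask)
```

A weighted kernel (as the patent's weighted average suggests) or any other demosaicing scheme could replace the uniform 4-neighbour average without changing the overall flow.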
  • FIG. 6 it is a method for acquiring a depth image by the depth image acquisition system of this embodiment.
  • a computer readable storage medium stores a computer program for use with a depth image acquisition device, the computer program being executed by a processor to implement any of the methods of the present invention.
  • the first optical projector 1 and the second optical projector 3 emit near- and far-infrared light respectively, so IR1 and IR2 of the filter are used to acquire the near-infrared image and the far-infrared image, respectively. It should be noted that other variant embodiments of the invention may employ combinations and applications of any other wavelengths.
  • the depth image acquisition system includes a first optical projector 1, an image acquisition unit 2, a second optical projector 3, and a processor unit 5.
  • the method by which the processor unit processes the optical image includes: computing a first structured light image and a second structured light image from the optical image; acquiring the depth images includes: computing a first depth image from the first-wavelength structured light image and a second depth image from the second-wavelength structured light image.
  • first, the image sensor 22 captures an optical image containing two wavelengths (such as near-infrared and far-infrared light); next, the optical image is output to the processor unit 5, which splits it in two: a first structured light image containing the structured light information emitted by the first optical projector 1, and a second structured light image containing the structured light information emitted by the second optical projector 3; the structured light images are further computed by the processor unit into the first and second depth images; finally, the first and second depth images are fused into a third depth image and output; the first depth image and the second depth image may also be output separately.
  • the principle of computing a depth image from a structured light image is the structured light triangulation principle. Taking a speckle image as an example, a structured light image on a plane of known depth must be captured in advance as a reference image; the processor unit 5 then uses the currently acquired structured light image and the reference image to compute the deviation (deformation) Δ of each pixel through an image matching algorithm, and finally computes the depth by the triangulation principle as Z_D = B·f·Z_0 / (B·f + Δ·Z_0), where:
  • Z_D is the depth of the three-dimensional point from the acquisition module, i.e., the depth data sought;
  • B is the distance between the acquisition camera and the structured light projector;
  • Z_0 is the depth of the reference-image plane from the acquisition module;
  • f is the focal length of the lens in the acquisition camera.
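As a worked illustration, the sketch below evaluates one standard form of this triangulation relation using the variables defined above. The function name and the sign convention for the deviation (positive for points nearer than the reference plane) are illustrative assumptions, not details fixed by the patent.

```python
def depth_from_deviation(delta, b, z0, f):
    """Structured-light triangulation: recover depth from the per-pixel
    deviation of a speckle block relative to the reference image.

    delta : deviation in pixels (assumed positive for points nearer than
            the reference plane)
    b     : baseline between acquisition camera and projector (e.g. mm)
    z0    : depth of the reference plane from the acquisition module (mm)
    f     : focal length of the camera lens, expressed in pixels

    Similar triangles give  delta / f = b * (1/Z_D - 1/z0),  hence:
    """
    return b * f * z0 / (b * f + delta * z0)
```

At delta = 0 the formula returns z0 (a point on the reference plane), and a positive deviation yields a depth smaller than z0, which is the expected behaviour of the relation.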
  • FIG. 6 shows the method by which the depth image acquisition system of this embodiment acquires a depth image.
  • in a variant of this embodiment, the intensity and density of the structured light pattern projected by the first optical projector 1 are both greater than those of the second optical projector 3, and the distance between the first optical projector 1 and the image acquisition unit 2 is also greater than that of the second optical projector. The purpose of this configuration is that the first structured light image can contain target images at a greater distance while having better structured light features for distant targets, so that for distant objects the processor unit 5 can obtain more accurate first depth information; the second structured light image can only acquire second depth information at close range, and its depth information may contain holes at long range.
  • because the first and second structured light images are acquired by the same image sensor, there is no parallax between them, and the pixels of the first and second depth images correspond one to one; since the depth information of distant objects is more accurate and reliable in the first depth image, while that of near objects is more accurate and reliable in the second depth image, the two depth images can be fused.
  • one fusion method: first select a depth threshold; for each pixel, judge whether the pixel value in the first and second depth images reaches the depth threshold; if it is below the threshold, take the pixel value in the second depth image as the value of that pixel, and otherwise take the value in the first depth image.
  • after this fusion a third depth image is obtained, and each pixel in the third depth image has higher precision than in the first and second depth images.
  • another fusion method is a weighted averaging scheme: the corresponding pixels in the first depth image and the second depth image are weighted-averaged to obtain a third depth image with higher precision.
  • the weighting factor can be a variable. For example, for a close object, the pixel value in the second depth image will have a higher weight.
  • yet another fusion method is to create an image with a higher resolution than the current acquisition camera sensor and to compute the value of each pixel of the created image from the pixels of the first depth image and the second depth image, finally obtaining a depth image with higher resolution.
  • for example, taking the first depth image as the reference image, the second depth image is combined with it to compute the values at the 1/2, 1/4, etc. sub-pixel positions of the first depth image, thereby increasing the resolution of the depth image.
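The first two fusion strategies above can be sketched as follows; the function names and the convention that the threshold is tested against the far-range first depth image are illustrative assumptions, not details fixed by the patent.

```python
import numpy as np

def fuse_by_threshold(d1, d2, depth_threshold):
    """Per-pixel selection: where the first (far-range) depth is below the
    threshold, trust the second (near-range) image; otherwise keep the first."""
    return np.where(d1 < depth_threshold, d2, d1)

def fuse_by_weighted_average(d1, d2, w1):
    """Weighted average of corresponding pixels; w1 may be a scalar or a
    per-pixel weight map (e.g. giving d2 more weight for near objects)."""
    return w1 * d1 + (1.0 - w1) * d2
```

Because the two depth maps are pixel-aligned (no parallax), both operations are simple element-wise array expressions.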
  • in another embodiment, the first optical projector 1 and the second optical projector 3 are located on the two sides of the image acquisition unit 2; for a given object, the following phenomenon may occur: the depth information of part of the region on the left side of the object cannot be obtained in the first depth image, while the depth information of part of the region on the right side of the object cannot be obtained in the second depth image.
  • this phenomenon is ubiquitous in depth cameras consisting of a single optical projector and a single image acquisition unit, because one side of a protrusion on the object cannot be illuminated by the optical projector, similar to shadow regions in optical illumination.
  • in this situation, the pixel values of the first depth image and the second depth image can complement each other, and the complemented third depth image will contain no shadow regions with empty depth information.
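A minimal sketch of this complementary fusion; it assumes holes are encoded as depth 0 (the encoding is not specified in the patent).

```python
import numpy as np

def complement_shadows(d1, d2, hole_value=0.0):
    """Fill shadow holes in d1 with valid depths from d2 (swap the arguments
    for the opposite direction); pixels correspond one-to-one because both
    depth images come from the same image sensor."""
    out = d1.copy()
    # A pixel is fixable where d1 has a hole but d2 carries a valid depth.
    holes = (d1 == hole_value) & (d2 != hole_value)
    out[holes] = d2[holes]
    return out
```

Pixels that are holes in both images remain holes, matching the physical situation where neither projector illuminates the point.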
  • in some variants of the embodiments, the image acquisition system may include more optical projectors according to actual needs, for example three or four; there is no specific restriction on the spatial arrangement of the optical projectors, and their application follows the same essential principle as the embodiments above, so it is not described again.
  • note that with different numbers of optical projectors and different specific arrangements, the number of filter units of the corresponding image acquisition unit also differs; the ultimate goal is to ensure that all the light projected by the optical projectors can pass through the filter and be received by the image sensor, which converts it into an optical image and transmits the optical image to the processor unit; the corresponding processor unit acquires the optical image, computes the depth image corresponding to each structured light image, and may further fuse the depth images; with different numbers of depth images the specific fusion method differs slightly, but all of this belongs to the scope protected by the present invention.
  • with the depth image acquisition system and method of the present invention, multiple optical projectors can be configured as needed to emit structured light images of multiple wavelengths; the image acquisition unit achieves synchronous acquisition of images of different wavelengths, and the processor unit acquires the optical images and processes them into depth images without parallax; the depth images may correspond to depth images at different viewing angles, eliminating the shadow problem of a single depth image, or to depth images at different distances, achieving measurement over a larger depth range; specific applications to other aspects of specific problems should also be regarded as within the scope protected by the present invention.

Abstract

The present application discloses a depth image acquisition system and method. The depth image acquisition system includes: an optical projection unit including at least two optical projectors, the at least two optical projectors being configured to emit structured light images of respective wavelengths; an image acquisition unit including a filter and an image sensor; and a processor unit configured to receive the optical image and process it into a depth image. The beneficial effects of the present application are: a depth image acquisition system and method are provided in which the optical projection unit emits structured light images of at least two wavelengths; the image acquisition unit achieves synchronous acquisition of images of different wavelengths, and the processor unit acquires the optical image and processes it into depth images without parallax; the depth images may correspond to depth images at different viewing angles, eliminating the shadow problem of a single depth image, or to depth images at different distances, achieving measurement over a larger depth range.

Description

Depth image acquisition system and method
Technical Field
The present invention relates to the field of optical projection and measurement technologies, and in particular to a depth image acquisition system and method.
Background
A depth camera can be used to obtain depth images of objects, on which 3D modeling, skeleton extraction, and so on can further be performed; it has very wide applications in fields such as 3D measurement and human-computer interaction. As one type of depth camera, the structured light depth camera is currently the most widely used owing to its low cost and high imaging resolution; nevertheless, some problems remain. A depth camera has a limited measurement range, and its measurement accuracy decreases exponentially with the measurement distance; at present, the depth image of a commonly used depth camera composed of a single projection module and a single imaging camera often contains shadow areas. These problems in the depth images acquired by depth cameras negatively affect their applications, especially applications with high requirements on measurement range and measurement accuracy.
Summary of the Invention
To solve the prior-art problems that depth information cannot be acquired in shadow regions and that measurement accuracy degrades sharply as the measurement distance increases, the present invention provides a depth image acquisition system and method.
To solve the above problems, the present invention adopts the following technical solutions:
A depth image acquisition system, comprising:
an optical projection unit comprising at least two optical projectors, the at least two optical projectors being configured to emit structured light images of respective wavelengths;
an image acquisition unit comprising a filter and an image sensor, the filter comprising at least two filter units that respectively pass the light emitted by the at least two optical projectors, the image sensor being configured to receive the light passing through the filter, convert it into an optical image, and transmit the optical image to a processor unit;
a processor unit configured to receive the optical image and compute a depth image from it.
Preferably, the system further comprises a storage unit for storing the depth image.
Preferably, the processor unit comprises: one or more processors; a memory; and one or more programs stored in the memory and configured to be executed by the one or more processors, the programs including instructions for performing the following steps: receiving the optical image; computing from the optical image the structured light images corresponding to the at least two projectors; and computing the corresponding depth images from the at least two structured light images.
Preferably, the processor unit is further configured to control the projection of the optical projection unit and/or the image acquisition of the image acquisition unit.
Preferably, the at least two structured light images differ in at least one of wavelength, light intensity, and pattern density.
Preferably, the at least two optical projectors and the image acquisition unit are disposed in the same plane, and the distances between the at least two optical projectors and the image acquisition unit are different.
Preferably, the light source of each optical projector is a VCSEL array laser.
A method of acquiring a depth image using any of the depth image acquisition systems described above, comprising the following steps:
S1: using the at least two optical projectors of the optical projection unit to respectively emit structured light images of respective wavelengths into the target space;
S2: using the image acquisition unit to acquire an optical image and transmit it to the processor unit;
S3: using the processor unit to receive the optical image and perform computation to acquire the depth image.
Preferably, acquiring the depth image in step S3 includes computing the depth value of each pixel by the triangulation principle.
Preferably, acquiring the depth image in step S3 includes the processor unit fusing the at least two depth images to obtain a merged depth image.
Preferably, the fusing includes: taking any one of the at least two depth images as a reference depth image, and replacing the corresponding depth values in the reference depth image with effective depth values from the remaining depth images, an effective depth value being the depth value at a pixel that is a hole in the reference depth image but not a hole in the remaining depth images.
Preferably, the fusing includes: taking the weighted average of corresponding pixel values in the at least two depth images as the pixel value of the fused depth image.
Preferably, the fusing includes: computing sub-pixel values from the corresponding pixel values in the at least two depth images to increase the resolution of the depth image.
A computer-readable storage medium storing a computer program for use with a depth image acquisition device, the computer program being executed by a processor to implement any of the methods described above.
The beneficial effects of the present invention are: a depth image acquisition system is provided in which the optical projection unit emits structured light images of at least two wavelengths; the image acquisition unit achieves synchronous acquisition of images of different wavelengths, and the processor unit acquires the optical image and processes it into depth images without parallax; the depth images may correspond to depth images at different viewing angles, eliminating the shadow problem of a single depth image, or to depth images at different distances, achieving measurement over a larger depth range.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the image acquisition system of Embodiment 1 of the present invention placed in a mobile device.
FIG. 2 is a schematic diagram of the depth image acquisition system of Embodiment 2 of the present invention.
FIG. 3 is a schematic diagram of the image acquisition unit of Embodiments 1 and 2 of the present invention.
FIG. 4 is a schematic diagram of the filter unit of the image acquisition unit of Embodiment 3 of the present invention.
FIG. 5 is a schematic diagram of the image-processing flow of the processor unit of Embodiment 4 of the present invention.
FIG. 6 is a schematic diagram of the depth image acquisition method of Embodiments 1, 2, 3, and 4 of the present invention.
In the figures: 1 - first optical projector, 2 - image acquisition unit, 21 - filter unit, 22 - image sensor unit, 3 - second optical projector, 4 - mobile device, 5 - processor unit, 6 - light ray, 7 - lens.
Detailed Description
The present invention is described in detail below through specific embodiments with reference to the accompanying drawings, for a better understanding of the invention; the following embodiments, however, do not limit its scope. In addition, the illustrations provided in the following embodiments describe the basic concept of the invention only schematically: the drawings show only the components related to the invention rather than the number, shape, and size of components in an actual implementation, in which the shape, number, and proportion of each component may vary arbitrarily and the component layout may be more complex.
Embodiment 1
As shown in FIG. 1, a schematic diagram of the image acquisition system of an embodiment of the present invention placed in a mobile device, this is a specific application of the depth image acquisition system of the present invention as a built-in unit of a mobile device. The depth image acquisition system is embedded in the mobile device 4 as an embedded unit and includes a first optical projector 1, an image acquisition unit 2, and a second optical projector 3; the processor used is the AP processor of the mobile device. In this embodiment, the mobile device 4 is a mobile phone; the depth image acquisition system is embedded at the top of the mobile device 4, with the first optical projector 1, the image acquisition unit 2, and the second optical projector 3 disposed in the same plane; the distances between the at least two optical projectors and the image acquisition unit are different. The mobile device 4 with the embedded image acquisition system can be used to acquire a depth image of a target, and further for applications such as 3D scanning, 3D modeling, and 3D recognition. In some variants of this embodiment, the mobile device 4 may also be a tablet (PAD), a computer, a smart TV, etc.; the embedding position may also be elsewhere, such as the side, the bottom, or the back.
As shown in FIG. 6, the method by which the mobile device 4 with the embedded image acquisition system of this embodiment acquires a depth image includes the following steps:
(1) The first optical projector 1 emits a first structured light image at a first wavelength, and the second optical projector 3 emits a second structured light image at a second wavelength; the first wavelength and the second wavelength are different wavelengths of infrared light; the first and second structured light images have different light intensities; the first and second structured light images have different pattern densities.
In some variants of this embodiment, the structured light image may be, for example, an infrared or ultraviolet image; there are also many types of structured light, such as speckles and stripes; the light sources of the first optical projector 1 and the second optical projector 3 may be VCSEL array lasers.
The first optical projector 1, the image acquisition unit 2, and the second optical projector 3 are arranged on the same baseline; the first optical projector 1 and the second optical projector 3 may be located on the two sides of the image acquisition unit 2, respectively, and the distance between the first optical projector 1 and the image acquisition unit 2 is greater than the distance between the second optical projector 3 and the image acquisition unit 2.
In some variants of this embodiment, the relative positions of the first optical projector 1, the image acquisition unit 2, and the second optical projector 3 may be unrestricted, or the distances from the first optical projector 1 and the second optical projector 3 to the image acquisition unit 2 may be set differently in other ways.
(2) As shown in FIG. 3, the image acquisition unit 2 includes a filter unit 21 and an image sensor unit 22; the filter unit 21 includes a first filter unit and a second filter unit that pass light of the first and second wavelengths, respectively; the image sensor unit 22 acquires an optical image and transmits it to the processor unit. A point in space is imaged onto a pixel of the image sensor by light ray 6 focused through lens 7, and the image sensor converts light intensity into a corresponding digital signal. There is only one image acquisition unit 2 in the depth image acquisition system, used to synchronously capture the structured light images of the first optical projector 1 and the second optical projector 3.
In a variant of this embodiment, the image sensor may be a CMOS or a CCD.
(3) The processor unit used in this embodiment is the AP processor in the mobile device 4, which receives the optical image and processes and computes it into a depth image.
In some variants of this embodiment, the processor unit may also include multiple processors, for example a dedicated SoC chip for depth acquisition together with the AP processor in the mobile device, where the dedicated SoC chip computes the depth images while the AP processor can be used for image fusion and other functions.
The processor unit includes: one or more processors; a memory; and one or more programs stored in the memory and configured to be executed by the one or more processors, the programs including instructions for performing the following steps: receiving the optical image; computing a first structured light image and a second structured light image from the optical image; computing a first depth image from the first-wavelength structured light image and a second depth image from the second-wavelength structured light image.
The processor unit is further configured to control the projection of the optical projection unit and the image acquisition of the image acquisition unit.
In a variant of this embodiment, the processor unit controls the projection of the optical projection unit or the image acquisition of the image acquisition unit.
The depth image is obtained by computing, for each pixel, the deviation between the structured light image and a reference structured light image, and then computing the depth value of each pixel from the deviation by the triangulation principle; the reference structured light image is a structured light image captured in advance on a plane at a known distance from the image acquisition unit.
The computation program of the processor unit is further used to fuse the first depth image and the second depth image to obtain a third depth image.
The fusing includes: taking the first (or second) depth image as reference and replacing the corresponding depth values in it with effective depth values from the second (or first) depth image, an effective depth value being the depth value at a pixel that is a hole in the first (or second) depth image but not a hole in the second (or first) depth image.
The fusing includes: taking the weighted average of corresponding pixel values in the first and second depth images as the pixel value of the fused depth image.
The fusing includes: computing sub-pixel values from the corresponding pixel values in the first and second depth images to increase the resolution of the depth image.
The above methods by which the processor unit processes and computes the depth image may be adopted in whole or in part according to actual needs.
Embodiment 2
As shown in FIG. 2, a schematic diagram of the depth image acquisition system of this embodiment, the depth image acquisition system is an independent device including a first optical projector 1, an image acquisition unit 2, a second optical projector 3, and a processor unit 5.
As shown in FIG. 6, the method by which the depth image acquisition system acquires a depth image includes the following steps:
(1) The optical projection unit includes a first optical projector 1 and a second optical projector 3; the first optical projector 1 emits a first structured light image at a first wavelength, and the second optical projector 3 emits a second structured light image at a second wavelength;
(2) The image acquisition unit 2 includes a filter unit 21 and an image sensor unit 22; the filter unit 21 includes a first filter unit and a second filter unit that pass light of the first and second wavelengths, respectively; the image sensor unit 22 acquires an optical image and transmits it to the processor unit;
(3) The processor unit 5 receives the optical image and processes and computes it into a depth image.
Unlike Embodiment 1, the depth image acquisition system of this embodiment is an independent device connected to other devices through an interface for outputting/inputting data; here the interface is a USB interface. In this embodiment, the depth image acquisition system further includes a storage unit for storing the acquired depth images.
In a variant of this embodiment, data may also be output/input through other types of interfaces, Wi-Fi, and the like.
Embodiment 3
As shown in FIG. 4, a schematic diagram of the filter unit of the image acquisition unit of an embodiment of the present invention: an ordinary RGB camera uses a Bayer filter, whose filter units are equal in number to, and in one-to-one correspondence with, the pixels of the image sensor; the Bayer filter has filter units for passing red, green, and blue light respectively, and, considering that the human eye is more sensitive to green light, their ratio is R (25%) : G (50%) : B (25%). In this embodiment, the depth image acquisition system includes a first optical projector 1, an image acquisition unit 2, a second optical projector 3, and a processor unit 5. The filter unit 21 of the image acquisition unit 2 consists of two parts, where IR1 and IR2 denote two kinds of infrared light of different wavelengths; pixels corresponding to IR1 can capture an infrared image at the IR1 wavelength, and pixels corresponding to IR2 capture an infrared image at the IR2 wavelength. The first optical projector 1 emits IR1 infrared light and the second optical projector 3 emits IR2 infrared structured light, so the image sensor 22 simultaneously records the structured light information emitted by the first optical projector 1 and the second optical projector 3. Since each kind of information occupies only part of the pixels (in this embodiment the ratio of the two is 1:1), the intensity of the other component at each pixel must be restored by interpolation, finally achieving synchronous acquisition of the complete first and second structured light images. The interpolation uses a weighted average method.
In variants of this embodiment, other interpolation methods may be used; as they are existing techniques, they are not detailed here.
As shown in FIG. 6, this is the method by which the depth image acquisition system of this embodiment acquires a depth image.
In a variant of this embodiment, there is a computer-readable storage medium storing a computer program for use with a depth image acquisition device, the computer program being executed by a processor to implement any of the methods of the present invention.
In a variant of this embodiment, the first optical projector 1 and the second optical projector 3 emit near- and far-infrared light respectively, so IR1 and IR2 of the filter are used to acquire the near-infrared image and the far-infrared image, respectively. Note that other variant embodiments of the invention may employ combinations and applications of any other wavelengths.
实施例4
如图5所示,是根据本发明的一个实施例的处理器单元处理图像的示意图。深度图像获取系统包括第一光学投影仪1、图像采集单元2、第二光学投影仪3和处理器单元5。所述处理器单元5理所述光学图像的方法包括:由所述光学图像计算出第一结构光图像和第二结构光图像;获取所述的深度图像包括:利用第一波长结构光图像计算出第一深度图像;利用第二波长结构光图像计算出第二深度图像。
首先由图像传感器22包含两种波长(如近红外、远红外光)的光学图像;其次将该光学图像输出到处理器单元5,由处理器单元5将该光学图像一分为二,即包含第一光学投影仪1发射的结构光信息的第一结构光图像以及包含第二光学投影仪3发射的结构光信息的第二结构光图像;其中结构光图像将进一步由处理器单元计算得到第一与第二深度图像;最后将第一与第二深度图像融合成第三深度图像并输出;第一深度图像与第二深度图像也可以单独进行输出。
由结构光图像计算深度图像的原理即为结构光三角法原理。以散斑图像为例,预先需要对采集一幅已知深度平面上的结构光图像为参考图像,然后处理器 单元5利用当前获取的结构光图像与参考图像,通过图像匹配算法计算各个像素的偏离值(变形),最后利用三角法原理可以计算出深度,计算公式如下:
$$Z_D = \frac{B \, f \, Z_0}{B \, f + Z_0 \, \Delta}$$
where Z_D denotes the depth value of the three-dimensional space point from the acquisition module, i.e., the depth data to be determined; B is the distance between the acquisition camera and the structured-light projector; Z_0 is the depth value of the reference image plane from the acquisition module; and f is the focal length of the lens in the acquisition camera.
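As a numerical sketch of this triangulation step (the exact functional form and the sign convention of the offset are assumptions of this example, since conventions differ between systems):

```python
def depth_from_offset(delta, baseline, focal_px, z0):
    """Structured-light triangulation: convert the pixel offset `delta`
    between the live speckle image and the reference image (captured at
    known depth z0) into a depth value.
    Convention assumed here: delta = 0 reproduces the reference depth,
    and delta > 0 corresponds to a point closer than the reference plane."""
    return (baseline * focal_px * z0) / (baseline * focal_px + z0 * delta)

# Zero offset recovers the reference depth z0; a positive offset yields
# a smaller depth under the assumed convention.
z_ref = depth_from_offset(0.0, 0.05, 1000.0, 1.0)
z_near = depth_from_offset(5.0, 0.05, 1000.0, 1.0)
```

In practice `delta` comes from per-pixel image matching between the current structured-light image and the reference image, as described above.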
The specific image-processing method described above differs according to the configuration of the optical projectors.
Fig. 6 shows the method by which the depth image acquisition system of the present embodiment obtains a depth image.
In one variant implementation of the present embodiment, both the intensity and the density of the structured-light pattern projected by the first optical projector 1 are greater than those of the second optical projector 3, and the distance between the first optical projector 1 and the image acquisition unit 2 is also greater than that of the second optical projector. The purpose of this configuration is that the first structured-light image can contain targets at greater distances and has better structured-light features for distant targets, so that for objects at longer distances the processor unit 5 can obtain more accurate first depth information; the second structured-light image can only yield second depth information at close range, and phenomena such as holes may appear in its depth information at long range. Since the first structured-light image and the second structured-light image are acquired by the same image sensor, there is no parallax between them, and the pixels of the resulting first depth image and second depth image are therefore in one-to-one correspondence. As described above, the depth information of distant objects is more accurate and reliable in the first depth image, while the depth information of nearby objects is more accurate and reliable in the second depth image, so the two depth images can be fused.
One fusion approach is as follows: first select a depth threshold; then, for each pixel, determine whether the pixel value in the first depth image and the second depth image reaches the depth threshold; if it is below the threshold, take the pixel value from the second depth image as the value of that pixel, and otherwise take the value from the first depth image. After this fusion a third depth image is obtained, in which each pixel has higher accuracy than in the first and second depth images.
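A minimal sketch of this threshold rule (the text leaves open which image's value drives the comparison; this example uses the near-range second image's value, an assumption of the sketch):

```python
import numpy as np

def fuse_by_threshold(far_depth, near_depth, threshold):
    """Per pixel: below the threshold, trust the near-range (second)
    depth image; at or above it, trust the far-range (first) image."""
    return np.where(near_depth < threshold, near_depth, far_depth)

far = np.array([1000.0, 2000.0])   # first depth image (far-range)
near = np.array([400.0, 1500.0])   # second depth image (near-range)
fused = fuse_by_threshold(far, near, 800.0)
```

The first pixel (400, below the 800 threshold) comes from the near-range image; the second (1500, above it) comes from the far-range image.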
Another fusion approach is as follows: choose a weighted-average scheme by which corresponding pixels in the first depth image and the second depth image are weighted-averaged to obtain a more accurate third depth image. The weighting coefficient can be a variable; for example, for nearby objects the pixel values in the second depth image are given higher weight.
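One possible realization of such a distance-dependent weight (the linear ramp and the `near_limit` cutoff are assumptions of this sketch, not choices made in the original text):

```python
import numpy as np

def fuse_weighted(d1, d2, near_limit=1000.0):
    """Distance-dependent weighted average of two pixel-aligned depth
    images: the closer the scene point, the more weight the near-range
    image d2 receives (w2 -> 1 at depth 0, w2 -> 0 at near_limit)."""
    mean = 0.5 * (d1 + d2)                      # rough per-pixel distance
    w2 = np.clip(1.0 - mean / near_limit, 0.0, 1.0)
    return w2 * d2 + (1.0 - w2) * d1

# At a mean depth of 900, d2 receives weight 0.1 and d1 weight 0.9.
equal = fuse_weighted(np.array([500.0]), np.array([500.0]))
mixed = fuse_weighted(np.array([1000.0]), np.array([800.0]))
```

Any monotone weighting curve could be substituted; the essential point is that the coefficient varies per pixel rather than being a single constant.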
Yet another fusion approach is as follows: create an image with a higher resolution than that of the current acquisition camera's sensor, and compute the value of each pixel of the created image one by one from the pixels of the first depth image and the second depth image, ultimately obtaining a depth image of higher resolution. For example, taking the first depth image as the reference image, the values at 1/2, 1/4, etc. sub-pixel positions of the first depth image are computed with the aid of the second depth image, thereby increasing the resolution of the depth image.
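A crude sketch of the super-resolution idea, for a 2x upsampling: keeping the reference values at integer-pixel sites and filling the new half-pixel sites from both images. The simple averaging rule used for the sub-pixel estimate is an illustrative assumption:

```python
import numpy as np

def upsample_2x(ref, aux):
    """Create a 2x-resolution depth image from two pixel-aligned depth
    images: original pixels keep the reference values, while the new
    half-pixel positions take the mean of the co-located reference and
    auxiliary values (a deliberately simple sub-pixel estimate)."""
    h, w = ref.shape
    out = np.zeros((2 * h, 2 * w), dtype=float)
    out[0::2, 0::2] = ref                 # integer-pixel sites
    half = 0.5 * (ref + aux)              # crude sub-pixel estimate
    out[0::2, 1::2] = half
    out[1::2, 0::2] = half
    out[1::2, 1::2] = half
    return out

hires = upsample_2x(np.full((2, 2), 10.0), np.full((2, 2), 20.0))
```

A real system would interpolate with neighborhood support rather than a single co-located pair, but the structure of the computation is the same.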
In another embodiment, the first optical projector 1 and the second optical projector 3 are located on opposite sides of the image acquisition unit 2. For a given measured object, the following phenomenon may occur: in the first depth image the depth information of part of the region on the left side of the object cannot be obtained, while in the second depth image the depth information of part of the region on the right side of the object cannot be obtained. This phenomenon is common in depth cameras composed of a single optical projector and a single image acquisition unit, because a protrusion on the object prevents one side of the protrusion from being illuminated by the optical projector, similar to a shadow region in optical illumination. In this situation, the pixel values of the first depth image and the second depth image can complement each other, and the complemented third depth image will contain no shadow regions with empty depth information.
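The mutual complement of the two shadowed images can be sketched as follows (holes encoded as zero, and averaging where both images are valid, are assumptions of this example):

```python
import numpy as np

def complement_shadows(left_lit, right_lit, hole_value=0):
    """Combine two depth images whose projector shadows fall on opposite
    sides of the object: wherever one image has a hole, take the other's
    value; where both are valid, average them."""
    both = (left_lit != hole_value) & (right_lit != hole_value)
    out = np.where(left_lit != hole_value, left_lit, right_lit).astype(float)
    out[both] = 0.5 * (left_lit[both] + right_lit[both])
    return out

# Index 0 is shadowed in one image, index 1 in the other, index 2 in neither.
left = np.array([0.0, 100.0, 100.0])
right = np.array([90.0, 0.0, 110.0])
merged = complement_shadows(left, right)
```

Because both depth images come from the same sensor, no reprojection is needed before the per-pixel complement.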
In some variant embodiments of Embodiment 1, 2, 3 or 4, the image acquisition system may include multiple optical projectors according to actual needs, for example three or four; there is no specific restriction on the spatial arrangement of the optical projectors, and their application follows the same essential principle as the above embodiments, so it is not described again. It should be noted that as the number and specific arrangement of the optical projectors differ, the number of filter units of the corresponding image acquisition unit will differ accordingly; the ultimate goal is to ensure that the light projected by all the optical projectors can pass through the filter, and that the image sensor receives all the light passing through the filter, converts it into an optical image, and transmits the optical image to the processor unit. The processor unit then acquires the optical image, computes the depth image corresponding to each structured-light image, and may further perform depth-image fusion; the specific fusion method varies slightly with the number of depth images, but all such methods fall within the scope protected by the present invention. Using the depth image acquisition system and method of the present invention, multiple optical projectors can be arranged as needed to emit structured-light images at multiple wavelengths; the image acquisition unit achieves synchronous acquisition of the images at different wavelengths, and the processor unit acquires the optical image and processes it to obtain parallax-free depth images. The depth images may correspond to different angles, eliminating the shadow problem of a single depth image, or to different distances, achieving measurement over a larger depth range; specific applications to other aspects of particular problems should likewise be regarded as falling within the scope protected by the present invention.
The foregoing is a further detailed description of the present invention in conjunction with specific preferred embodiments, and the specific implementation of the present invention should not be considered limited to these descriptions. For those skilled in the art to which the present invention belongs, several equivalent substitutions or obvious modifications with the same performance or use may be made without departing from the concept of the present invention, and all of them should be regarded as falling within the protection scope of the present invention.

Claims (14)

  1. A depth image acquisition system, characterized by comprising:
    an optical projection unit comprising at least two optical projectors, the at least two optical projectors being used to emit structured-light images at respective wavelengths;
    an image acquisition unit comprising a filter and an image sensor, the filter comprising at least two filter units which respectively pass the light emitted by the at least two optical projectors, the image sensor being used to receive the light passing through the filter, convert it into an optical image, and transmit the optical image to a processor unit; and
    a processor unit for receiving the optical image and computing a depth image from it.
  2. The depth image acquisition system of claim 1, characterized by further comprising a storage unit for storing the depth image.
  3. The depth image acquisition system of claim 1, characterized in that the processor unit comprises: one or more processors; a memory; and one or more programs stored in the memory and configured to be executed by the one or more processors, the programs including instructions for performing the following steps: receiving the optical image; computing from the optical image the structured-light images corresponding to the at least two projectors; and computing the corresponding depth images from the at least two structured-light images.
  4. The depth image acquisition system of claim 1, characterized in that the processor unit is further used to control the projection of the optical projection unit and/or the image acquisition of the image acquisition unit.
  5. The depth image acquisition system of claim 1, characterized in that the at least two structured-light images differ in at least one of wavelength, light intensity, and pattern density.
  6. The depth image acquisition system of claim 1, characterized in that the at least two optical projectors and the image acquisition unit are arranged in the same plane, and the distances between the at least two optical projectors and the image acquisition unit are different.
  7. The depth image acquisition system of claim 1, characterized in that the light source of the optical projector is a VCSEL array laser.
  8. A method for obtaining a depth image using the depth image acquisition system of any one of claims 1-7, comprising the following steps:
    S1: using the at least two optical projectors of the optical projection unit to respectively emit structured-light images at respective wavelengths into the target space;
    S2: using the image acquisition unit to acquire an optical image and transmit the optical image to the processor unit;
    S3: using the processor unit to receive the optical image and perform computation to obtain the depth image.
  9. The method for obtaining a depth image of claim 8, characterized in that the method of obtaining the depth image in step S3 includes calculating the depth value of each pixel using the triangulation principle.
  10. The method for obtaining a depth image of claim 8, characterized in that obtaining the depth image in step S3 includes the processor unit fusing the at least two depth images to obtain a merged depth image.
  11. The method for obtaining a depth image of claim 10, characterized in that the fusion includes: taking any one of the at least two depth images as a reference depth image, and replacing the corresponding depth values in the reference depth image with valid depth values from the remaining depth images, a valid depth value being the depth value at a pixel that is a hole in the reference depth image but is not a hole in the remaining depth images.
  12. The method for obtaining a depth image of claim 10, characterized in that the fusion includes: taking the weighted average of corresponding pixel values in the at least two depth images as the pixel value of the fused depth image.
  13. The method for obtaining a depth image of claim 10, characterized in that the fusion includes: using the corresponding pixel values in the at least two depth images to compute sub-pixel values so as to increase the resolution of the depth image.
  14. A computer-readable storage medium storing a computer program for use in conjunction with a depth image acquisition device, the computer program being executed by a processor to implement the method of any one of claims 8-13.
PCT/CN2017/089036 2017-03-09 2017-06-19 Depth image acquisition system and method WO2018161466A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710138628.1A CN106954058B (zh) 2017-03-09 2017-03-09 Depth image acquisition system and method
CN201710138628.1 2017-03-09

Publications (1)

Publication Number Publication Date
WO2018161466A1 true WO2018161466A1 (zh) 2018-09-13

Family

ID=59466840

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/089036 WO2018161466A1 (zh) 2017-03-09 2017-06-19 Depth image acquisition system and method

Country Status (2)

Country Link
CN (1) CN106954058B (zh)
WO (1) WO2018161466A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113139998A * 2021-04-23 2021-07-20 北京华捷艾米科技有限公司 Depth image generation method and apparatus, electronic device, and computer storage medium

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107493411B * 2017-08-09 2019-09-13 Oppo广东移动通信有限公司 Image processing system and method
CN107395974B * 2017-08-09 2019-09-13 Oppo广东移动通信有限公司 Image processing system and method
CN107493412B * 2017-08-09 2019-09-13 Oppo广东移动通信有限公司 Image processing system and method
CN107610127B * 2017-09-11 2020-04-03 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic apparatus, and computer-readable storage medium
WO2019047984A1 * 2017-09-11 2019-03-14 Oppo广东移动通信有限公司 Image processing method and apparatus, electronic apparatus, and computer-readable storage medium
CN107749070B * 2017-10-13 2020-06-02 京东方科技集团股份有限公司 Depth information acquisition method and device, and gesture recognition apparatus
CN107741682A * 2017-10-20 2018-02-27 深圳奥比中光科技有限公司 Light-source projection device
CN109842789A * 2017-11-28 2019-06-04 奇景光电股份有限公司 Depth sensing device and depth sensing method
CN108333858A * 2018-01-23 2018-07-27 广东欧珀移动通信有限公司 Laser emitter, optoelectronic device, depth camera and electronic apparatus
CN108107663A * 2018-01-23 2018-06-01 广东欧珀移动通信有限公司 Laser emitter, optoelectronic device, depth camera and electronic apparatus
CN108564614B * 2018-04-03 2020-09-18 Oppo广东移动通信有限公司 Depth acquisition method and device, computer-readable storage medium, and computer device
CN108924408B * 2018-06-15 2020-11-03 深圳奥比中光科技有限公司 Depth imaging method and system
CN109190484A 2018-08-06 2019-01-11 北京旷视科技有限公司 Image processing method, apparatus and image processing device
CN110823515B * 2018-08-14 2022-02-01 宁波舜宇光电信息有限公司 Multi-station testing device for structured-light projection modules and testing method therefor
CN109756660B * 2019-01-04 2021-07-23 Oppo广东移动通信有限公司 Electronic device and mobile platform
WO2020206666A1 * 2019-04-12 2020-10-15 深圳市汇顶科技股份有限公司 Speckle-image-based depth estimation method and device, and face recognition system
CN110095781B * 2019-05-06 2021-06-01 歌尔光学科技有限公司 Distance measurement method and device based on an LBS projection system, and computer-readable storage medium
CN114543696B * 2020-11-24 2024-01-23 瑞芯微电子股份有限公司 Structured-light imaging device and method, medium and electronic apparatus
CN114219841B * 2022-02-23 2022-06-03 武汉欧耐德润滑油有限公司 Method for automatic recognition of lubricating-oil tank parameters based on image processing

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101365397A * 2005-12-08 2009-02-11 彼得·S·乐芙莉 Infrared dental imaging
CN102760234A * 2011-04-14 2012-10-31 财团法人工业技术研究院 Depth image acquisition device, system and method
US20140043309A1 (en) * 2012-08-10 2014-02-13 Nakhoon Go Distance detecting device and image processing apparatus including the same
US20160349369A1 (en) * 2014-01-29 2016-12-01 Lg Innotek Co., Ltd. Device for extracting depth information and method thereof

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8976249B2 (en) * 2011-11-04 2015-03-10 Empire Technology Development Llc IR signal capture for images
CN202794585U * 2012-08-30 2013-03-13 广州中国科学院先进技术研究所 Multi-channel integrated optical filter
KR20140075163A * 2012-12-11 2014-06-19 한국전자통신연구원 Method and apparatus for projecting patterns using structured light
CN204818380U * 2015-07-15 2015-12-02 广东工业大学 Near-infrared and structured-light dual-wavelength binocular-vision weld seam tracking system
CN105160680B * 2015-09-08 2017-11-21 北京航空航天大学 Design method for an interference-free depth camera based on structured light
CN206807664U * 2017-03-09 2017-12-26 深圳奥比中光科技有限公司 Depth image acquisition system



Also Published As

Publication number Publication date
CN106954058B (zh) 2019-05-10
CN106954058A (zh) 2017-07-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17900140

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17900140

Country of ref document: EP

Kind code of ref document: A1