WO2017121058A1 - All-optical information acquisition system - Google Patents

All-optical information acquisition system

Info

Publication number
WO2017121058A1
WO2017121058A1 (PCT/CN2016/083238; CN2016083238W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
color
optical
spectral
imaging
Prior art date
Application number
PCT/CN2016/083238
Other languages
English (en)
French (fr)
Inventor
曹汛
董辰辰
陈林森
马展
王瑶
Original Assignee
南京大学
纽约大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 南京大学 (Nanjing University) and 纽约大学 (New York University); filed critical 南京大学
Publication of WO2017121058A1 publication Critical patent/WO2017121058A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01JMEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28Investigating the spectrum
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10052Images from lightfield camera
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20228Disparity calculation for image-based rendering

Definitions

  • the invention relates to the field of computer imaging, and in particular to a system for all-optical information acquisition.
  • Computational photography is an emerging discipline at the intersection of computer vision, digital signal processing, and computer graphics. It aims to improve traditional cameras at the level of the imaging mechanism by combining computation, digital sensors, optical systems, and intelligent illumination, and to couple hardware design with software computing power, so as to break through the limitations of classical imaging models and digital cameras, enhance or extend the data-acquisition capability of traditional digital cameras, and capture real-world scene information comprehensively. Research on the all-optical (plenoptic) information of a scene is of great significance for 3D reconstruction, digital entertainment, security reconnaissance, and other fields.
  • traditional digital photography samples a two-dimensional projection subspace of the high-dimensional scene signal (commonly represented by the seven-dimensional plenoptic function), projecting high-dimensional scene information onto the camera's two-dimensional sampling subspace for acquisition.
  • the seven-dimensional plenoptic function describes the basic elements with which the human eye views light in the objective world: the viewpoint position (x, y, z), the ray direction, and the wavelength (λ), all at any time (t), seven variables in total.
  • traditional digital photography therefore loses and couples information in the other dimensions of the plenoptic function, including loss of angular information, loss of scene depth information, loss of multi-spectral information, and integral coupling of scene information over the exposure time.
  • a popular research direction in computational photography is the extension of traditional imaging technology in the spectral domain, namely hyperspectral imaging.
  • in terms of the principles of multi-spectral and visual technology, the human eye contains three different types of cone cells that sense different bands of the spectrum, so that light in a real scene is perceived as the three colors red, green, and blue. Correspondingly, the camera in the traditional sense follows the same cognitive principle of the human eye: it captures the red, green, and blue three-channel information of the scene through the different color integration curves of its charge-coupled device.
  • the object of the present invention is to provide an all-optical information acquisition system capable of simultaneously capturing high spectral resolution and depth information.
  • An all-optical information acquisition system comprises a sparse sampling imaging array, a beam-splitting device, a grayscale imaging device, a spectral optical-path acquisition device, a color imaging device, a color optical-path acquisition device, and an information joint-processing device. The optical centers of the sparse sampling imaging array, the beam-splitting device, the grayscale imaging device, and the spectral optical-path acquisition device lie on one optical axis, while the optical centers of the color imaging device and the color optical-path acquisition device lie on another, parallel optical axis. The sparse sampling imaging array converges and transmits the scene light to generate transmitted light and spatially samples the scene; the transmitted light is dispersed by the beam-splitting device into spectra at multiple wavelengths, imaged by the grayscale imaging device, and finally captured by the spectral optical-path acquisition device and transmitted to the information joint-processing device.
  • the color imaging device and the color optical-path acquisition device obtain a color image and transmit it to the information joint-processing device, which processes the two image streams together with the relative positions of the grayscale imaging device and the color imaging device, reconstructing the spectrum with bilateral filtering and the depth information with a disparity algorithm, thereby obtaining the plenoptic information of the scene.
  • the sparsely sampled imaging array includes a first lens, a mask, and a second lens that are sequentially arranged, wherein the first lens and the second lens are a single lens or a lens group, and the mask is a sparse sampling device.
  • the aperture size of the mask is chosen so that light passing through the mask holes does not alias on the grayscale imaging device after passing through the beam-splitting device.
  • the aperture size of the second lens can be adjusted to cover the entire range of scenes captured.
  • the first lens and the second lens employ spherical-aberration-corrected lens groups.
  • the distance between the two parallel optical axes is 5-10 cm.
  • the spectral light path collecting device and the color light path collecting device synchronously collect information.
  • the beam-splitting device is an Amici prism.
  • the grayscale imaging device and the color imaging device must first be registered with a screen-mapping method before capturing images, as follows: a distinctive pattern is displayed on a screen, the screen is kept stationary, and the color imaging device photographs the pattern on the screen, yielding the correspondence between the color-image coordinates and the screen coordinates.
  • the grayscale imaging device then photographs the pattern, and matching the feature points of the image captured by the grayscale imaging device against the feature points of the pattern displayed on the screen determines the correspondence between the grayscale-image coordinates and the screen coordinates.
  • from the two sets of correspondences so obtained, the mapping between the grayscale imaging device and the color imaging device is established.
  • the spectral optical path collecting device and the color optical path collecting device both use an embedded development board for data storage.
  • the all-optical information acquisition system of the present invention can realize joint acquisition of all-optical information including spectrum and depth, and can solve the problem that the conventional imaging apparatus lacks high-resolution spectral information and depth information.
  • By adjusting the parameters of the system's devices, a larger depth of field and greater light throughput can be obtained; by using higher-precision imaging devices, the high spectral resolution and more accurate depth information required for scene reconstruction can be achieved.
  • FIG. 1 is a structural block diagram of a plenoptic information collection system according to the present invention.
  • FIG. 2 is a layout diagram of a mask in an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a plenoptic information collection system according to an embodiment of the present invention. wherein 11-telescope, 12-mask, 13-convergence eyepiece, 14-split prism, 15-gray camera, 16-color camera, 17- computer;
  • in the all-optical information acquisition system of the present invention, a first lens 2, a mask 3, and a second lens 4 constitute the sparse sampling imaging array 1.
  • the low resolution spectral image is acquired by the sparse sampling imaging array 1, the spectroscopic device 5, the grayscale imaging device 6, and the spectral light path collecting device 7.
  • the color imaging device 8 and the color optical-path acquisition device 9 are used to acquire a high-resolution color image.
  • the two-way information is processed by the information joint processing device 10 to reconstruct all-optical information including a high-resolution spectrum and a precise depth.
  • a mask 3 is located at the imaging plane between the first lens 2 and the second lens 4, converging and transmitting the scene light to generate transmitted light and spatially sampling the scene.
  • when selecting the first lens 2, the aperture of the lens determines the luminous flux and the focal length determines the depth of field; the focal length is varied to image different depth ranges of the scene.
  • in the embodiment, the selected lens has a focal length of 150 mm and an aperture of 50 mm.
  • the mask 3 is chosen so that the holes are packed at the maximum density that avoids spectral aliasing.
  • the hole size is the optimal trade-off between luminous flux and spectral accuracy; in particular, it must be guaranteed that light passing through the mask holes, once dispersed, does not alias on the grayscale imaging device 6.
  • the mask holes are as shown in Fig. 2, wherein each of the mask holes has a width of 0.025 mm, a height of 0.15 mm, a mask hole lateral pitch of 0.9 mm, and a longitudinal pitch of 0.20 mm.
  • the second lens 4 is selected according to the size of the whole scene: its aperture must cover the scene range.
  • the scene range depends mainly on the distance between the second lens 4 and the grayscale imaging device 6. To guarantee the imaging quality for parallel incident light, the light leaving the second lens 4 must still be parallel, and the beam-splitting device 5 must fit between the imaging device 6 and the second lens 4.
  • the second lens has a focal length of 100 mm and an aperture of 50 mm.
  • the beam-splitting device 5, located behind the sparse sampling imaging array 1, is an Amici prism that disperses the light transmitted through the imaging-array device into spectra at multiple wavelengths.
  • an Amici prism assembled from the u14050 prism set is used, consisting of two crown-glass prisms and one flint-glass prism; at the same wavelength, the flint glass has the higher refractive index. Because an Amici prism emits parallel incident light still almost parallel, imaging distortion is effectively reduced.
  • the parameters of the Amici prism used in the examples are shown in Table 1.
  • Table 1 gives the shapes and refractive indices of the prism set used in the Amici prism.
  • the gray scale imaging device 6 is for obtaining a spectral image after the splitting.
  • the example of the invention photographs objects whose light is nearly parallel at incidence.
  • the light leaving the second lens 4 therefore also enters the grayscale imaging device 6 in parallel; the grayscale imaging device 6 is a CCD grayscale camera with a 50 mm focal length.
  • the image size is 2016*2016 pixels, and the pixel size is 3.1 μm.
  • the pixel size and the focal length determine the spectral resolution: the smaller the pixel and the longer the focal length, the higher the spectral resolution.
  • the first lens 2 and the second lens 4 are spherical-aberration-corrected lens groups. Because of inherent limitations of lens manufacturing, imaging at the edges exhibits radial and tangential distortion. To guarantee image quality, the grayscale imaging device 6 images only the central area of the two lenses, and its light throughput must be reduced, which increases the depth of field and reduces imaging distortion.
  • the sparse sampling imaging array 1, the spectroscopic device 5, the grayscale imaging device 6, and the spectral optical path collecting device 7 are located on the same optical axis to form a spectral acquisition optical path, ensuring that the optical axis passes through the optical center of the device.
  • the color imaging device 8 and the color light path collecting device 9 constitute a color imaging optical path on the same optical axis, and ensure that the optical axis passes through the optical center of the above device.
  • the optical axis where the spectral acquisition optical path is located is parallel to the optical axis of the color imaging optical path, and the two optical axes are on the same horizontal plane. The distance between the two optical axes determines the reconstruction accuracy of the depth information in the all-optical information.
  • the distance between the two optical axes is 100 mm.
  • the color imaging device 8 in the example of the present invention is a 50 mm focal length CCD color camera.
  • the resolution is 2016*2016, and the parallel arrangement of the optical axes ensures that the images obtained by the two optical paths have parallax, and it is convenient to obtain the relative posture of the grayscale camera and the color camera.
  • the spectral light path collecting means 7 and the color light path collecting means 9 simultaneously transmit the two-way image to the information joint processing means 10, and reconstruct the all-optical information including the depth and the spectrum based on the sparse spectral information and the color information and the postures of the two cameras.
  • the acquisition process of the all-optical information acquisition system of this embodiment includes the following steps: the scene light is converged and transmitted through the sparse sampling imaging array 1 to generate transmitted light, the scene being spatially sampled by the sampling mask 3; the transmitted light is dispersed and imaged by the grayscale imaging device 6 to form the spectral acquisition optical path, while the color imaging device 8 provides the color imaging optical path.
  • the optical paths of the spectral light path collecting device 7 and the color imaging device 8 are parallel, and the obtained two images respectively contain sparsely sampled spectral information and high resolution color information, and the obtained two images contain disparity information;
  • the joint processing device 10 processes the information of the spectral acquisition optical path and the color imaging optical path, and obtains the all-optical information of the scene based on the sparse spectral information and the color information and the parallax information caused by the postures of the two cameras.
  • bilateral filtering is used to reconstruct the spectrum: after the grayscale and color images are registered, the pixels at the sparse sampling points have both known spectra and known RGB values, while all other pixels have only RGB values. To give every pixel a spectrum, the similarity, in color space and in position, between each sparse sampling point and the surrounding color pixels is used to propagate the spectra, reconstructing a high-accuracy spectral image.
  • a disparity algorithm is used to reconstruct the depth information: because the two cameras are at different positions, the two images exhibit disparity. Using the disparity between the sparse sampling points and the corresponding points of the RGB image, stereo matching yields the depth at those points, and a propagation algorithm reconstructs the depth of the remaining pixels, thereby obtaining the plenoptic information of the scene.
  • the two-way imaging device needs to ensure synchronous shooting when shooting the same scene.
  • hard synchronization, i.e. an external trigger signal, is employed.
  • soft synchronization can be used when the accuracy requirement is modest or the scene is static.
  • after the two optical paths are registered, the invention can jointly compute the plenoptic information from the information obtained along both paths. The registration between the two imaging devices is critical: the spectral path images a sparsely sampled spectral image while the color path captures a high-resolution color image, so traditional camera registration methods cannot meet the requirements.
  • the invention adopts the screen mapping method for registration.
  • the screen-mapping method displays a distinctive pattern on a screen; in one example of the present invention this is a crosshair whose position on the screen can be adjusted.
  • the screen is kept stationary, and the crosshair is moved by its controls until the horizontal line coincides with the upper edge of the image obtained by the color imaging device 8 and the vertical line coincides with the left edge of the image; the scan-line equations are computed and the coordinates of the crosshair intersection are calculated.
  • in the same way, the scan lines are moved to the lower and right edges of the image, the equations are computed again, and the intersection coordinates are calculated.
  • from the coordinates so obtained and the pixel width and height of the image, the correspondence between the color-image coordinates and the screen coordinates is established; the horizontal and vertical lines are then moved so that their intersection approaches the spectral band of a slit near the upper-left of the image.
  • for the spectral acquisition optical path, the crosshair position is adjusted until that spectral band in the image acquired by the grayscale imaging device 6 is bright, and the position of the mask hole is recorded; the brightest spectral band can then be located by a computer scan.
  • the intersection coordinates of the crosshair are computed, fixing the correspondence between that mask hole and the screen coordinates; repeatedly adjusting the crosshair position determines the mapping between every mask hole and the screen coordinates. Combining this with the previously obtained mapping between the screen and the color imaging device establishes the mapping between the two optical paths, whose accuracy reaches the sub-pixel level in the example of the present invention.
  • the spectral light path collecting device 7 and the color light path collecting device 9 can use an embedded development board for data storage.
  • the GPU on the development board can be used to accelerate image processing, and the data are transmitted over a wireless link to the information joint-processing device 10.
  • suitable combinational logic circuits can also be used to perform the system's computations.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules, if implemented in the form of software functional modules and sold or used as stand-alone products, may also be stored in a computer readable storage medium.
  • Figure 3 shows an embodiment of an all-optical information acquisition system showing the relative position between system components.
  • the elements in the present invention are not limited to the elements in the embodiments, and elements having similar or substituted functions can also be applied in the present invention, and the mask can be simulated using a programmable SLM.
  • FIG. 4 is a schematic diagram of all-optical information acquisition in one embodiment.
  • Two channels of data are acquired simultaneously by the two optical path collecting devices, the spectral optical path obtains data of the sparse sampling spectrum, and the color camera optical path obtains a color image.
  • the position on the color image corresponding to each mask hole is determined, and the two images are registered.
  • the spectra of the pixels at which no spectral data were acquired are obtained by the spectral propagation algorithm.
  • the depth data are obtained jointly from the intrinsic and extrinsic parameters of the cameras.
  • the all-optical information of the scene can be reconstructed by combining the depth data and the spectral data.
  • the flowchart describes a method of data acquisition and reconstruction, but the result is not limited to being obtained in this fixed manner.
  • the above embodiment may be implemented by a program to instruct related hardware, and the program may be stored in a computer readable storage medium.
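The acquisition-and-reconstruction flow summarized in the bullets above can be sketched end to end. The following toy run is illustrative only: the 4x4 scene, two mask holes, three spectral bands, identity registration, fixed 2 px disparity, and nearest-anchor spectral fill (standing in for the bilateral propagation) are all assumptions, not figures from the patent.

```python
import numpy as np

def toy_plenoptic_pipeline():
    """Toy end-to-end run of the two-path flow: register, propagate
    spectra, convert disparity to depth, and return both maps."""
    # Sparse spectral measurements behind two mask holes (3 bands each).
    holes = {(0, 0): np.array([0.9, 0.1, 0.0]),
             (3, 3): np.array([0.0, 0.2, 0.8])}
    color = np.zeros((4, 4, 3))        # stand-in high-resolution RGB image
    # Step 1: registration maps each hole onto the color image (identity here).
    anchors = {px: s for px, s in holes.items()}
    # Step 2: propagate spectra to every pixel; nearest anchor stands in
    # for the bilateral-filter propagation.
    coords = np.array(list(anchors))
    specs = np.stack(list(anchors.values()))
    rr, cc = np.indices(color.shape[:2])
    d2 = (coords[:, 0, None, None] - rr) ** 2 + (coords[:, 1, None, None] - cc) ** 2
    spectra = specs[d2.argmin(axis=0)]
    # Step 3: depth from disparity at the holes (Z = f * B / d with a toy
    # 2 px disparity), then constant propagation to the other pixels.
    f_px, baseline_m = 100.0, 0.1
    depth = np.full((4, 4), f_px * baseline_m / 2.0)
    return spectra, depth
```

Each step mirrors one bullet: registration, spectral propagation, then joint depth recovery from the camera geometry.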

Landscapes

  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

An all-optical information acquisition system, comprising: a sparse sampling imaging array (1), a beam-splitting device (5), a grayscale imaging device (6), and a spectral optical-path acquisition device (7), which acquire a sparsely sampled spectral image; and a color imaging device (8) and a color optical-path acquisition device (9), which acquire a high-resolution color image. The optical axes of the spectral optical-path acquisition device (7) and the color imaging device (8) are parallel, so the two images obtained contain, respectively, sparsely sampled spectral information and high-resolution color information, together with disparity information. Finally, an information joint-processing device (10) processes the two data streams and, from the sparse spectral information, the color information, and the disparity caused by the differing poses of the two cameras, reconstructs plenoptic information including spectrum and depth. Joint acquisition of plenoptic information is thereby realized; by using higher-precision imaging devices, higher spectral resolution and more accurate depth information can be obtained.

Description

All-optical information acquisition system
Technical field
The present invention relates to the field of computational photography, and in particular to a system for all-optical (plenoptic) information acquisition.
Background art
Computational photography is an emerging discipline at the deep intersection of computer vision, digital signal processing, and computer graphics. It aims to combine computation, digital sensors, optical systems, intelligent illumination, and related techniques to improve traditional cameras at the level of the imaging mechanism, and to couple hardware design organically with software computing power, thereby breaking through the limitations of classical imaging models and digital cameras, enhancing or extending the data-acquisition capability of traditional digital cameras, and capturing real-world scene information comprehensively. Research on the plenoptic information of a scene is of great significance for 3D reconstruction, digital entertainment, security reconnaissance, and other fields.
Traditional digital photography samples a two-dimensional projection subspace of the high-dimensional scene signal (commonly represented by the seven-dimensional plenoptic function), projecting high-dimensional scene information onto the camera's two-dimensional sampling subspace for acquisition. The seven-dimensional plenoptic function describes the basic elements with which the human eye views light in the objective world: the viewpoint position (x, y, z), the ray direction, and the wavelength (λ), all at any time (t), seven variables in total. Starting from the plenoptic function, it is clear that traditional digital photography loses and couples information in the other dimensions of the plenoptic function, including loss of angular information, loss of scene depth information, loss of multi-spectral information, and integral coupling of scene information over the exposure time.
A popular research direction in computational photography is the extension of traditional imaging technology in the spectral domain, namely hyperspectral imaging. In terms of the principles of multi-spectral and visual technology, the human eye contains three different types of cone cells that sense signals in different bands of the spectrum, so that light in a real scene is perceived in the form of the three colors red, green, and blue. Correspondingly, the camera in the traditional sense also starts from this cognitive principle of the human eye: it captures the red, green, and blue three-channel information of the scene through the different color integration curves of its charge-coupled device. In reality, however, representing high-dimensional information such as angle, scene depth, and spectrum with only the three RGB channels loses a great deal of detail, and this rich detail, including depth and hyperspectral information, often reveals many characteristics of objects and of the light in a scene, enabling substantial progress in many areas of computer vision. Current cameras are either single-function multi-spectral cameras or single-function depth cameras; a camera device that can jointly acquire scene depth information, multi-spectral information, and angular information has not yet appeared, and such a plenoptic acquisition system would greatly advance research on scene information reconstruction in computational photography.
Depending on technical requirements and acquisition conditions, most existing acquisition systems are single-function. The well-known spectrum analyzers, scanning spectral imagers, and snapshot imaging spectrometers all compensate spectral resolution by sacrificing spatial or temporal resolution in order to acquire multi-spectral information, and they acquire only spectral information while losing depth information. Conversely, the various depth-capture devices based on binocular stereo vision cannot satisfy the need for spectral acquisition. In July 2014, a high-resolution spectral-video acquisition research system was proposed (publication number CN102735338A); while sacrificing spatial resolution to gain additional spectral resolution, it uses a dual-path acquisition technique to capture the scene along two paths and reconstructs high-spatiotemporal-resolution hyperspectral video from the resulting multi-path data, realizing a hyperspectral acquisition technique. However, that system still lacks the extremely important depth information of the scene. As depth information has become ever more important for recognition in recent years, obtaining an improved plenoptic-information device is a very important line of research with broad applications.
Summary of the invention
In view of the above defects in the prior art, the object of the present invention is to provide an all-optical information acquisition system capable of capturing high spectral resolution and depth information simultaneously.
To achieve the above object, the present invention adopts the following technical solution:
An all-optical information acquisition system comprises a sparse sampling imaging array, a beam-splitting device, a grayscale imaging device, a spectral optical-path acquisition device, a color imaging device, a color optical-path acquisition device, and an information joint-processing device. The optical centers of the sparse sampling imaging array, the beam-splitting device, the grayscale imaging device, and the spectral optical-path acquisition device lie on one optical axis, while the optical centers of the color imaging device and the color optical-path acquisition device lie on another, parallel optical axis. The sparse sampling imaging array converges and transmits the scene light to generate transmitted light and spatially samples the scene; the transmitted light is dispersed by the beam-splitting device into spectra at multiple wavelengths, imaged by the grayscale imaging device, and finally captured by the spectral optical-path acquisition device and transmitted to the information joint-processing device. The color imaging device and the color optical-path acquisition device obtain a color image and transmit it to the information joint-processing device, which processes the two image streams together with the relative positions of the grayscale and color imaging devices, reconstructing the spectrum with bilateral filtering and the depth information with a disparity algorithm, thereby obtaining the plenoptic information of the scene.
The sparse sampling imaging array comprises a first lens, a mask, and a second lens arranged in sequence, where the first and second lenses are single lenses or lens groups and the mask is the sparse sampling device. The aperture size of the mask is chosen so that light passing through the mask holes does not alias on the grayscale imaging device after passing through the beam-splitting device. The aperture of the second lens can be adjusted to cover the entire captured scene.
Further, the first and second lenses use spherical-aberration-corrected lens groups.
Preferably, the distance between the two parallel optical axes is 5-10 cm.
The spectral optical-path acquisition device and the color optical-path acquisition device acquire information synchronously.
The beam-splitting device is an Amici prism.
Further, the grayscale imaging device and the color imaging device must first be registered using a screen-mapping method before capturing images. Specifically: a distinctive pattern is displayed on a screen, which is kept stationary; the color imaging device photographs the pattern on the screen, yielding the correspondence between the color-image coordinates and the screen coordinates; the grayscale imaging device then photographs the pattern, and by matching feature points in the grayscale image against feature points of the displayed pattern, the correspondence between the grayscale-image coordinates and the screen coordinates is determined. From the two sets of correspondences, the mapping between the grayscale imaging device and the color imaging device is established.
Further, both the spectral optical-path acquisition device and the color optical-path acquisition device use an embedded development board for data storage.
The all-optical information acquisition system of the present invention can realize joint acquisition of plenoptic information including spectrum and depth, remedying the lack of high-resolution spectral and depth information in conventional imaging equipment. By adjusting the parameters of the system's devices, a larger depth of field and greater light throughput can be obtained; by using higher-precision imaging devices, the high spectral resolution and more accurate depth information required for meaningful scene reconstruction can be achieved.
Brief description of the drawings
FIG. 1 is a structural block diagram of the all-optical information acquisition system of the present invention;
FIG. 2 is the mask design in an embodiment of the present invention;
FIG. 3 is a schematic diagram of the all-optical information acquisition system in an embodiment of the present invention, in which: 11-telescope, 12-mask plate, 13-converging eyepiece, 14-beam-splitting prism, 15-grayscale camera, 16-color camera, 17-computer;
FIG. 4 is a flowchart of the operation of the all-optical information acquisition system of the present invention.
Detailed description of the embodiments
The embodiments described below with reference to the accompanying drawings are exemplary; they are intended to explain the present invention and should not be construed as limiting it.
As shown in FIG. 1, in the all-optical information acquisition system of the present invention, a first lens 2, a mask 3, and a second lens 4 constitute a sparse sampling imaging array 1. A low-resolution spectral image is acquired by the sparse sampling imaging array 1, the beam-splitting device 5, the grayscale imaging device 6, and the spectral optical-path acquisition device 7. A color imaging device 8 and a color optical-path acquisition device 9 are used to acquire a high-resolution color image. The information joint-processing device 10 processes the two data streams and reconstructs plenoptic information including a high-resolution spectrum and accurate depth.
In the sparse sampling imaging array 1, the mask 3 is located at the imaging plane between the first lens 2 and the second lens 4 and serves to converge and transmit the scene light into transmitted light while spatially sampling the scene. When selecting the first lens 2, the lens aperture determines the luminous flux and the focal length determines the depth of field; the focal length is varied to image different depth ranges of the scene. In the embodiment of the present invention, the selected lens has a focal length of 150 mm and an aperture of 50 mm.
The mask 3 is chosen so that the holes are packed at the maximum density that avoids spectral aliasing. The hole size is the optimal trade-off between luminous flux and spectral accuracy; in particular, it must be guaranteed that light passing through the mask holes does not alias on the grayscale imaging device 6 after passing through the beam-splitting device 5. In the example of the present invention the mask holes are as shown in FIG. 2: each hole is 0.025 mm wide and 0.15 mm high, with a lateral pitch of 0.9 mm and a longitudinal pitch of 0.20 mm.
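The no-aliasing condition on the mask geometry can be written as a one-line check: the band dispersed behind one hole (hole width plus the sensor-plane dispersion span) must stay short of the lateral pitch. The dispersion-span figures below are assumptions for illustration; the patent does not state that value numerically.

```python
def spectra_overlap(hole_width_mm, lateral_pitch_mm, dispersion_span_mm):
    """True if the dispersed band behind one mask hole would reach the
    band of the laterally adjacent hole, i.e. spectral aliasing occurs."""
    return hole_width_mm + dispersion_span_mm > lateral_pitch_mm
```

With the example mask (0.025 mm holes on a 0.9 mm lateral pitch), any dispersion span under 0.875 mm keeps neighbouring spectra separated.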
The second lens 4 is selected according to the size of the whole scene, ensuring that its aperture covers the scene range. The scene range depends mainly on the distance between the second lens 4 and the grayscale imaging device 6. To guarantee the imaging quality in the case of parallel incident light, the light leaving the second lens 4 must still be parallel, and the beam-splitting device 5 must fit between the imaging device 6 and the second lens 4. In the example of the present invention the second lens has a focal length of 100 mm and an aperture of 50 mm.
The beam-splitting device 5, located behind the sparse sampling imaging array 1, is an Amici prism that disperses the light transmitted through the imaging array into spectra at multiple wavelengths. The example of the present invention uses an Amici prism assembled from the u14050 prism set, consisting of two crown-glass prisms and one flint-glass prism; at the same wavelength, the flint glass has the higher refractive index. Because an Amici prism emits parallel incident light still almost parallel, imaging distortion is effectively reduced. The parameters of the Amici prism used in the example are listed in Table 1.
Table 1. Shapes and refractive indices of the prism set used in the Amici prism
(The contents of Table 1 appear as images PCTCN2016083238-appb-000001 and PCTCN2016083238-appb-000002 in the original publication.)
The grayscale imaging device 6 obtains the spectral image after dispersion. The example of the present invention photographs objects whose light is nearly parallel at incidence, so the light leaving the second lens 4 also enters the grayscale imaging device 6 in parallel. The grayscale imaging device 6 is a CCD grayscale camera with a 50 mm focal length, an image size of 2016*2016 pixels, and a pixel size of 3.1 μm; the pixel size and the focal length determine the spectral resolution: the smaller the pixel and the longer the focal length, the higher the spectral resolution. In this embodiment, the first lens 2 and the second lens 4 are spherical-aberration-corrected lens groups. Because of inherent limitations of lens manufacturing, imaging at the edges exhibits radial and tangential distortion; to guarantee image quality, the grayscale imaging device 6 images only the central area of the two lenses, and its light throughput must be reduced, which increases the depth of field and reduces imaging distortion.
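The stated dependence (smaller pixels and a longer focal length give higher spectral resolution) amounts to counting pixels across the dispersed band. A rough sketch, where the dispersion span is an assumed value rather than a figure from the patent:

```python
def resolvable_bands(dispersion_span_mm, pixel_um):
    """Rough number of spectral samples across one dispersed band: the
    sensor-plane span of the spectrum divided by the pixel size. A longer
    focal length enlarges the span; smaller pixels refine the sampling."""
    return int(dispersion_span_mm * 1000.0 / pixel_um)
```

For example, an assumed 0.8 mm span on the 3.1 μm pixels of this embodiment would give about 258 spectral samples.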
The sparse sampling imaging array 1, beam-splitting device 5, grayscale imaging device 6, and spectral optical-path acquisition device 7 lie on the same optical axis and constitute the spectral acquisition optical path; the optical axis must pass through the optical centers of these devices. The color imaging device 8 and the color optical-path acquisition device 9 lie on another common optical axis and constitute the color imaging optical path, with that axis likewise passing through the optical centers of the devices. The axis of the spectral acquisition path is kept parallel to that of the color imaging path, with the two axes in the same horizontal plane. The distance between the two axes determines the reconstruction accuracy of the depth information within the plenoptic information: if the axes are too close, the disparity is too small, which increases the spectral information capacity of the plenoptic data but lowers the reconstruction accuracy, while too large a distance shrinks the overlap between the two images and reduces the imaging range of the reconstructed plenoptic information. In the example of the present invention the distance between the two axes is 100 mm.
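The baseline trade-off described above follows from the pinhole stereo relation Z = f*B/d. A small sketch; the focal length in pixels (50 mm / 3.1 μm, about 16129 px) is an illustrative conversion, not a figure stated in the patent:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo relation Z = f * B / d used in the trade-off above."""
    return f_px * baseline_m / disparity_px

def depth_quantization(f_px, baseline_m, depth_m):
    """Depth error caused by a one-pixel disparity error at a given depth.
    A wider baseline produces more disparity for the same depth, so the
    one-pixel error maps to a smaller depth change (better accuracy)."""
    d = f_px * baseline_m / depth_m          # disparity at this depth
    return depth_m - depth_from_disparity(f_px, baseline_m, d + 1.0)
```

Halving the 100 mm baseline roughly doubles the depth error at a given depth, matching the accuracy argument above; the counterpart is the smaller image overlap that the text also notes.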
In the example of the present invention, the color imaging device 8 is a CCD color camera with a 50 mm focal length and a resolution of 2016*2016. Placing the optical axes in parallel guarantees that the images obtained along the two paths exhibit disparity, and makes it convenient to obtain the relative pose of the grayscale camera and the color camera. The spectral optical-path acquisition device 7 and the color optical-path acquisition device 9 transmit the two images simultaneously to the information joint-processing device 10, which reconstructs plenoptic information including depth and spectrum from the sparse spectral information, the color information, and the poses of the two cameras.
The acquisition process of the all-optical information acquisition system of this embodiment comprises the following steps. The scene light is converged and transmitted once through the sparse sampling imaging array 1 to generate transmitted light, the scene being spatially sampled by the sampling mask 3; the transmitted light is incident on the prism beam-splitting device 5, which disperses it into spectra at multiple wavelengths; these spectra are imaged by the grayscale imaging device 6 located behind the prism to form the spectral acquisition optical path, while the color imaging device 8 provides the color imaging optical path. The optical axes of the spectral optical-path acquisition device 7 and the color imaging device 8 are parallel, so the two resulting images contain, respectively, sparsely sampled spectral information and high-resolution color information, together with disparity information. Finally, the information joint-processing device 10 processes the information of the spectral acquisition path and the color imaging path and, from the sparse spectral information, the color information, and the disparity caused by the differing poses of the two cameras, obtains the plenoptic information of the scene.
Bilateral filtering is used to reconstruct the spectrum: after the grayscale and color images are registered, the pixels at the sparse sampling points have both known spectra and known RGB values, while all other pixels have only RGB values. To give every pixel a spectrum, the similarity, in color space and in position, between each sparse sampling point and the surrounding color pixels is used to propagate the spectra, reconstructing a high-accuracy spectral image. For the depth information, a disparity algorithm is used: because the two cameras are at different positions, the two images exhibit disparity; using the disparity between the sparse sampling points and the corresponding points of the RGB image, stereo matching yields the depth at those points, and a propagation algorithm reconstructs the depth of the remaining pixels. The plenoptic information of the scene is thus obtained.
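The bilateral propagation step can be sketched directly from this description: every pixel receives a weighted mixture of the sparse anchor spectra, with weights combining spatial proximity and RGB similarity, the two similarities named above. The sigma values and array layout are illustrative assumptions, not the patent's parameters.

```python
import numpy as np

def bilateral_spectral_fill(rgb, anchors, sigma_s=2.0, sigma_c=0.1):
    """Propagate sparse spectra to all pixels with joint bilateral weights.

    rgb     : (H, W, 3) color image
    anchors : {(row, col): spectrum ndarray} at the sparse sampling points
    """
    pts = np.array(list(anchors))                 # (N, 2) anchor positions
    specs = np.stack(list(anchors.values()))      # (N, B) anchor spectra
    h, w, _ = rgb.shape
    out = np.empty((h, w, specs.shape[1]))
    for r in range(h):
        for c in range(w):
            d2 = ((pts - (r, c)) ** 2).sum(axis=1)                 # position term
            dc = ((rgb[pts[:, 0], pts[:, 1]] - rgb[r, c]) ** 2).sum(axis=1)  # color term
            wgt = np.exp(-d2 / (2 * sigma_s ** 2) - dc / (2 * sigma_c ** 2))
            out[r, c] = wgt @ specs / wgt.sum()
    return out
```

Pixels close to an anchor in both position and color inherit that anchor's spectrum almost unchanged, which is exactly the propagation behaviour the paragraph describes.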
When the two imaging paths photograph the same scene, synchronized capture must be guaranteed. The example of the present invention uses hard synchronization, i.e. an external trigger signal. Soft synchronization may be used when the accuracy requirement is modest or the scene is static.
After the two optical paths are registered, the present invention can compute the plenoptic information jointly from the information obtained along both paths. The registration between the two imaging devices is critical: the spectral path images a sparsely sampled spectral image while the color path captures a high-resolution color image, so traditional camera registration methods cannot meet the requirement. The present invention therefore adopts the screen-mapping method for registration.
In the screen-mapping method, a distinctive pattern is displayed on a screen; in one example of the present invention this is a crosshair whose position on the screen can be adjusted. With the screen held stationary, the crosshair is moved by its controls until the horizontal line coincides with the upper edge of the image obtained by the color imaging device 8 and the vertical line coincides with the left edge of the image; the scan-line equations are computed and the coordinates of the crosshair intersection are calculated. In the same way, the scan lines on the screen are moved until the horizontal line coincides with the lower edge of that image and the vertical line with its right edge, and the equations and intersection coordinates are computed again. From the coordinate positions so obtained and the pixel width and height of the image, the correspondence between the color-image coordinates and the screen coordinates is established. The horizontal and vertical lines are then moved so that their intersection approaches the spectral band of a slit near the upper-left of the image. For the spectral acquisition path, the crosshair position is adjusted until that spectral band in the image captured by the grayscale imaging device 6 is bright, and the position of the mask hole is recorded; the position of the brightest spectral band is then determined by a computer scan, the intersection coordinates of the crosshair are computed, and the correspondence between the mask hole and the screen coordinates is thereby fixed. Repeatedly adjusting the crosshair position determines the mapping between every mask hole and the screen coordinates. Combining this with the previously obtained mapping between the screen and the color imaging device establishes the mapping between the two optical paths; in the example of the present invention the mapping accuracy reaches the sub-pixel level.
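Per axis, the crosshair procedure above yields two screen-to-image correspondences, enough to fit a scale and an offset. A minimal sketch under stated assumptions (the numbers are invented, a distorted lens would need more than this linear fit, and the real calibration is repeated per mask hole):

```python
def axis_map(screen_a, img_a, screen_b, img_b):
    """1-D linear map from a screen coordinate to an image coordinate,
    fitted from two crosshair correspondences (scale plus offset)."""
    scale = (img_b - img_a) / (screen_b - screen_a)
    return lambda s: img_a + scale * (s - screen_a)

# Chain a mask hole through the screen to the color sensor: the hole's
# screen coordinate (found via the bright-band scan) goes through the
# screen-to-color map fitted from the edge alignments.
to_color_x = axis_map(0.0, 0.0, 1000.0, 2016.0)   # invented correspondences
hole_color_x = to_color_x(250.0)                  # hole at screen x = 250
```

Chaining hole-to-screen and screen-to-color maps in this way is what links the two optical paths, since the gray and color cameras never need to see a common calibration target directly.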
The spectral optical-path acquisition device 7 and the color optical-path acquisition device 9 may use an embedded development board for data storage. In the example of the present invention, the GPU on the development board can be used to accelerate image processing, and the data are transmitted over a wireless link to the information joint-processing device 10; suitable combinational logic circuits may also be used to perform the system's computations.
The system of the present invention can be powered independently, freeing it from site constraints and enabling data capture outdoors. In addition, the functional units in the various embodiments of the present invention may be integrated into one processing module, may each exist physically separately, or two or more units may be integrated into one module. The integrated module may be implemented in hardware or as a software functional module; if implemented as a software functional module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium.
FIG. 3 shows one embodiment of the all-optical information acquisition system, indicating the relative positions of the system components. The elements of the present invention are not limited to those of the embodiments; elements with similar or substitute functions may equally be used in the present invention, and the mask may be emulated with a programmable SLM.
FIG. 4 is a schematic of plenoptic information acquisition in one embodiment. The acquisition devices of the two optical paths capture two data streams simultaneously: the spectral path obtains sparsely sampled spectral data, and the color-camera path obtains a color image. Calibration determines the position on the color image corresponding to each mask hole, and the two images are registered. From the aligned data, the spectral propagation algorithm obtains the spectra of the pixels at which no spectral data were acquired. Using the positional relationship between the sparsely sampled image and the color image, together with the intrinsic and extrinsic parameters of the cameras, the depth data are obtained jointly. Finally, combining the depth data with the spectral data reconstructs the plenoptic information of the scene.
The flowchart describes a method of data acquisition and reconstruction, but the result is not limited to being obtained in this fixed manner; the above embodiments may be implemented by a program instructing the relevant hardware, the program being stored in a computer-readable storage medium.

Claims (10)

  1. An all-optical information acquisition system, characterized in that it comprises a sparse sampling imaging array, a beam-splitting device, a grayscale imaging device, a spectral optical-path acquisition device, a color imaging device, a color optical-path acquisition device, and an information joint-processing device; wherein the optical centers of the sparse sampling imaging array, the beam-splitting device, the grayscale imaging device, and the spectral optical-path acquisition device lie on one optical axis, and the optical centers of the color imaging device and the color optical-path acquisition device lie on another, parallel optical axis; the sparse sampling imaging array converges and transmits the scene light to generate transmitted light and spatially samples the scene; the transmitted light is dispersed by the beam-splitting device into spectra at multiple wavelengths, then imaged by the grayscale imaging device, and finally captured by the spectral optical-path acquisition device and transmitted to the information joint-processing device; the color imaging device and the color optical-path acquisition device obtain a color image and transmit it to the information joint-processing device, which processes the two image streams together with the relative positions of the grayscale imaging device and the color imaging device, reconstructing the spectrum with bilateral filtering and the depth information with a disparity algorithm, thereby obtaining the plenoptic information of the scene.
  2. The all-optical information acquisition system according to claim 1, characterized in that the sparse sampling imaging array comprises a first lens, a mask, and a second lens arranged in sequence, wherein the first lens and the second lens are single lenses or lens groups and the mask is the sparse sampling device.
  3. The all-optical information acquisition system according to claim 2, characterized in that the aperture size of the mask is such that light passing through the mask holes does not alias on the grayscale imaging device after passing through the beam-splitting device.
  4. The all-optical information acquisition system according to claim 2, characterized in that the aperture of the second lens can be adjusted to cover the entire captured scene.
  5. The all-optical information acquisition system according to claim 2, characterized in that the first lens and the second lens use spherical-aberration-corrected lens groups.
  6. The all-optical information acquisition system according to claim 1, characterized in that the distance between the two parallel optical axes is 5-10 cm.
  7. The all-optical information acquisition system according to claim 1, characterized in that the spectral optical-path acquisition device and the color optical-path acquisition device acquire information synchronously.
  8. The all-optical information acquisition system according to claim 1, characterized in that the beam-splitting device is an Amici prism.
  9. The all-optical information acquisition system according to any one of claims 1 to 8, characterized in that the grayscale imaging device and the color imaging device must first be registered with a screen-mapping method before capturing images, specifically: a distinctive pattern is displayed on a screen, which is kept stationary; the color imaging device photographs the pattern on the screen, yielding the correspondence between the color-image coordinates and the screen coordinates; the grayscale imaging device then photographs the pattern, and matching the feature points of the image captured by the grayscale imaging device against the feature points of the pattern displayed on the screen determines the correspondence between the grayscale-image coordinates and the screen coordinates; from the two sets of correspondences, the mapping between the grayscale imaging device and the color imaging device is established.
  10. The all-optical information acquisition system according to any one of claims 1 to 8, characterized in that both the spectral optical-path acquisition device and the color optical-path acquisition device use an embedded development board for data storage.
PCT/CN2016/083238 2016-01-13 2016-05-25 All-optical information acquisition system WO2017121058A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610025605.5A CN105651384B (zh) 2016-01-13 2016-01-13 All-optical information acquisition system
CN201610025605.5 2016-01-13

Publications (1)

Publication Number Publication Date
WO2017121058A1 true WO2017121058A1 (zh) 2017-07-20

Family

ID=56487566

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/083238 WO2017121058A1 (zh) 2016-01-13 2016-05-25 All-optical information acquisition system

Country Status (2)

Country Link
CN (1) CN105651384B (zh)
WO (1) WO2017121058A1 (zh)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110880162A * 2019-11-22 2020-03-13 中国科学技术大学 Snapshot spectral-depth joint imaging method and system based on deep learning
CN113425259A * 2021-06-24 2021-09-24 中国科学院上海微系统与信息技术研究所 High-spatial-resolution multi-spectral tongue-image acquisition system
WO2022042084A1 * 2020-08-31 2022-03-03 清华大学深圳国际研究生院 Device and method for rapid acquisition of high-resolution spectral images
CN114659635A * 2022-05-18 2022-06-24 天津师范大学 Spectral-depth imaging device and method based on an image-plane-segmented light field
CN115082533A * 2022-06-28 2022-09-20 北京航空航天大学 Self-supervised registration method for near-space remote-sensing images
CN115082533B * 2022-06-28 2024-05-28 北京航空航天大学 Self-supervised registration method for near-space remote-sensing images

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106657970A (zh) * 2016-10-25 2017-05-10 Le Holdings (Beijing) Co., Ltd. Depth map imaging device
CN106896069B (zh) * 2017-04-06 2019-05-10 Wuhan University Spectral reconstruction method based on a single RGB image from a color digital camera
CN107169921B (zh) * 2017-04-26 2020-04-28 State Grid Shanghai Electric Power Company Dual-spectrum image registration system and method
CN110462679B (zh) * 2017-05-19 2022-12-09 ShanghaiTech University Fast multispectral light-field imaging method and system
CN107655571B (zh) * 2017-09-19 2019-11-15 Nanjing University Dispersion-blur-based spectral imaging system and spectral reconstruction method therefor
CN108254072A (zh) * 2017-12-29 2018-07-06 Hangzhou Dajiangdong Institute of Space Information Technology, Shanghai Institute of Technical Physics, Chinese Academy of Sciences Novel hyperspectral video imager
CN112229827B (zh) * 2020-09-07 2022-02-08 Nanjing University Real-time multispectral tomographic imaging method and device
CN113029335B (zh) * 2021-02-05 2023-10-20 North University of China System and method for identifying and extracting sparse spatial-frequency light rays in flame environments
CN113049103B (zh) * 2021-03-12 2022-06-07 Xidian University Spectral video acquisition method based on a DMD variable coding template
CN113687369A (zh) * 2021-07-14 2021-11-23 Nanjing University System and method for synchronous acquisition of spectral and depth information
CN113701880B (zh) * 2021-07-16 2022-12-09 Nanjing University High-light-throughput spectrally coded imaging system and method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2000013423A1 (en) * 1998-08-28 2000-03-09 Sarnoff Corporation Method and apparatus for synthesizing high-resolution imagery using one high-resolution camera and a lower resolution camera
CN101996396A (zh) * 2010-09-16 2011-03-30 Hunan University Satellite remote sensing image fusion method based on compressed sensing theory
CN102609930A (zh) * 2012-01-20 2012-07-25 Institute of Automation, Chinese Academy of Sciences Image fusion method based on multi-directional gradient fields
CN102999892A (zh) * 2012-12-03 2013-03-27 Donghua University Intelligent fusion method for depth images and RGB images based on region masks
CN103208102A (zh) * 2013-03-29 2013-07-17 Shanghai Jiao Tong University Remote sensing image fusion method based on sparse representation
CN103609102A (zh) * 2011-06-15 2014-02-26 Microsoft Corporation High-resolution multispectral image capture
CN104851113A (zh) * 2015-04-17 2015-08-19 Huazhong Agricultural University Automatic urban vegetation extraction method from multi-resolution remote sensing images
CN104867124A (zh) * 2015-06-02 2015-08-26 Xidian University Multispectral and panchromatic image fusion method based on dual sparse non-negative matrix factorization
CN105157833A (zh) * 2015-04-27 2015-12-16 Fenghua Kechuang Technology Service Co., Ltd. Multispectral imaging data processing system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2709827B1 (fr) * 1993-09-07 1995-10-06 Thomson Csf Optical imaging device enabling spectral analysis of a scene
CN102735338B (zh) * 2012-06-20 2014-07-16 Tsinghua University High-resolution multispectral acquisition system based on a mask and dual Amici prisms
CN104316179B (zh) * 2014-08-27 2016-06-01 Beijing Institute of Space Mechanics & Electricity Spectrally compressed hyperspectral imaging system


Also Published As

Publication number Publication date
CN105651384A (zh) 2016-06-08
CN105651384B (zh) 2018-01-16

Similar Documents

Publication Publication Date Title
WO2017121058A1 (zh) Plenoptic information acquisition system
CN106840398B (zh) Multispectral light-field imaging method
TWI525382B (zh) Camera array system including at least one Bayer-type camera, and associated method
KR101824290B1 (ko) High-resolution multispectral image capture technique
CN108055452A (zh) Image processing method, apparatus and device
CN107800965B (zh) Image processing method and apparatus, computer-readable storage medium, and computer device
US10438365B2 (en) Imaging device, subject information acquisition method, and computer program
Genser et al. Camera array for multi-spectral imaging
US11779210B2 (en) Ophthalmic imaging apparatus and system
CN106165398B (zh) Imaging element, imaging device, and image processing device
WO2019184185A1 (zh) Target image acquisition system and method
WO2019184184A1 (zh) Target image acquisition system and method
CN109889799B (zh) Monocular structured-light depth sensing method and device based on an RGB-IR camera
EP3756161A1 (en) Method and system for calibrating a plenoptic camera system
CN110533709A (zh) Depth image acquisition method, apparatus and system, and image capture device
CN108805921A (zh) Image acquisition system and method
KR20180000696A (ko) Method and apparatus for generating a stereoscopic image pair using at least one light-field camera
CN104754316B (zh) 3D imaging method, device, and imaging system
CN109084679A (zh) 3D measurement and acquisition device based on a spatial light modulator
CN105681650B (zh) Achromatization method for a light-field camera
JP2002218510A (ja) 3D image acquisition device
KR102184210B1 (ko) Three-dimensional camera system
CN111258166B (zh) Camera module, periscope camera module thereof, image acquisition method, and operating method
CN205679316U (zh) Digital-zoom spectral imager based on an adaptive microlens-array sensor
CN110796726A (zh) Three-dimensional imaging method, device, and terminal equipment

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16884617

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16884617

Country of ref document: EP

Kind code of ref document: A1