CN111340929A - Non-vision field imaging method based on ray tracing algorithm - Google Patents


Info

Publication number
CN111340929A
CN111340929A (application CN202010104486.9A)
Authority
CN
China
Prior art keywords
image
diffuse reflection
matrix
scene
nlos
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010104486.9A
Other languages
Chinese (zh)
Other versions
CN111340929B (en)
Inventor
张宇宁
吴术孔
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date
Filing date
Publication date
Application filed by Southeast University
Priority to CN202010104486.9A
Publication of CN111340929A
Application granted
Publication of CN111340929B
Active legal status
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00: 3D [Three Dimensional] image rendering
    • G06T 15/06: Ray-tracing
    • G06T 15/10: Geometric effects
    • G06T 15/20: Perspective computation
    • G06T 11/00: 2D [Two Dimensional] image generation
    • G06T 11/003: Reconstruction from projections, e.g. tomography
    • G06T 11/006: Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10: Complex mathematical operations
    • G06F 17/16: Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization


Abstract

The invention discloses a non-line-of-sight (NLOS) imaging method based on a ray tracing algorithm. An ordinary camera captures the diffuse reflection pattern that an NLOS scene produces on a diffusely reflecting surface. Borrowing the idea of ray tracing, the algorithm analyzes the path travelled by the light behind each pixel of the diffuse image from the NLOS scene to the camera, inverts that propagation path, and combines it with the bidirectional reflectance distribution function (BRDF) of the diffuse surface to infer an image of the NLOS scene object. The method requires no other expensive equipment or instruments, greatly reducing hardware cost compared with NLOS imaging methods that require laser emission and complex imaging devices.

Description

A non-line-of-sight imaging method based on a ray tracing algorithm

Technical field

The invention discloses a non-line-of-sight (NLOS) imaging method based on a ray tracing algorithm, belonging to the field of computational imaging.

Background

As imaging devices have multiplied, imaging modalities have steadily diversified. Since the first NLOS imaging method appeared in 2009, using a laser and a gated ICCD as the imaging hardware, NLOS imaging has shown application potential in military reconnaissance, hazardous-scene detection, and the vehicle blind-spot detection required by autonomous driving. Researchers have continually devised new NLOS imaging methods, but most still depend on a laser as an active light source, which largely confines NLOS imaging to the laboratory. Only in the last two years, with the appearance of passive NLOS imaging, has the field again revealed its potential for detection applications.

Summary of the invention

Purpose of the invention: to address the problems and deficiencies of the prior art described above, the invention proposes an NLOS imaging method based on a ray tracing algorithm that uses no active light source such as a laser, is inexpensive, and remains applicable outside the laboratory.

Technical solution: an NLOS imaging method based on a ray tracing algorithm, comprising the following steps:

Step 1: use a camera to acquire the diffuse reflection image that the NLOS scene produces on a diffusely reflecting surface;

Step 2: use an inverse perspective transform and resampling to extract the part of the diffuse reflection image that corresponds to a rectangular region in real space, and convert that part into a rectangular image of a specified resolution;

Step 3: determine the size and spatial position of the rectangular image obtained in step 2 and, from the number of pixels it contains, compute the spatial coordinates of the center of each pixel;

Step 4: determine the size and spatial position of the region containing the NLOS scene to be imaged;

Step 5: build a light transport matrix from the size and spatial position of the rectangular image (step 3) and of the region containing the NLOS scene to be imaged (step 4). The dimensions of the light transport matrix are set by the total pixel counts of the two images: the number of rows equals the total number of pixels of the diffuse reflection image of step 1, and the number of columns equals the total number of pixels of the NLOS scene to be imaged. Each element of the matrix represents the transfer of one pixel of the NLOS scene to one pixel of the diffuse reflection image, computed from the radiative-transport formula of the ray tracing algorithm;

Step 6: set up an optimization problem from the rectangular image of step 2 and the light transport matrix of step 5, and solve it to obtain the image of the NLOS scene.

Further, before step 3 is performed, the rectangular image may be downsampled according to actual requirements.

Further, the size and spatial position of the region containing the NLOS scene to be imaged and those of the rectangular image are determined in the same coordinate system.

Further, the transfer relationship is represented by a transformation matrix T:

T = [t_{ij}], with t_{ij} = (ρ·s)/(π·x·y) · cosθ_i·cosθ′ / ‖p_i − p′_j‖²

where p_i is any pixel of the diffuse reflection image, i ∈ [1, m·n], and p′_j is any pixel of the NLOS scene image to be imaged, j ∈ [1, x·y]; ρ is the reflectivity of the diffuse surface; θ′ is the angle between the incident ray and the surface normal of the NLOS scene; w_i is the direction vector of the incident ray and w_o that of the reflected ray; L_o(p, w_o) is the radiance reflected at point p of the diffuse surface along direction w_o; θ_i is the angle between the incident-ray direction and the normal of the diffuse material at p; and s is the area of the NLOS scene surface to be solved;

If the diffuse reflection image is represented by the image matrix d and the NLOS scene to be imaged by the image matrix f, the relationship is expressed as:

d = Tf    (9)

Further, step 6 specifically comprises the following sub-steps:

determine whether the transformation matrix T is invertible; if it is, compute the image matrix f of the NLOS scene to be imaged from equation (9):

f = T⁻¹d    (10)

if it is not invertible, proceed as follows:

compute the pseudo-inverse of the transformation matrix, T⁺ = (TᵀT)⁻¹Tᵀ; then f̂ = T⁺d is the least-squares approximation of f;

set up the optimization problem appropriate to the type of diffuse reflection image and solve it to obtain the image matrix f of the NLOS scene to be imaged:

f = arg min(‖Tf − d‖₂ + λ₁‖f‖_TV + λ₂B)    (12)

where ‖f‖_TV is the total variation of the image matrix f, computed as:

‖f‖_TV = Σ_{i,j} √( (f_{i+1,j} − f_{i,j})² + (f_{i,j+1} − f_{i,j})² )    (13)

B is the barrel function, computed as:

B = Σ_{i,j} b(f_{i,j}),  where b(v) = 0 for 0 ≤ v ≤ 10000 and b(v) = +∞ otherwise    (14)

where λ₁, λ₂ are regularization coefficients, f_{i,j} is the pixel value in row i, column j of the image matrix f, and x, y are the total numbers of rows and columns of f.

Further, the diffuse reflection image is any one of a single-channel grayscale image, a three-channel RGB color image, or a four-channel RGBG color image in the Bayer filter pattern;

if the diffuse reflection image is a single-channel grayscale image, a single optimization problem is set up and solved for the matrix f of the NLOS scene image; if it is a three-channel RGB color image, the three channels are separated, an optimization problem is set up and solved for each channel, and the three results are recombined into the matrix f; if it is a four-channel RGBG color image in the Bayer filter pattern, the four channels are separated, the two G channels are averaged to leave three channels, an optimization problem is set up and solved for each of the three channels, and the results are recombined into the matrix f.
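The Bayer-pattern handling described above can be sketched in NumPy as follows; the function name is hypothetical and an RGGB cell layout (R top-left, B bottom-right) is assumed:

```python
import numpy as np

def bayer_to_rgb_planes(raw):
    """Split an RGGB Bayer mosaic (H x W, H and W even) into R, G, B planes,
    averaging the two green samples of each 2x2 cell as described above."""
    r  = raw[0::2, 0::2].astype(float)   # top-left of each 2x2 cell
    g1 = raw[0::2, 1::2].astype(float)   # top-right green sample
    g2 = raw[1::2, 0::2].astype(float)   # bottom-left green sample
    b  = raw[1::2, 1::2].astype(float)   # bottom-right
    g = (g1 + g2) / 2.0                  # average the two G channels
    return r, g, b
```

Each of the three returned planes would then be reconstructed independently and recombined, as the text prescribes.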

Beneficial effects: compared with traditional active NLOS imaging, the NLOS imaging method of the invention based on a ray tracing algorithm needs neither an expensive active laser source nor a dedicated image acquisition device. Compared with passive imaging based on polarization, which needs a polarization acquisition device, and passive imaging based on coherence, which needs an interferometer, the invention relies only on the diffuse reflection intensity captured by an ordinary camera, greatly widening its applicability.

Description of drawings

Fig. 1 is a side view of an NLOS scene according to an embodiment of the invention;

Fig. 2 is a top view of an NLOS scene according to an embodiment of the invention;

Fig. 3 is a schematic diagram of the radiance propagation calculation;

Fig. 4 is a schematic diagram of a diffuse surface receiving incident light from the hemispherical space;

Fig. 5 is a schematic diagram of a diffuse surface receiving light from the NLOS scene;

Fig. 6 is a schematic diagram of the inverse-perspective-transform region within the camera FOV;

Fig. 7 is a schematic diagram of the diffuse image matrix and the NLOS scene image matrix.

Detailed description

The technical solution of the invention is further described below with reference to the accompanying drawings and embodiments.

The basic idea of the invention is as follows. Existing passive NLOS imaging relies on one of three principles: light intensity, polarization, or coherence. The invention reconstructs an NLOS scene from diffuse reflection intensity alone, without expensive imaging equipment: an ordinary camera records the intensity (grayscale or RGB) of the NLOS scene reflected by the diffuse surface, and the light-propagation principle of the ray tracing algorithm is used to derive the reverse propagation of the ray behind each pixel of the diffuse image, thereby reconstructing an image of the NLOS scene.

This embodiment therefore exploits the projection that the NLOS scene casts on the diffuse surface in front of it. Because of the omnidirectional scattering of the scene's light sources and the irregular reflection of the diffuse surface, this projection has lost the specific detail of the original scene, but it still carries part of the scene's information. Using that information, together with the simulated light paths of the ray tracing algorithm and the bidirectional reflectance distribution function of the diffuse surface, it is still possible to recover an image of the NLOS scene.

In the ray tracing algorithm, the light reflected from the diffuse surface into the camera is described by the computer graphics formula (1):

L_o(p, w_o) = ∫_{2π} f_r(p, w_i, w_o) · L_i(p′, −w_i) · cosθ_i dw_i    (1)

where p is a point on the diffuse material, p′ is a point on another object in the space, w_i is the direction vector of the incident ray and w_o that of the reflected ray, L_o(p, w_o) is the radiance reflected at point p of the diffuse surface along direction w_o, f_r(p, w_i, w_o) is the bidirectional reflectance distribution function (BRDF) value of the diffuse material at p for incident direction w_i and reflected direction w_o, L_i(p′, −w_i) is the radiance of the ray emitted (or reflected) from p′ and propagating along w_i, and θ_i is the angle between the incident-ray direction and the normal of the diffuse material at p. The integration dw_i over the 2π solid angle expresses that the light arriving at p comes from all incident rays in the entire hemispherical space, as shown in Fig. 4.

From the definition of solid angle:

dw_i = (cosθ′ / ‖p − p′‖²) dA    (2)

the solid-angle integral of formula (1) can be converted into an area integral:

L_o(p, w_o) = ∫_A f_r(p, w_i, w_o) · L_i(p′, −w_i) · cosθ_i · (cosθ′ / ‖p − p′‖²) dA    (3)

where θ′ is the angle between the incident ray and the surface normal of the NLOS scene. For a scene such as that in Fig. 1, it can be assumed that the only light incident on the diffuse surface comes from the NLOS scene, and other reflected light can be ignored: if only the objects of the NLOS scene emit light, all other reflections are much weaker than the direct light from those objects, so neglecting them does not compromise the result. This is, in effect, an application of the importance-sampling principle of the ray tracing algorithm. Considering only rays arriving directly from the NLOS scene, as shown in Fig. 5, the domain of the area integral in formula (3) is the area of the NLOS scene. Moreover, since the reconstruction assumes that the NLOS scene is a two-dimensional rectangular image, the integration domain of formula (3) ultimately becomes a rectangular region. For ease of calculation and understanding, the NLOS scene can be taken to be an image shown on a display screen, and the task of NLOS imaging is then to reconstruct this image from the diffuse reflection information.

Integrals are inconvenient in actual numerical code, and the ray tracing algorithm in any case operates on discretized rays. Using the two-dimensional Monte Carlo integration theorem of calculus, the area integral of formula (3) is therefore converted into a sum:

L_o(p, w_o) = (s/N) · Σ_{j=1}^{N} f_r(p, w_i, w_o) · L_i(p′_j, −w_i) · cosθ_i · cosθ′ / ‖p − p′_j‖²    (4)
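The substitution used above, replacing the area integral by an (area/N)-weighted sum over sample points, can be illustrated with a generic two-dimensional Monte Carlo estimate (an illustrative sketch with a made-up integrand, not code from the patent):

```python
import numpy as np

# 2D Monte Carlo integration: an area integral over a rectangle is estimated
# by (area / N) times the sum of the integrand at N random sample points,
# the same substitution that turns equation (3) into the sum in (4).
rng = np.random.default_rng(0)

def mc_area_integral(g, width, height, n_samples):
    xs = rng.uniform(0.0, width, n_samples)
    ys = rng.uniform(0.0, height, n_samples)
    area = width * height
    return area / n_samples * np.sum(g(xs, ys))

# Example integrand with a known value: the integral of x*y over [0,2]x[0,3] is 9.
estimate = mc_area_integral(lambda x, y: x * y, 2.0, 3.0, 200_000)
```

With 200 000 samples the estimate lands within a fraction of a percent of the analytic value.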

In the ray tracing algorithm, the bidirectional reflectance distribution function of an ideal diffuse surface is a constant, independent of the incident ray, the reflected ray, and the surface point:

f_r(p, w_i, w_o) = ρ/π    (5)

where ρ is the reflectivity of the diffuse surface. Formula (4) can therefore be written in a more compact form:

L_o(p, w_o) = (ρ·s)/(π·N) · Σ_{j=1}^{N} L_i(p′_j, −w_i) · cosθ_i · cosθ′ / ‖p − p′_j‖²    (6)

In formula (6), p is a point on the diffuse surface, i.e. the position in real space corresponding to a pixel of the diffuse image captured by the camera, and p′ is a point in the NLOS scene. Formula (6) thus establishes the transfer from the brightness (or RGB value) of a point of the NLOS scene image to be recovered to the brightness of a point of the diffuse image captured by the camera.
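The per-pair transfer coefficient of formula (6) might be sketched in NumPy as follows; the function name and calling convention are hypothetical, and the geometry (unit normals, explicit point coordinates) is supplied by the caller:

```python
import numpy as np

def transfer_coeff(p, p_prime, n_wall, n_scene, rho, s, n_samples):
    """Coefficient coupling the radiance of scene point p_prime to the
    radiance reflected at wall point p, per formula (6):
        rho * s / (pi * N) * cos(theta_i) * cos(theta') / ||p - p'||^2
    p, p_prime: 3-vectors; n_wall, n_scene: unit surface normals."""
    n_wall = np.asarray(n_wall, float)
    n_scene = np.asarray(n_scene, float)
    v = np.asarray(p_prime, float) - np.asarray(p, float)
    r2 = v @ v                            # squared distance ||p - p'||^2
    w_i = v / np.sqrt(r2)                 # direction from wall point to scene point
    cos_i = max(w_i @ n_wall, 0.0)        # incidence angle at the diffuse wall
    cos_p = max(-w_i @ n_scene, 0.0)      # emission angle at the scene surface
    return rho * s / (np.pi * n_samples) * cos_i * cos_p / r2
```

With both surfaces facing each other head-on, the coefficient reduces to ρ·s/(π·N·r²), so doubling the distance quarters it, as the inverse-square factor requires.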

As shown in Fig. 1, in one example an occluder stands between the camera and the NLOS scene, so the camera cannot image the NLOS scene directly, but it can capture the projection that the scene casts on the diffuse surface. Because that projection falls mainly on the side of the diffuse surface nearer the NLOS scene, the camera should be rotated by some angle toward the scene so that it captures as much of the projected information as possible, as shown in Fig. 2. Because of this rotation, the captured image carries depth-of-field information; that is, the camera FOV is not a rectangular region. To simplify the calculation, a rectangular region within the FOV can be extracted; it should contain as much information from the NLOS scene as possible, and should therefore lie as close to the scene side as possible. Assuming this region has been marked on the diffuse surface (i.e. calibrated in advance), its content can be extracted by an inverse perspective transform and converted into a rectangular image, as shown in Fig. 6.

Suppose the extracted part is an image of resolution (m, n). In practice, if m and n are large, downsampling can reduce the resolution to some extent. Suppose the NLOS scene image to be recovered has resolution (x, y), as shown in Fig. 6. This resolution can be set freely in the calculation, but if it is too large the computation slows down; in practice x and y are kept on the order of 10 to 100, since larger values greatly increase the computation time.

The brightness (or RGB) value of any pixel p_i (i ∈ [1, m·n]) of the diffuse image is contributed by all pixels p′_j (j ∈ [1, x·y]) of the NLOS scene image; by formula (6):

L_o(p_i, w_o) = Σ_{j=1}^{x·y} (ρ·s)/(π·x·y) · L_i(p′_j, −w_i) · cosθ_i · cosθ′ / ‖p_i − p′_j‖²    (7)

Mathematically the two images can be viewed as two matrices, and if a transfer relationship exists between them it can be described by a transformation matrix. A transformation matrix T is therefore defined to describe the relationship between the two images. From the formula above, each element of T describes the brightness transfer from p′_j to p_i, with i ∈ [1, m·n] and j ∈ [1, x·y], so the dimensions of T are [m·n, x·y]. This matrix is very large, which is why the diffuse image must be downsampled and the resolution of the NLOS scene image to be recovered cannot be set too high; otherwise the computation time becomes very long. The transformation matrix T is therefore:

T = [t_{ij}], with t_{ij} = (ρ·s)/(π·x·y) · cosθ_i·cosθ′ / ‖p_i − p′_j‖², i ∈ [1, m·n], j ∈ [1, x·y]    (8)

With the transformation matrix in place, the relation between the diffuse image and the NLOS scene image is a matrix product: let the diffuse image be the matrix d (reshaped to [m·n, 1] for the calculation) and the NLOS scene image be the matrix f (reshaped to [x·y, 1]); then:

d = Tf    (9)
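Assembling the full light transport matrix of formulas (8) and (9) from two pixel grids might look like the sketch below; the wall and scene geometry is a made-up example (two parallel 1 m patches), not taken from the patent:

```python
import numpy as np

def grid_centers(origin, u_vec, v_vec, rows, cols):
    """Centers of a rows x cols pixel grid on the rectangle spanned by u_vec
    (along columns) and v_vec (along rows), starting at `origin`."""
    centers = np.empty((rows * cols, 3))
    for i in range(rows):
        for j in range(cols):
            centers[i * cols + j] = (origin
                                     + (j + 0.5) / cols * u_vec
                                     + (i + 0.5) / rows * v_vec)
    return centers

def build_transport_matrix(wall_pts, n_wall, scene_pts, n_scene, rho, s):
    """Light transport matrix T of equation (8): entry (i, j) couples scene
    pixel j to wall pixel i.  Shape: (m*n, x*y)."""
    N = len(scene_pts)
    T = np.empty((len(wall_pts), N))
    for i, p in enumerate(wall_pts):
        v = scene_pts - p                          # wall point -> scene points
        r2 = np.sum(v * v, axis=1)
        w = v / np.sqrt(r2)[:, None]
        cos_i = np.clip(w @ n_wall, 0.0, None)     # incidence angles at the wall
        cos_p = np.clip(-(w @ n_scene), 0.0, None) # emission angles at the scene
        T[i] = rho * s / (np.pi * N) * cos_i * cos_p / r2
    return T

# Hypothetical layout: 1 m x 1 m diffuse wall patch in the plane z = 0,
# 1 m x 1 m scene patch in the plane z = 2, facing the wall.
wall  = grid_centers(np.array([0., 0., 0.]), np.array([1., 0., 0.]),
                     np.array([0., 1., 0.]), 8, 8)
scene = grid_centers(np.array([0., 0., 2.]), np.array([1., 0., 0.]),
                     np.array([0., 1., 0.]), 4, 4)
T = build_transport_matrix(wall, np.array([0., 0., 1.]),
                           scene, np.array([0., 0., -1.]), rho=0.7, s=1.0)
f = np.ones(16)              # flat scene image, flattened to [x*y, 1] order
d = T @ f                    # simulated diffuse image, flattened to [m*n, 1]
```

The shapes match the text: T is (m·n) x (x·y), here 64 x 16, and d = Tf is the forward simulation of equation (9).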

In practice, however, the available information is the diffuse image d, and the unknown is the NLOS scene image f, so the formula above must be inverted. Two cases arise:

(1) When the transformation matrix is invertible, the matrix f of the NLOS scene image follows from equation (10):

f = T⁻¹d    (10)

(2) When the transformation matrix is not invertible, solving for the matrix f of the NLOS scene image requires an optimization problem. First, the pseudo-inverse of the transformation matrix can be computed, T⁺ = (TᵀT)⁻¹Tᵀ; then f̂ = T⁺d is the least-squares approximation of f. In addition, an optimization problem must be set up to solve for f:

f = arg min(‖Tf − d‖₂)    (11)
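A small numerical check of the pseudo-inverse route (illustrative only, with a random well-conditioned matrix standing in for T): in the noiseless overdetermined case, where the diffuse image has more pixels than the scene image (m·n > x·y), f̂ = T⁺d recovers f exactly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdetermined system: more diffuse-image pixels (rows) than scene pixels
# (columns), as in the text where m*n exceeds x*y.
T = rng.uniform(0.1, 1.0, size=(40, 10))   # stand-in transport matrix
f_true = rng.uniform(0.0, 1.0, size=10)    # ground-truth scene image (flattened)
d = T @ f_true                             # noiseless diffuse image

T_pinv = np.linalg.pinv(T)                 # T+ = (T^T T)^(-1) T^T for full column rank
f_hat = T_pinv @ d                         # least-squares approximation of f
```

With measurement noise on d, f̂ is only the least-squares fit, which is why the regularized problem (12) below is introduced.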

This optimization problem still constrains the convergence of f too loosely. Following the regularization chapter of Convex Optimization, and exploiting the fact that natural images have small total variation, total-variation regularization is introduced to further constrain the problem. In addition, since f is an image, convergence is accelerated by restricting f to positive values; a barrel function is added to confine it to the range 0 to 10000:

f = arg min(‖Tf − d‖₂ + λ₁‖f‖_TV + λ₂B)    (12)

where ‖f‖_TV is the total variation of f, computed as:

‖f‖_TV = Σ_{i,j} √( (f_{i+1,j} − f_{i,j})² + (f_{i,j+1} − f_{i,j})² )    (13)

B is the barrel function, computed as:

B = Σ_{i,j} b(f_{i,j}),  where b(v) = 0 for 0 ≤ v ≤ 10000 and b(v) = +∞ otherwise    (14)
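A minimal sketch of solving a variant of problem (12). For differentiability, the total-variation term is replaced here by a quadratic smoothness penalty (sum of squared forward differences), and the barrel function B is enforced by projecting each iterate into [0, 10000]; these are simplifications of the scheme the text prescribes, and the function name is hypothetical:

```python
import numpy as np

def reconstruct(T, d, shape, lam=1e-3, step=None, iters=2000):
    """Projected gradient descent on ||T f - d||^2 + lam * sum of squared
    forward differences of f.  The TV term of (12) is replaced by this
    differentiable quadratic smoothness penalty, and the barrel function B
    by clipping each iterate into [0, 10000]."""
    x, y = shape
    f = np.zeros(x * y)
    if step is None:
        # step <= 1/L, where L = 2*||T||^2 + 8*lam bounds the Lipschitz
        # constant of the gradient (8 bounds the finite-difference operator)
        step = 0.5 / (np.linalg.norm(T, 2) ** 2 + 8.0 * lam)
    for _ in range(iters):
        g_data = 2.0 * T.T @ (T @ f - d)          # gradient of the data term
        img = f.reshape(x, y)
        gx = np.diff(img, axis=0)                 # vertical forward differences
        gy = np.diff(img, axis=1)                 # horizontal forward differences
        g_smooth = np.zeros_like(img)             # gradient of the smoothness term
        g_smooth[:-1, :] -= 2.0 * gx
        g_smooth[1:, :] += 2.0 * gx
        g_smooth[:, :-1] -= 2.0 * gy
        g_smooth[:, 1:] += 2.0 * gy
        f = f - step * (g_data + lam * g_smooth.ravel())
        f = np.clip(f, 0.0, 10000.0)              # projection playing the role of B
    return f.reshape(x, y)
```

A true TV term would instead be handled by a proximal or primal-dual solver; the projected gradient scheme above is just the simplest demonstrably convergent stand-in.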

The specific operating steps of this embodiment are:

Step 1: the camera acquires the diffuse image produced by light from the NLOS scene reflected off the diffuse surface. When photographing the diffuse image, the camera view is rotated by some angle toward the NLOS side to gather more diffuse reflection information; the captured image therefore carries depth-of-field information, and the captured rectangular image does not correspond to a rectangular region of real space. Subsequent processing must extract the part of the image that corresponds to a rectangular region of the actual scene, and that part should contain as much diffuse reflection information as possible. In practice the diffuse image may be a single-channel grayscale image, a three-channel RGB color image, or a four-channel RGBG Bayer-pattern color image, depending on the camera used, and the different types are handled differently by the algorithm;

Step 2: use an inverse perspective transform and resampling to extract the part of the image that corresponds to a rectangular region of the actual scene. The reference points of the inverse perspective transform can be the four corners of a rectangle that already exists in the actual scene, or the four corner vertices of a rectangle calibrated on the diffuse surface in advance. The inverse perspective transform can use the relevant OpenCV APIs, or extract the ROI from the image and interpolate, converting it into a rectangular image;
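As the step above notes, OpenCV's cv2.getPerspectiveTransform and cv2.warpPerspective implement this transform. The NumPy-only sketch below (hypothetical helper names) makes the underlying homography computation explicit, using the standard direct-linear-transform formulation for the four corner correspondences:

```python
import numpy as np

def homography_from_points(src, dst):
    """3x3 homography H mapping src[i] -> dst[i] for four point pairs; this is
    the computation behind cv2.getPerspectiveTransform."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h00 x + h01 y + h02) / (h20 x + h21 y + h22), rearranged to be
        # linear and homogeneous in the nine entries of H (and same for v)
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.asarray(rows, float)
    _, _, vt = np.linalg.svd(A)     # null vector = smallest right singular vector
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Apply H to an (n, 2) array of points, with perspective division."""
    p = np.hstack([np.asarray(pts, float), np.ones((len(pts), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]
```

Resampling the marked quadrilateral into an (m, n) rectangular image then amounts to evaluating the inverse homography at each output pixel center and interpolating the source image there, which is what cv2.warpPerspective does.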

Step 3: determine the size and spatial position of the rectangle in the actual scene corresponding to the resampled image. Once these are fixed, count the pixels in the region and compute the real-space coordinate of the center of each pixel. These coordinates are relative; the coordinate system can be chosen freely, but it must be the same one later used to fix the position and size of the NLOS scene. If the captured image has too many pixels, to keep the computation tractable without losing much accuracy, first downsample the image to reduce its resolution and then compute the pixel-center coordinates;
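The pixel-center computation and the downsampling of this step can be sketched as follows; the function names and the corner-origin convention are assumptions, and block-mean averaging is just one reasonable downsampling choice:

```python
import numpy as np

def patch_pixel_centers(width, height, rows, cols):
    """In-plane center coordinates of each pixel of a width x height
    rectangular region sampled at rows x cols, with the origin placed at a
    corner of the patch (a convention assumed here).  Shape: (rows, cols, 2)."""
    xs = (np.arange(cols) + 0.5) * width / cols
    ys = (np.arange(rows) + 0.5) * height / rows
    return np.stack(np.meshgrid(xs, ys), axis=-1)

def downsample(img, factor):
    """Simple block-mean downsampling by an integer factor, one way to reduce
    the resolution before building the transport matrix."""
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```

The in-plane centers would then be lifted into the shared 3D coordinate system using the calibrated position and orientation of the diffuse surface.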

Step 4: Determine the size and spatial position of the rectangular region in which the NLOS scene to be imaged is located. The actual NLOS scene may be composed of three-dimensional objects, but in the calculation it can be assumed to be a flat rectangular image; this simplifies the computation and does not affect the actual imaging result. The size and spatial position of this rectangular region may either be assumed in the calculation or calibrated before it. The assumed size and spatial position of the rectangle only affect the extent of the finally imaged scene, not its content. When the exact location of the scene is uncertain, the rectangle can be assumed somewhat larger so that it contains the NLOS scene to be reconstructed. As stated above, the size and spatial position of the rectangular region of the NLOS scene and those of the rectangular region in the actual scene corresponding to the diffuse reflection image are determined in the same coordinate system.

Step 5: Build the light transmission matrix from the sizes and spatial positions of the actual rectangular regions corresponding to the diffuse reflection image and to the NLOS scene. The dimensions of the light transmission matrix depend on the total pixel count of the rectangular image and that of the NLOS scene to be recovered: the number of rows equals the total number of pixels of the diffuse reflection image, and the number of columns equals the total number of pixels of the NLOS scene to be recovered. Each element of the light transmission matrix represents the transfer relation from one pixel of the NLOS scene image to one pixel of the diffuse reflection image, and is computed with the radiative transfer formula of the ray tracing algorithm.
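Under a Lambertian reflection assumption, the construction of the light transmission matrix can be sketched as below (names and the exact radiometric factor are assumptions of this sketch; the patent computes each element with the radiative transfer formula of its ray tracing algorithm). Each element combines the reflectivity ρ of the diffuse surface, the cosine factors at the diffuse surface and at the NLOS scene surface, the area s represented by one scene pixel, and the inverse-square distance between the two pixel centres:

```python
import numpy as np

def transport_matrix(wall_pts, wall_normal, scene_pts, scene_normal, rho, s):
    """Build the (m*n) x (x*y) light transmission matrix.

    wall_pts  : (m*n, 3) pixel-centre coordinates on the diffuse surface
    scene_pts : (x*y, 3) pixel-centre coordinates on the NLOS scene rectangle
    rho       : diffuse reflectance of the wall
    s         : area represented by one NLOS scene pixel
    """
    # Vector from each scene point to each wall point: shape (m*n, x*y, 3).
    v = wall_pts[:, None, :] - scene_pts[None, :, :]
    r2 = np.sum(v * v, axis=-1)
    d = np.sqrt(r2)
    # cos(theta_i): angle at the diffuse surface (normal points into the room);
    # cos(theta'): angle at the NLOS scene surface (normal points at the wall).
    cos_i = np.clip(-(v @ wall_normal) / d, 0.0, None)
    cos_p = np.clip((v @ scene_normal) / d, 0.0, None)
    return (rho / np.pi) * cos_i * cos_p * s / r2
```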

Step 6: Formulate an optimization problem from the resampled diffuse reflection image and the light transmission matrix, and solve it. The formulation depends on the type of the captured diffuse reflection image. If the captured image is a single-channel greyscale image, only one optimization problem needs to be established. If it is an RGB three-channel colour image, the three channels must be separated and an optimization problem established for each channel. If it is an RGBG four-channel colour image in Bayer filter mode, the four channels must be separated and the two G channels averaged to obtain three channels of data, for which three optimization problems are then established. After the optimization problems for a colour image have been solved, the three recovered channels are combined to obtain the reconstructed colour image.
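A minimal sketch of the solving step (illustrative names; the patent's full formulation adds total-variation and range penalties, whereas this baseline uses a plain least-squares solve): a greyscale image yields a single problem, while an RGB image is split into three channels that are solved independently and recombined:

```python
import numpy as np

def reconstruct(diffuse_img, T, scene_shape):
    """Recover the NLOS scene from the rectified diffuse reflection image.

    diffuse_img : 2-D greyscale image or 3-D (H, W, 3) RGB image
    T           : light transmission matrix, rows = diffuse pixels,
                  columns = scene pixels
    scene_shape : (rows, cols) of the NLOS scene rectangle
    """
    if diffuse_img.ndim == 2:  # single-channel greyscale image
        d = diffuse_img.ravel()
        f = np.linalg.lstsq(T, d, rcond=None)[0]
        return f.reshape(scene_shape)
    # RGB image: solve the three channels independently, then recombine.
    channels = [np.linalg.lstsq(T, diffuse_img[..., c].ravel(), rcond=None)[0]
                for c in range(3)]
    return np.stack([f.reshape(scene_shape) for f in channels], axis=-1)
```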

Claims (7)

1. A non-visual field imaging method based on a ray tracing algorithm, characterized in that the method comprises the following steps:
step 1: acquiring a diffuse reflection image generated by an NLOS scene on a diffuse reflection surface by using a camera;
step 2: extracting, from the diffuse reflection image, the partial image corresponding to a rectangular area in actual space by adopting inverse perspective transformation and resampling techniques, and transforming the partial image into a rectangular image with a specified resolution;
step 3: determining the size and the spatial position of the rectangular image obtained in step 2, and obtaining the spatial coordinates of the center of each pixel point according to the number of pixel points in the rectangular image;
step 4: determining the size and the spatial position of the area where the NLOS scene to be imaged is located;
step 5: establishing a light transmission matrix according to the size and the spatial position of the rectangular image obtained in step 3 and the size and the spatial position of the area where the NLOS scene to be imaged is located obtained in step 4; the dimensions of the light transmission matrix depend on the total pixel number of the rectangular image and the total pixel number of the NLOS scene to be imaged, the number of rows being the total pixel number of the diffuse reflection image obtained in step 1, and the number of columns being the total pixel number of the NLOS scene to be imaged; each element in the light transmission matrix represents a conversion relation from one pixel of the NLOS scene to be imaged to one pixel of the diffuse reflection image, and the conversion relation is calculated by a light radiation propagation formula in the ray tracing algorithm;
step 6: establishing an optimization problem according to the rectangular image obtained in step 2 and the light transmission matrix obtained in step 5, and solving the optimization problem to obtain an NLOS scene image.
2. The ray tracing algorithm-based non-visual field imaging method according to claim 1, wherein: before step 3 is performed, the rectangular image may be down-sampled according to actual requirements.
3. The ray tracing algorithm-based non-visual field imaging method according to claim 1, wherein: and the size and the spatial position of the region where the NLOS scene to be imaged is located and the size and the spatial position of the rectangular image are determined under the same coordinate system.
4. The ray tracing algorithm-based non-visual field imaging method according to claim 1, wherein: the conversion relation is expressed by a conversion matrix T:
T(i,j) = (ρ/π)·cosθ_i·cosθ'·s/||p_i − p'_j||² (8)
in the formula, p_i is any pixel point on the diffuse reflection image, where i ∈ [1, m·n]; p'_j is any pixel point on the NLOS scene image to be imaged, where j ∈ [1, x·y]; ρ represents the reflectivity of the diffuse reflection surface; θ' represents the angle between the incident ray and the normal of the NLOS scene surface; w_i is the direction vector of the incident light; w_o is the direction vector of the reflected light; L_o(p, w_o) represents the brightness of the light reflected along the direction w_o at the point p of the diffuse reflection surface; θ_i is the angle between the incident light direction vector and the normal vector of the diffuse reflection material at the point p; and s is the area of the surface of the NLOS scene to be solved;
assuming that the diffuse reflection image is represented as an image matrix d and the NLOS scene to be imaged is represented as an image matrix f, the relationship is expressed as follows:
d=Tf (9)。
5. the ray tracing algorithm-based non-visual field imaging method according to claim 4, wherein: the step 6 specifically includes the following substeps:
judging whether the conversion matrix T is invertible; if it is invertible, calculating from formula (9) the image matrix f corresponding to the NLOS scene to be imaged:
f=T-1d (10)
if T is not invertible, the following steps are carried out:
obtaining the pseudo-inverse of the conversion matrix to compute a least-squares approximation of f:
f=(TᵀT)⁻¹Tᵀd (11)
establishing a corresponding optimization problem according to the type of the diffuse reflection image, and solving to obtain an image matrix f corresponding to the NLOS scene to be imaged:
f = argmin(||Tf−d||_2 + λ_1||f||_TV + λ_2·B) (12)
wherein ||f||_TV is the total variation of the image matrix f, calculated as follows:
||f||_TV = Σ_{i=1}^{x−1} Σ_{j=1}^{y−1} [(f_{i+1,j} − f_{i,j})² + (f_{i,j+1} − f_{i,j})²]^{1/2} (13)
b is a barrel function, and the calculation method is as follows:
Figure FDA0002388061820000024
wherein λ_1 and λ_2 are regularization coefficients, f_{i,j} is the pixel value at row i and column j of the image matrix f, and x and y respectively represent the total number of rows and the total number of columns of the image matrix f.
6. The ray tracing algorithm-based non-visual field imaging method according to claim 5, wherein: the diffuse reflection image comprises any one of a single-channel gray image, an RGB three-channel color image and an RGBG four-channel color image in a Bayer filtering mode.
7. The ray tracing algorithm-based non-visual field imaging method according to claim 6, wherein:
if the diffuse reflection image is a single-channel gray image, establishing an optimization problem, and solving to obtain a matrix f corresponding to the NLOS scene image; if the diffuse reflection image is an RGB three-channel color image, separating three-channel data, respectively establishing an optimization problem for the three channels, solving to obtain three-channel data, and combining the three-channel data to obtain a matrix f corresponding to the NLOS scene image; and if the diffuse reflection image is an RGBG four-channel color image in a Bayer filtering mode, separating data of four channels, averaging data of two G channels to obtain data of three channels, establishing an optimization problem aiming at the three channels, solving to obtain three-channel data, and combining the three-channel data to obtain a matrix f corresponding to the NLOS scene image.
CN202010104486.9A 2020-02-20 2020-02-20 A non-line-of-sight imaging method based on ray tracing algorithm Active CN111340929B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010104486.9A CN111340929B (en) 2020-02-20 2020-02-20 A non-line-of-sight imaging method based on ray tracing algorithm

Publications (2)

Publication Number Publication Date
CN111340929A true CN111340929A (en) 2020-06-26
CN111340929B CN111340929B (en) 2022-11-25

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111784815A (en) * 2020-07-03 2020-10-16 哈尔滨工业大学 A passive non-field of view penumbra imaging method based on transmission window
CN113052833A (en) * 2021-04-20 2021-06-29 东南大学 Non-vision field imaging method based on infrared thermal radiation
CN113093389A (en) * 2021-04-15 2021-07-09 东南大学 Holographic waveguide display device based on non-visual field imaging and method thereof
CN113109787A (en) * 2021-04-15 2021-07-13 东南大学 Non-vision field imaging device and method based on thermal imaging camera
CN113109948A (en) * 2021-04-15 2021-07-13 东南大学 Polarization non-visual field imaging method based on diffuse reflection surface
CN113138027A (en) * 2021-05-07 2021-07-20 东南大学 Far infrared non-vision object positioning method based on bidirectional refractive index distribution function
CN113204010A (en) * 2021-03-15 2021-08-03 锋睿领创(珠海)科技有限公司 Non-visual field object detection method, device and storage medium
CN113344774A (en) * 2021-06-16 2021-09-03 东南大学 Non-visual field imaging method based on depth convolution inverse graph network
CN113411508A (en) * 2021-05-31 2021-09-17 东南大学 Non-vision field imaging method based on camera brightness measurement
WO2023279249A1 (en) * 2021-07-05 2023-01-12 Shanghaitech University Non-line-of-sight imaging via neural transient field

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198511A (en) * 2011-09-15 2013-07-10 佳能株式会社 Image processing apparatus and image processing method
JP2015114775A (en) * 2013-12-10 2015-06-22 キヤノン株式会社 Image processor and image processing method
US20190287294A1 (en) * 2018-03-17 2019-09-19 Nvidia Corporation Reflection denoising in ray-tracing applications



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant