CN111340929B - Non-line-of-sight imaging method based on ray tracing algorithm

Non-line-of-sight imaging method based on ray tracing algorithm

Info

Publication number
CN111340929B
Authority
CN
China
Prior art keywords
image
diffuse reflection
matrix
scene
nlos scene
Prior art date
2020-02-20
Legal status
Active
Application number
CN202010104486.9A
Other languages
Chinese (zh)
Other versions
CN111340929A (en)
Inventor
张宇宁
吴术孔
Current Assignee
Southeast University
Original Assignee
Southeast University
Priority date: 2020-02-20
Filing date: 2020-02-20
Publication date: 2022-11-25
Application filed by Southeast University
Priority to CN202010104486.9A
Publication of CN111340929A
Application granted
Publication of CN111340929B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/06 Ray-tracing
    • G06T 15/10 Geometric effects
    • G06T 15/20 Perspective computation
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06T 11/006 Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F 17/10 Complex mathematical operations
    • G06F 17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Pure & Applied Mathematics (AREA)
  • Algebra (AREA)
  • Computational Mathematics (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Graphics (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Geometry (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a non-line-of-sight (NLOS) imaging method based on a ray tracing algorithm. An ordinary camera photographs the diffuse reflection that an NLOS scene produces on a diffusely reflecting surface; following the idea of the ray tracing algorithm, the method analyzes the path of the light corresponding to each pixel of the diffuse reflection image from the NLOS scene to the camera, computes this propagation path in reverse, and combines it with the bidirectional reflectance distribution function (BRDF) of the diffuse surface to infer the image of the NLOS scene object. The method requires no other expensive equipment or instruments, greatly reducing equipment cost compared with NLOS imaging methods that require emitted laser light and complex imaging devices.

Description

Non-line-of-sight imaging method based on ray tracing algorithm
Technical Field
The invention discloses a non-line-of-sight imaging method based on a ray tracing algorithm, belonging to the field of computational imaging.
Background
As imaging devices have proliferated and imaging modalities have diversified, non-line-of-sight (NLOS) imaging has shown application potential in military reconnaissance, hazardous-scene detection, and blind-spot detection for autonomous driving ever since the first NLOS imaging methods, using lasers and gated ICCDs as imaging devices, appeared in 2009. Through continuous innovation, NLOS imaging methods have multiplied, but most still depend on a laser as an active light source, which greatly limits the application of NLOS imaging outside the laboratory. Only in the last two years has passive NLOS imaging begun to emerge, again unlocking its potential for detection applications.
Disclosure of Invention
Purpose of the invention: in view of the problems and deficiencies of the prior art, the invention provides a non-line-of-sight imaging method based on a ray tracing algorithm that uses no active light source such as a laser, is inexpensive, and can still be applied outside the laboratory.
The technical scheme is as follows: a non-line-of-sight imaging method based on a ray tracing algorithm, comprising the following steps:
Step 1: acquiring, with a camera, a diffuse reflection image produced by an NLOS scene on a diffusely reflecting surface;
Step 2: extracting, by inverse perspective transformation and resampling, the part of the diffuse reflection image corresponding to a rectangular region in real space, and transforming that part into a rectangular image of specified resolution;
Step 3: determining the size and spatial position of the rectangular image obtained in step 2, and computing the spatial coordinate of the center of each pixel from the number of pixels in the rectangular image;
Step 4: determining the size and spatial position of the region occupied by the NLOS scene to be imaged;
Step 5: building a light transport matrix from the size and spatial position of the rectangular image obtained in step 3 and of the NLOS scene region obtained in step 4; the dimensions of the light transport matrix depend on the total pixel count of the rectangular image and of the NLOS scene to be imaged: the number of rows equals the total pixel count of the diffuse reflection image obtained in step 1, and the number of columns equals the total pixel count of the NLOS scene to be imaged; each element of the light transport matrix represents the transfer from one pixel of the NLOS scene to be imaged to one pixel of the diffuse reflection image, computed with the radiance propagation formula of the ray tracing algorithm;
Step 6: formulating an optimization problem from the rectangular image obtained in step 2 and the light transport matrix obtained in step 5, and solving it to obtain the NLOS scene image.
Further, before step 3 is performed, the rectangular image may be down-sampled according to actual requirements.
Further, the size and spatial position of the region occupied by the NLOS scene to be imaged and those of the rectangular image are determined in the same coordinate system.
Further, the transfer relation is represented by a transformation matrix T whose elements are

T_ij = ρ s cos θ_i cos θ' / (π · x·y · |p_i − p'_j|²)    (8)

where p_i is any pixel of the diffuse reflection image, i ∈ [1, m·n]; p'_j is any pixel of the NLOS scene image to be imaged, j ∈ [1, x·y]; ρ is the reflectivity of the diffuse surface; θ' is the angle between the incident ray and the normal of the NLOS scene surface; w_i is the direction vector of the incident ray; w_o is the direction vector of the reflected ray; L_o(p, w_o) is the radiance reflected from point p of the diffuse surface along direction w_o; θ_i is the angle between the incident ray direction and the normal of the diffuse material at p; and s is the area of the NLOS scene surface to be recovered.

Assuming the diffuse reflection image is represented as an image matrix d and the NLOS scene to be imaged as an image matrix f, the relationship is

d = Tf    (9)
further, the step 6 specifically includes the following sub-steps:
judging whether the conversion matrix T is reversible, and if so, calculating according to the formula (9) to obtain an image matrix f corresponding to the NLOS scene to be imaged;
f=T -1 d (10)
if the current is not reversible, the following steps are carried out:
obtaining a pseudo-inverse of the transformation matrix
Figure BDA0002388061830000025
Figure BDA0002388061830000024
A least squares approximation of f;
establishing a corresponding optimization problem according to the type of the diffuse reflection image, and solving to obtain an image matrix f corresponding to the NLOS scene to be imaged:
f=arg min(||Tf-d|| 21 ||f|| TV2 B) (12)
wherein | f | purple TV The total variation of the image matrix f is calculated as follows:
Figure BDA0002388061830000022
b is a barrel function, and the calculation method is as follows:
Figure BDA0002388061830000023
wherein λ is 12 For regularizing coefficients, f i,j The pixel values of i rows and j columns of the image matrix f, x and y respectively represent the total row number and the total column number of the image matrix f.
Further, the diffuse reflection image is any one of a single-channel grayscale image, an RGB three-channel color image, and an RGBG four-channel color image in Bayer filter mode.
If the diffuse reflection image is a single-channel grayscale image, one optimization problem is established and solved to obtain the matrix f of the NLOS scene image. If the diffuse reflection image is an RGB three-channel color image, the data of the three channels are separated, an optimization problem is established and solved for each channel, and the three solved channels are combined into the matrix f of the NLOS scene image. If the diffuse reflection image is an RGBG four-channel color image in Bayer filter mode, the data of the four channels are separated, the two G channels are averaged to obtain three channels of data, an optimization problem is established and solved for each of the three channels, and the solved channels are combined into the matrix f of the NLOS scene image.
Beneficial effects: compared with traditional active non-line-of-sight imaging methods, the ray-tracing-based method needs neither an expensive active laser source nor special image acquisition devices; and unlike passive approaches based on polarization information, which require polarization acquisition devices, or those based on coherence information, which require an interferometer, it needs only an ordinary camera.
Drawings
FIG. 1 is a side view of a non-line-of-sight scene according to an embodiment of the invention;
FIG. 2 is a top view of a non-line-of-sight scene according to an embodiment of the invention;
FIG. 3 is a diagram illustrating the radiance propagation calculation;
FIG. 4 is a schematic diagram of a diffuse reflective surface receiving incident light from a hemispherical space;
FIG. 5 is a schematic diagram of a diffuse reflecting surface receiving light from an NLOS scene;
FIG. 6 is a schematic diagram of an inverse perspective transformation region within the camera FOV;
fig. 7 is a schematic diagram of a diffuse reflection image matrix and an NLOS scene image matrix.
Detailed Description
The technical solution of the present invention will be further explained with reference to the accompanying drawings and examples.
The basic idea of the invention is as follows: the invention reconstructs a non-line-of-sight scene from diffuse reflection intensity information. No expensive imaging equipment is needed; an ordinary camera records the intensity information (grayscale or RGB) that the non-line-of-sight scene reflects onto a diffusely reflecting surface, and the backward propagation of the light corresponding to each pixel of the diffuse reflection image is then derived from the light-propagation principle of the ray tracing algorithm, thereby reconstructing the image of the non-line-of-sight scene.
Accordingly, this embodiment uses the projection that the non-line-of-sight scene produces on the diffusely reflecting surface in front of it. Because of the anisotropic scattering of the scene's light sources and the irregular reflection of the diffuse surface, this projection has lost the specific detail of the original scene, yet it still carries part of the scene's information. Using this information, together with the simulated ray propagation paths of the ray tracing algorithm and the bidirectional reflectance distribution function of the diffuse surface, the original image of the non-line-of-sight scene can still be restored.
In the ray tracing algorithm, the reflected light entering the camera from the diffusely reflecting surface is described by the rendering equation of computer graphics:

L_o(p, w_o) = ∫_{2π} f_r(p, w_i, w_o) L_i(p', −w_i) cos θ_i dw_i    (1)

where p is a point on the diffuse material; p' is a point on another object in space; w_i is the direction vector of the incident ray; w_o is the direction vector of the reflected ray; L_o(p, w_o) is the radiance reflected from point p of the diffuse surface along direction w_o; f_r(p, w_i, w_o) is the value of the bidirectional reflectance distribution function (BRDF) of the diffuse material at p for incident direction w_i and reflected direction w_o; L_i(p', −w_i) is the radiance of the light emitted (or reflected) from point p' and propagating along w_i; and θ_i is the angle between the incident ray direction and the normal of the diffuse material at p. Integrating dw_i over the 2π solid angle accumulates the radiance of all rays incident on p from the whole hemispherical space, as shown in FIG. 4.
According to the definition of solid angle,

dw = cos θ' dA / |p − p'|²    (2)

the solid-angle integral of equation (1) can be converted into an area integral:

L_o(p, w_o) = ∫_s f_r(p, w_i, w_o) L_i(p', −w_i) cos θ_i cos θ' / |p − p'|² dA    (3)
where θ' is the angle between the incident ray and the normal of the NLOS scene surface. For the scene shown in FIG. 1, it can be assumed that the only ambient light incident on the diffuse surface comes from the NLOS scene, and all other reflected light is ignored: if only the objects in the NLOS scene are illuminated, the other reflections are far weaker than the light arriving directly from those objects, so ignoring them barely affects the result. This is in effect the importance-sampling idea of the ray tracing algorithm. Considering only the rays arriving directly from the NLOS scene, as shown in FIG. 5, the area integral in equation (3) ranges over the area of the NLOS scene. In practice, when reconstructing the NLOS scene image, the scene is assumed to be a two-dimensional rectangular image, so the integration region of equation (3) finally becomes a rectangle. For ease of calculation and understanding, the NLOS scene can be taken to be an image shown on a display screen, and the task of non-line-of-sight imaging is then to reconstruct this image from the diffuse reflection information.
Integration is inconvenient in an actual program and, moreover, the ray tracing algorithm discretizes each ray. The area integral of equation (3) is therefore converted into a summation by two-dimensional Monte Carlo integration:

L_o(p, w_o) ≈ (s/N) Σ_{j=1}^{N} f_r(p, w_i, w_o) L_i(p'_j, −w_i) cos θ_i cos θ' / |p − p'_j|²    (4)
in the ray tracing algorithm, the bi-directional reflectance distribution function of an ideal diffuse surface is a constant independent of both incident and reflected light and surface points:
Figure BDA0002388061830000052
where ρ is the reflectivity of the diffuse surface. Equation (4) therefore takes the more compact form:

L_o(p, w_o) = (ρ s / (π N)) Σ_{j=1}^{N} L_i(p'_j, −w_i) cos θ_i cos θ' / |p − p'_j|²    (6)
In equation (6), p is a point on the diffusely reflecting surface, i.e. the position in real space corresponding to a pixel of the diffuse reflection image captured by the camera, and p' is a point of the NLOS scene. Equation (6) thus establishes the transfer from the radiance (or RGB information) of a point of the NLOS scene image to be restored to the radiance of a point of the diffuse reflection image captured by the camera.
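For illustration, the per-element transfer of equation (6) can be written as a short Python sketch; the 3D coordinates of p and p', the wall normal n_wall, and the scene normal n_scene are assumed inputs here, not quantities prescribed by the patent:

```python
import numpy as np

def transfer_element(p, p_prime, n_wall, n_scene, rho, s, N):
    """Weight of scene point p' in the radiance at wall point p, equation (6):
    (rho * s / (pi * N)) * cos(theta_i) * cos(theta') / |p - p'|^2."""
    v = p_prime - p                                # from wall point to scene point
    r2 = float(np.dot(v, v))                       # squared distance |p - p'|^2
    w = v / np.sqrt(r2)                            # unit direction wall -> scene
    cos_theta_i = max(np.dot(w, n_wall), 0.0)      # incidence angle at the wall
    cos_theta_p = max(np.dot(-w, n_scene), 0.0)    # emission angle at the scene
    return rho * s / (np.pi * N) * cos_theta_i * cos_theta_p / r2
```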
As shown in FIG. 1, in one example an occluder stands between the camera and the NLOS scene, so the camera cannot capture the NLOS scene directly, but it can capture the projected image the scene produces on the diffusely reflecting surface. Since the projection information concentrates on the part of the diffuse surface near the NLOS scene, the camera should be rotated by a certain angle toward the NLOS scene to capture more of that information, as shown in FIG. 2. Because the camera is deflected, the captured image carries depth information: the camera FOV on the wall is not a rectangular region. For ease of computation, a rectangular region is extracted within the FOV; it should contain as much information from the NLOS scene as possible and therefore lie as close as possible to the NLOS-scene side. Assuming the region has been marked on the diffuse surface beforehand, its content can be extracted by inverse perspective transformation and transformed into a rectangular image, as shown in FIG. 6.
Assume the extracted part is an image of resolution (m, n); if (m, n) is large, the resolution can be reduced by down-sampling during the actual calculation. Assume the NLOS scene image to be restored has resolution (x, y), as shown in FIG. 6; this resolution can be chosen as needed, but making it too large slows the computation. In practice x and y are kept on the order of 10 to 100, since larger values greatly increase the computation time.
Any pixel p_i of the diffuse reflection image, i ∈ [1, m·n], receives contributions from all pixels p'_j of the NLOS scene image, j ∈ [1, x·y], which by equation (6) can be written as:

L_o(p_i, w_o) = (ρ s / (π · x·y)) Σ_{j=1}^{x·y} L_i(p'_j, −w_i) cos θ_i cos θ' / |p_i − p'_j|²    (7)

Mathematically, the two images can be regarded as two matrices, and a transformation between them can be described by a transformation matrix. A transformation matrix T is therefore defined to describe the transfer between the two images. From the formula above, each element of T describes the transfer from p'_j to p_i, where i ∈ [1, m·n] and j ∈ [1, x·y], so the dimensions of T are [m·n, x·y]. This matrix is very large, which is why the diffuse reflection image must be down-sampled and the resolution of the NLOS scene image to be recovered must not be set too high, otherwise the computation time becomes very long. The transformation matrix T is given by:

T_ij = ρ s cos θ_i cos θ' / (π · x·y · |p_i − p'_j|²)    (8)
after the transformation matrix is established, the transformation relationship between the diffuse reflection image and the NLOS scene image can be represented by matrix operation, assuming that the diffuse reflection image is a matrix d (d dimension needs to be transformed into [ m × n,1] during calculation) and the NLOS scene image is a matrix f (f dimension needs to be transformed into [ x × y,1] during calculation), the relationship is shown as the following formula,
d=Tf (9)
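Under the same assumptions (pixel-center coordinates and surface normals known in a common frame), the whole matrix T of equation (8) can be assembled in vectorized form; this is an illustrative sketch, not the patent's reference implementation:

```python
import numpy as np

def build_transport_matrix(wall_pts, scene_pts, n_wall, n_scene, rho, s):
    """Assemble T of shape [m*n, x*y] per equation (8).
    wall_pts: (m*n, 3) pixel centers on the diffuse surface.
    scene_pts: (x*y, 3) pixel centers of the assumed planar NLOS scene."""
    N = scene_pts.shape[0]                                 # x*y scene samples
    v = scene_pts[None, :, :] - wall_pts[:, None, :]       # (m*n, x*y, 3)
    r2 = np.sum(v * v, axis=-1)                            # squared distances
    w = v / np.sqrt(r2)[..., None]                         # unit directions
    cos_i = np.clip(w @ n_wall, 0.0, None)                 # cos(theta_i) at wall
    cos_p = np.clip(-(w @ n_scene), 0.0, None)             # cos(theta') at scene
    return rho * s / (np.pi * N) * cos_i * cos_p / r2

# A diffuse image flattened to d with shape (m*n,) then satisfies d ≈ T @ f.
```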
However, in actual operation the available information is the diffuse reflection image d while the NLOS scene image f is the unknown, so the relation above must be inverted. Two cases must be distinguished:
(1) If the transformation matrix is invertible, the matrix f of the NLOS scene image is solved from equation (10):

f = T⁻¹ d    (10)
(2) If the transformation matrix is not invertible, an optimization problem must be formulated to solve for the matrix f of the NLOS scene image. In this case the pseudo-inverse T⁺ of the transformation matrix can be computed first; then T⁺ d is a least-squares approximation of f. In addition, an optimization problem is formulated to solve for f:

f = arg min( ‖Tf − d‖₂ )    (11)
However, this problem constrains the convergence of f too loosely. Following the regularization techniques of convex optimization, and exploiting the small total variation common to natural images, total-variation regularization is introduced to further constrain convergence. In addition, since f is an image, it is restricted to a positive range to accelerate convergence; a barrier ("barrel") function confines f to between 0 and 10000:

f = arg min( ‖Tf − d‖₂ + λ₁‖f‖_TV + λ₂B )    (12)

where ‖f‖_TV is the total variation of f:

‖f‖_TV = Σ_{i=1}^{x−1} Σ_{j=1}^{y−1} √( (f_{i+1,j} − f_{i,j})² + (f_{i,j+1} − f_{i,j})² )    (13)

and B is the barrier function:

B = Σ_{i,j} b(f_{i,j}),  where b(f_{i,j}) = 0 if 0 ≤ f_{i,j} ≤ 10000 and +∞ otherwise    (14)
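As a simplified numerical sketch of this reconstruction (the TV term of equation (12) is omitted for brevity; the clip to [0, 10000] plays the role of the barrier, and the step size is an assumed stable choice):

```python
import numpy as np

def solve_nlos(T, d, iters=500, step=None, lo=0.0, hi=10000.0):
    """Projected Landweber iteration: f <- clip(f + step * T^T (d - T f))."""
    if step is None:
        step = 1.0 / (np.linalg.norm(T, 2) ** 2)   # 1 / (largest singular value)^2
    f = np.zeros(T.shape[1])
    for _ in range(iters):
        f = np.clip(f + step * (T.T @ (d - T @ f)), lo, hi)
    return f

# When T has full column rank, np.linalg.pinv(T) @ d gives the least-squares
# estimate of equation (11) directly.
```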
the specific operation steps of the embodiment include:
step 1: acquiring a diffuse image generated by light rays from the NLOS scene reflected on the diffuse reflection surface by the camera; specifically, when a camera shoots a diffuse reflection image, in order to acquire more diffuse reflection information, a certain deflection angle is formed towards one side of an NLOS (non line of sight) through an angle of view, the shot image has certain depth of field information, the shot rectangular image corresponds to a rectangular area in an actual space, a part, corresponding to the rectangular area of an actual scene, in the image needs to be extracted in a subsequent processing process, and the part is required to contain diffuse reflection information as much as possible. In practical application, the diffuse reflection image shot by the camera may be one of a single-channel gray image, an RGB three-channel color image and an RGBG bayer filter mode four-channel color image, and different types of images may have differences in algorithm processing depending on the type of the camera used in shooting;
Step 2: the part of the image corresponding to the rectangular region of the real scene is extracted by inverse perspective transformation and resampling. The reference points for the inverse perspective transformation can be the four corners of an existing rectangle visible in the scene, or the four corner vertices of a rectangle calibrated on the diffuse surface in advance. The inverse perspective transformation can use the relevant OpenCV API, or an ROI of the image can be extracted and interpolated, transforming it into a rectangular image, as in the sketch below;
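An illustrative OpenCV version of this step; the corner coordinates, file name, and output size are assumptions for the example, not values fixed by the method:

```python
import cv2
import numpy as np

# Corners of the marked rectangle as seen in the photo (ordered TL, TR, BR, BL).
src = np.float32([[412, 310], [980, 295], [1010, 770], [385, 760]])
m, n = 256, 256                                   # target rectangular resolution
dst = np.float32([[0, 0], [n - 1, 0], [n - 1, m - 1], [0, m - 1]])

img = cv2.imread("diffuse.png")                   # captured diffuse reflection image
H = cv2.getPerspectiveTransform(src, dst)         # inverse perspective mapping
rect = cv2.warpPerspective(img, H, (n, m))        # resampled rectangular image
```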
Step 3: the size and spatial position of the rectangle in the real scene corresponding to the resampled image are determined. Once they are fixed, the number of pixels in the region gives the real-space coordinate of the center of each pixel. These are relative coordinates: the spatial coordinate system can be chosen freely for the calculation, but it must be the same system used later when fixing the position and size of the NLOS scene. In addition, if the captured image has too many pixels, it is first down-sampled to a lower resolution, avoiding excessive computation time while losing little accuracy, and the real-space coordinates of the pixel centers are determined afterwards;
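A small sketch of this coordinate computation; the rectangle's origin, its in-plane unit axes, and its physical size are assumed to have been measured in the chosen frame:

```python
import numpy as np

def pixel_centers(origin, u_axis, v_axis, width, height, n_cols, n_rows):
    """3D center coordinates of an n_rows x n_cols pixel grid on a rectangle
    of physical size width x height spanned by unit vectors u_axis, v_axis."""
    us = (np.arange(n_cols) + 0.5) * width / n_cols    # in-plane u offsets
    vs = (np.arange(n_rows) + 0.5) * height / n_rows   # in-plane v offsets
    uu, vv = np.meshgrid(us, vs)
    pts = origin + uu[..., None] * u_axis + vv[..., None] * v_axis
    return pts.reshape(-1, 3)                          # (n_rows*n_cols, 3)
```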
Step 4: the size and spatial position of the rectangular region occupied by the NLOS scene to be imaged are determined. The actual NLOS scene may consist of three-dimensional objects, but in the calculation it can be assumed to be a planar rectangular image, which simplifies the computation without affecting the practical imaging result. The size and spatial position of this rectangular region can either be assumed in the calculation or calibrated beforehand; they affect only the extent of the finally imaged scene, not its content. When the exact position of the scene is unknown, the rectangle can simply be assumed large enough to contain the NLOS scene to be reconstructed. As stated above, the size and spatial position of this region and of the real rectangle corresponding to the diffuse image are determined in the same coordinate system.
Step 5: the light transport matrix is built from the sizes and spatial positions of the real rectangles corresponding to the diffuse image and to the NLOS scene. The dimensions of the light transport matrix depend on the total pixel counts of the rectangular image and of the NLOS scene to be recovered: the number of rows equals the total pixel count of the diffuse reflection image, and the number of columns equals the total pixel count of the NLOS scene to be recovered. Each element of the light transport matrix represents the transfer from one pixel of the NLOS scene image to one pixel of the diffuse reflection image, computed with the radiance propagation formula of the ray tracing algorithm.
Step 6: the optimization problem is formulated from the resampled diffuse image and the light transport matrix, and solved. Its formulation depends on the type of the captured diffuse reflection image. For a single-channel grayscale image, only one optimization problem is needed. For an RGB three-channel color image, the three channels are separated and an optimization problem is formulated for each. For an RGBG Bayer-filter four-channel color image, the four channels are separated, the two G channels are averaged to obtain three channels, and an optimization problem is formulated for each of the three. For color images, the three solved channels are finally combined into the recovered color image.
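An illustrative sketch of the per-channel handling; an RGGB mosaic layout is assumed (actual Bayer patterns vary by sensor), and T is assumed to have been built at the per-channel resolution:

```python
import numpy as np

def solve_color(raw, T, solve):
    """Split an RGGB Bayer mosaic, average the two G samples, reconstruct each
    channel with the provided solver, and restack the result as RGB."""
    r  = raw[0::2, 0::2].astype(float)
    g1 = raw[0::2, 1::2].astype(float)
    g2 = raw[1::2, 0::2].astype(float)
    b  = raw[1::2, 1::2].astype(float)
    g = (g1 + g2) / 2.0                                # average the two G channels
    channels = [solve(T, c.reshape(-1)) for c in (r, g, b)]
    return np.stack(channels, axis=-1)                 # flattened RGB result
```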

Claims (7)

1. A non-line-of-sight imaging method based on a ray tracing algorithm, characterized by comprising the following steps:
Step 1: acquiring, with a camera, a diffuse reflection image produced by an NLOS scene on a diffusely reflecting surface;
Step 2: extracting, by inverse perspective transformation and resampling, the part of the diffuse reflection image corresponding to a rectangular region in real space, and transforming that part into a rectangular image of specified resolution;
Step 3: determining the size and spatial position of the rectangular image obtained in step 2, and computing the spatial coordinate of the center of each pixel from the number of pixels in the rectangular image;
Step 4: determining the size and spatial position of the region occupied by the NLOS scene to be imaged;
Step 5: building a light transport matrix from the size and spatial position of the rectangular image obtained in step 3 and of the NLOS scene region obtained in step 4; the dimensions of the light transport matrix depend on the total pixel count of the rectangular image and of the NLOS scene to be imaged: the number of rows equals the total pixel count of the diffuse reflection image obtained in step 1 and the number of columns equals the total pixel count of the NLOS scene to be imaged; each element of the light transport matrix represents the transfer from one pixel of the NLOS scene to be imaged to one pixel of the diffuse reflection image, computed with the radiance propagation formula of the ray tracing algorithm;
Step 6: formulating an optimization problem from the rectangular image obtained in step 2 and the light transport matrix obtained in step 5, and solving it to obtain the NLOS scene image.
2. The non-line-of-sight imaging method based on a ray tracing algorithm according to claim 1, characterized in that: before step 3 is performed, the rectangular image may be down-sampled according to actual requirements.
3. The non-line-of-sight imaging method based on a ray tracing algorithm according to claim 1, characterized in that: the size and spatial position of the region occupied by the NLOS scene to be imaged and those of the rectangular image are determined in the same coordinate system.
4. The non-line-of-sight imaging method based on a ray tracing algorithm according to claim 1, characterized in that the transfer relation is represented by a transformation matrix T whose elements are

T_ij = ρ s cos θ_i cos θ' / (π · x·y · |p_i − p'_j|²)    (8)

where p_i is any pixel of the diffuse reflection image, i ∈ [1, m·n]; p'_j is any pixel of the NLOS scene image to be imaged, j ∈ [1, x·y]; ρ is the reflectivity of the diffuse surface; θ' is the angle between the incident ray and the normal of the NLOS scene surface; w_i is the direction vector of the incident ray; w_o is the direction vector of the reflected ray; L_o(p, w_o) is the radiance reflected from point p of the diffuse surface along direction w_o; θ_i is the angle between the incident ray direction and the normal of the diffuse material at p; and s is the area of the NLOS scene surface to be recovered;

assuming the diffuse reflection image is represented as an image matrix d and the NLOS scene to be imaged as an image matrix f, the relationship is

d = Tf    (9).
5. the ray tracing algorithm-based non-visual field imaging method according to claim 4, wherein: the step 6 specifically includes the following substeps:
judging whether the conversion matrix T is reversible, if so, calculating according to the formula (9) to obtain an image matrix f corresponding to the NLOS scene to be imaged;
f=T -1 d (10)
if the current is not reversible, the following steps are carried out:
obtaining a pseudo-inverse of the transformation matrix
Figure FDA0002388061820000021
Figure FDA0002388061820000022
A least squares approximation of f;
establishing a corresponding optimization problem according to the type of the diffuse reflection image, and solving to obtain an image matrix f corresponding to the NLOS scene to be imaged:
f=argmin(||Tf-d|| 21 ||f|| TV2 B) (12)
wherein,||f|| TV The total variation of the image matrix f is calculated as follows:
Figure FDA0002388061820000023
b is a barrel function, and the calculation method is as follows:
Figure FDA0002388061820000024
wherein λ is 12 For regularizing coefficients, f i,j The i rows and j columns of pixel values of the image matrix f, x and y respectively represent the total row number and the total column number of the image matrix f.
6. The non-line-of-sight imaging method based on a ray tracing algorithm according to claim 5, characterized in that: the diffuse reflection image is any one of a single-channel grayscale image, an RGB three-channel color image, and an RGBG four-channel color image in Bayer filter mode.
7. The non-line-of-sight imaging method based on a ray tracing algorithm according to claim 6, characterized in that:
if the diffuse reflection image is a single-channel grayscale image, one optimization problem is established and solved to obtain the matrix f of the NLOS scene image; if the diffuse reflection image is an RGB three-channel color image, the data of the three channels are separated, an optimization problem is established and solved for each channel, and the three solved channels are combined into the matrix f of the NLOS scene image; and if the diffuse reflection image is an RGBG four-channel color image in Bayer filter mode, the data of the four channels are separated, the two G channels are averaged to obtain three channels of data, an optimization problem is established and solved for each of the three channels, and the solved channels are combined into the matrix f of the NLOS scene image.
Application CN202010104486.9A, filed 2020-02-20 (priority 2020-02-20): Non-line-of-sight imaging method based on ray tracing algorithm. Granted as CN111340929B (Active).

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202010104486.9A | 2020-02-20 | 2020-02-20 | Non-line-of-sight imaging method based on ray tracing algorithm

Publications (2)

Publication Number | Publication Date
CN111340929A (en) | 2020-06-26
CN111340929B (en) | 2022-11-25

Family ID: 71187819

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202010104486.9A (Active) | Non-line-of-sight imaging method based on ray tracing algorithm | 2020-02-20 | 2020-02-20

Country Status (1)

Country | Link
CN | CN111340929B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113204010B (en) * 2021-03-15 2021-11-02 锋睿领创(珠海)科技有限公司 Non-visual field object detection method, device and storage medium
CN113109787B (en) * 2021-04-15 2024-01-16 东南大学 Non-visual field imaging device and method based on thermal imaging camera
CN113109948A (en) * 2021-04-15 2021-07-13 东南大学 Polarization non-visual field imaging method based on diffuse reflection surface
CN113093389A (en) * 2021-04-15 2021-07-09 东南大学 Holographic waveguide display device based on non-visual field imaging and method thereof
CN113052833A (en) * 2021-04-20 2021-06-29 东南大学 Non-vision field imaging method based on infrared thermal radiation
CN113138027A (en) * 2021-05-07 2021-07-20 东南大学 Far infrared non-vision object positioning method based on bidirectional refractive index distribution function
CN113411508B (en) * 2021-05-31 2022-07-01 东南大学 Non-vision field imaging method based on camera brightness measurement
CN113344774A (en) * 2021-06-16 2021-09-03 东南大学 Non-visual field imaging method based on depth convolution inverse graph network
US20240202948A1 (en) * 2021-07-05 2024-06-20 Shanghaitech University Non-line-of-sight imaging via neural transient field

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103198511A (en) * 2011-09-15 2013-07-10 佳能株式会社 Image processing apparatus and image processing method
JP2015114775A (en) * 2013-12-10 2015-06-22 キヤノン株式会社 Image processor and image processing method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10776985B2 (en) * 2018-03-17 2020-09-15 Nvidia Corporation Reflection denoising in ray-tracing applications

Also Published As

Publication number Publication date
CN111340929A (en) 2020-06-26


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant