CN111340929A - Non-vision field imaging method based on ray tracing algorithm - Google Patents
- Publication number
- CN111340929A (application CN202010104486.9A)
- Authority
- CN
- China
- Prior art keywords
- image
- diffuse reflection
- matrix
- scene
- nlos scene
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/06—Ray-tracing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/16—Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/003—Reconstruction from projections, e.g. tomography
- G06T11/006—Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/10—Geometric effects
- G06T15/20—Perspective computation
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Mathematical Analysis (AREA)
- Mathematical Optimization (AREA)
- Pure & Applied Mathematics (AREA)
- Algebra (AREA)
- Computational Mathematics (AREA)
- Computing Systems (AREA)
- Data Mining & Analysis (AREA)
- Computer Graphics (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- General Engineering & Computer Science (AREA)
- Geometry (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a non-line-of-sight (NLOS) imaging method based on a ray tracing algorithm. An ordinary camera captures the diffuse reflection produced by an NLOS scene on a diffusely reflecting surface. Borrowing the core idea of the ray tracing algorithm, the method analyzes the path taken by the ray corresponding to each pixel of the diffuse reflection image from the NLOS scene to the camera, computes this propagation path in reverse, and combines it with the bidirectional reflectance distribution function (BRDF) of the diffuse surface to recover an image of the NLOS scene object. The method requires no other expensive equipment or instruments, and greatly reduces hardware cost compared with NLOS imaging methods that require emitted laser light and complex imaging devices.
Description
Technical Field
The invention discloses a non-line-of-sight (NLOS) imaging method based on a ray tracing algorithm, belonging to the field of computational imaging.
Background
As imaging devices have proliferated, imaging modalities have grown richer. Since the emergence in 2009 of non-line-of-sight imaging methods using lasers and gated ICCDs as imaging devices, NLOS imaging has shown application potential in military reconnaissance, hazardous-scene detection, and blind-spot detection for autonomous driving. With continuing innovation by researchers, NLOS imaging methods have multiplied, but most still rely on a laser as an active light source, which greatly limits application outside the laboratory. Only in the last two years has passive NLOS imaging begun to emerge, again unlocking its potential for detection applications.
Disclosure of Invention
The purpose of the invention is as follows: in view of the problems and deficiencies of the prior art, the invention provides a non-line-of-sight imaging method based on a ray tracing algorithm that uses no active light source such as a laser, is low-cost, and remains applicable outside the laboratory.
The technical scheme is as follows: a non-line-of-sight imaging method based on a ray tracing algorithm, comprising the following steps:
step 1: acquiring, with a camera, the diffuse reflection image produced by an NLOS scene on a diffusely reflecting surface;
step 2: extracting from the diffuse reflection image the portion corresponding to a rectangular area in actual space, using inverse perspective transformation and resampling, and converting that portion into a rectangular image of specified resolution;
step 3: determining the size and spatial position of the rectangular image obtained in step 2, and computing the spatial coordinate of the center of each pixel from the number of pixels in the rectangular image;
step 4: determining the size and spatial position of the region where the NLOS scene to be imaged is located;
step 5: establishing a light transport matrix from the size and spatial position of the rectangular image obtained in step 3 and the size and spatial position of the NLOS scene region obtained in step 4; the dimensions of the light transport matrix depend on the total pixel count of the rectangular image and the total pixel count of the NLOS scene to be imaged: the number of rows is the total pixel count of the diffuse reflection image obtained in step 1, and the number of columns is the total pixel count of the NLOS scene to be imaged; each element of the light transport matrix represents the conversion from one pixel of the NLOS scene to be imaged to one pixel of the diffuse reflection image, computed by the radiance propagation formula of the ray tracing algorithm;
step 6: establishing an optimization problem from the rectangular image obtained in step 2 and the light transport matrix obtained in step 5, and solving it to obtain the NLOS scene image.
Further, before step 3 is executed, the rectangular image may be down-sampled according to actual requirements.
Further, the size and spatial position of the region where the NLOS scene to be imaged is located and the size and spatial position of the rectangular image are determined in the same coordinate system.
Further, the conversion relation is represented by a conversion matrix T, each element of which is

T(i, j) = (ρ/π) · cosθ_i cosθ' / ||p_i - p'_j||² · s/(x·y) (8)

where p_i is any pixel point on the diffuse reflection image, with i ∈ [1, m·n]; p'_j is any pixel point on the NLOS scene image to be imaged, with j ∈ [1, x·y]; ρ is the reflectivity of the diffusely reflecting surface; θ' is the angle between the incident ray and the normal of the NLOS scene surface; w_i is the direction vector of the incident ray; w_o is the direction vector of the reflected ray; L_o(p, w_o) is the brightness of light reflected along direction w_o at point p of the diffuse surface; θ_i is the angle between the incident ray direction vector and the normal vector of the diffuse material at p; and s is the area of the NLOS scene surface to be solved;
assuming that the diffuse reflection image is represented as an image matrix d and the NLOS scene to be imaged is represented as an image matrix f, the relationship is expressed as follows:
d=Tf (9)
further, the step 6 specifically includes the following sub-steps:
judging whether the conversion matrix T is reversible, if so, calculating according to the formula (9) to obtain an image matrix f corresponding to the NLOS scene to be imaged;
f=T-1d (10)
if the current is not reversible, the following steps are carried out:
establishing a corresponding optimization problem according to the type of the diffuse reflection image, and solving to obtain an image matrix f corresponding to the NLOS scene to be imaged:
f = argmin(||Tf - d||² + λ₁||f||_TV + λ₂B) (12)
where ||f||_TV is the total variation of the image matrix f, computed as
||f||_TV = Σ_{i=1}^{x-1} Σ_{j=1}^{y-1} √((f_{i+1,j} - f_{i,j})² + (f_{i,j+1} - f_{i,j})²) (13)
B is a barrel function that penalizes pixel values of f outside the range [0, 10000];
λ₁ and λ₂ are regularization coefficients, f_{i,j} is the pixel value at row i, column j of the image matrix f, and x and y are the total numbers of rows and columns of f, respectively.
Further, the diffuse reflection image comprises any one of a single-channel gray-scale image, an RGB three-channel color image and an RGBG four-channel color image in a Bayer filtering mode;
If the diffuse reflection image is a single-channel gray-scale image, one optimization problem is established and solved to obtain the matrix f of the NLOS scene image; if the diffuse reflection image is an RGB three-channel color image, the three channels are separated, an optimization problem is established and solved for each channel, and the three solved channels are combined to obtain the matrix f of the NLOS scene image; and if the diffuse reflection image is an RGBG four-channel color image in Bayer filter mode, the four channels are separated, the two G channels are averaged to give three channels of data, an optimization problem is established and solved for each of the three channels, and the solved channels are combined to obtain the matrix f of the NLOS scene image.
Advantageous effects: compared with traditional active non-line-of-sight imaging methods, the NLOS imaging method based on the ray tracing algorithm requires neither an expensive active laser source nor special image acquisition equipment; and unlike passive imaging based on polarization information, which requires a polarization acquisition device, or passive imaging based on coherence information, which requires an interferometer, it needs only an ordinary camera.
Drawings
FIG. 1 is a side view of a non-line-of-sight scene according to an embodiment of the invention;
FIG. 2 is a top view of a non-line-of-sight scene according to an embodiment of the invention;
FIG. 3 is a diagram illustrating brightness propagation calculations;
FIG. 4 is a schematic diagram of a diffuse reflective surface receiving incident light from a hemispherical space;
FIG. 5 is a schematic diagram of a diffuse reflecting surface receiving light from an NLOS scene;
FIG. 6 is a schematic diagram of an inverse perspective transformation region within the camera FOV;
fig. 7 is a schematic diagram of a diffuse reflection image matrix and an NLOS scene image matrix.
Detailed Description
The technical solution of the present invention will be further explained with reference to the accompanying drawings and examples.
The basic idea of the invention is as follows: the invention reconstructs a non-line-of-sight scene from diffuse reflection intensity information without expensive imaging equipment. An ordinary camera records the intensity information (gray-scale or RGB) of the NLOS scene reflected on a diffusely reflecting surface; the backward propagation of the light corresponding to each pixel of the diffuse reflection image is then derived using the light propagation principle of the ray tracing algorithm, thereby reconstructing an image of the NLOS scene.
Accordingly, this embodiment exploits the projection produced by the non-line-of-sight scene on the diffusely reflecting surface in front of it. Owing to the anisotropic scattering of the scene's light sources and the irregular reflection of the diffuse surface, this projection has lost the specific detail of the original scene, but it still carries part of the NLOS scene's information. Using this information, combined with the simulated ray propagation paths of the ray tracing algorithm and the bidirectional reflectance distribution function of the diffuse surface, the original image of the NLOS scene can still be restored.
In a ray tracing algorithm, the light reflected from a diffusely reflecting surface into the camera is described by the rendering equation of computer graphics:

L_o(p, w_o) = ∫_{2π} f_r(p, w_i, w_o) · L_i(p', -w_i) · cosθ_i dw_i (1)

where p is a point on the diffuse material; p' is a point on another object in space; w_i is the direction vector of the incident ray; w_o is the direction vector of the reflected ray; L_o(p, w_o) is the brightness of light reflected along direction w_o at point p of the diffuse surface; f_r(p, w_i, w_o) is the value of the bidirectional reflectance distribution function (BRDF) of the diffuse material at p for incident direction w_i and reflected direction w_o; L_i(p', -w_i) is the brightness of light emitted (or reflected) from p' and propagating along w_i; θ_i is the angle between the incident direction vector and the normal vector of the diffuse material at p; and the integral of dw_i over a 2π solid angle accounts for incident light arriving at p from the entire hemispherical space, as shown in fig. 4.
According to the definition of solid angle,

dw = cosθ' dA / ||p - p'||² (2)

the solid-angle integral of equation (1) can be converted into an area integral:

L_o(p, w_o) = ∫_A f_r(p, w_i, w_o) · L_i(p', -w_i) · cosθ_i cosθ' / ||p - p'||² dA (3)
in the formula, θ' represents the angle between the incident ray and the normal of the NLOS scene surface. For the scene shown in fig. 1, it can be assumed that the ambient light incident on the diffuse reflective surface is only light from the NLOS scene, and other reflected light is ignored. This is because if only the objects in the NLOS scene are illuminated, the other reflected light is much weaker than the light directly from the objects in the NLOS scene, so the other reflected light can be ignored while ensuring the calculation results, which is actually based on the importance sampling theorem in the ray tracing algorithm. So if only the incident rays directly from the NLOS scene are considered, as shown in fig. 5. The range of the area integral in equation (3) above is then the area of the NLOS scene. In practice, when reconstructing an NLOS scene image, the NLOS scene is assumed to be a two-dimensional rectangular image, so the integration region of formula (3) finally becomes a rectangular region. For ease of calculation and understanding, it can be assumed that the NLOS scene is the image displayed on a display screen, and that the task of non-field-of-view imaging is to reconstruct this image using diffuse reflectance information.
Integration is inconvenient in actual programmed calculation; moreover, the ray tracing algorithm in any case operates on discretized rays. The area integral of equation (3) is therefore converted into a summation by two-dimensional Monte Carlo integration:

L_o(p, w_o) ≈ (s/N) Σ_{j=1}^{N} f_r(p, w_i, w_o) · L_i(p'_j, -w_i) · cosθ_i cosθ' / ||p - p'_j||² (4)

where the N sample points p'_j lie on the NLOS scene surface of area s.
in the ray tracing algorithm, the bi-directional reflectance distribution function of an ideal diffuse surface is a constant independent of both incident and reflected light and surface points:
where p represents the reflectivity of the diffusive reflective surface, thus equation (4) can be converted to a more compact form:
for formula (6), the point p is a point on the diffuse reflection surface, that is, a position in the actual space corresponding to each pixel point in the diffuse reflection image shot by the camera, and the point p' is a point on the NLOS scene. Equation (6) above establishes a conversion relationship between the luminance (or RGB information) of a point on an image on the NLOS scene to be restored to the luminance of a point on the diffuse reflection image captured by the camera.
As shown in fig. 1, in one example there is an occluder between the camera and the NLOS scene, so the camera cannot directly capture images of the NLOS scene; it can, however, capture the projected image that the NLOS scene produces on the diffusely reflecting surface. Since the projected information of the NLOS scene concentrates on the part of the diffuse surface near the scene, the camera should be rotated by a certain angle toward the NLOS scene to capture more of that information, as shown in fig. 2. Because of this rotation, the captured image carries depth information: the camera's FOV on the wall is not a rectangular area. For ease of computation, a rectangular region within the FOV is extracted; this region should contain as much information from the NLOS scene as possible, so it should lie as close as possible to the side nearest the NLOS scene. Assuming the region has been calibrated on the diffuse surface in advance, its content can be extracted by inverse perspective transformation and converted into a rectangular image, as shown in fig. 6.
Assume the extracted part is an image of resolution (m, n). In the actual calculation, if (m, n) is large, the resolution can be reduced by down-sampling. Assume the resolution of the NLOS scene image to be restored is (x, y), as shown in fig. 6; this resolution can be chosen as needed, but if it is too large it slows the computation considerably. In practice, both x and y are kept on the order of 10 to 100.
The brightness of any pixel p_i (i ∈ [1, m·n]) of the diffuse reflection image receives contributions from all pixels p'_j (j ∈ [1, x·y]) of the NLOS scene image; using equation (6), this can be written as

L_o(p_i, w_o) = (ρ/π) Σ_{j=1}^{x·y} L_i(p'_j, -w_i) · cosθ_i cosθ' / ||p_i - p'_j||² · s/(x·y) (7)
while two images can be mathematically viewed as two matrices, if there is a transformation relationship between the two matrices, then one transformation matrix can be used to describe the transformation relationship. Therefore, a transformation matrix T is defined to describe the transformation relationship between the two images. From the above formula, each element of the transformation matrix T should describe piTo p'jThe luminance conversion relationship of (1), wherein i ∈ [1, m n%],j∈[1,x*y]Then the dimensions of the transformation matrix should be [ m n, x y [ ]]The dimension of this matrix is very large, so down-sampling is required for the diffuse reflection image, and the resolution of the NLOS scene graph to be restored cannot be set too large, otherwise the computation time would be very long. The transformation matrix T should be as follows:
after the transformation matrix is established, the transformation relationship between the diffuse reflection image and the NLOS scene image can be represented by matrix operation, assuming that the diffuse reflection image is a matrix d (d dimension needs to be transformed into [ m × n, 1] during calculation) and the NLOS scene image is a matrix f (f dimension needs to be transformed into [ x × y, 1] during calculation), the relationship is shown as the following formula,
d=Tf (9)
however, in the actual operation process, since the information that can be acquired is the diffuse reflection image d and the NLOS scene image f needs to be solved, the above formula must be inverted, but this needs to be discussed in two cases:
(1) when the conversion matrix is invertible, the matrix f of the NLOS scene image is solved directly according to formula (10):
f = T⁻¹ d (10)
(2) when the transformation matrix is not invertible, an optimization problem must be established to solve for the matrix f of the NLOS scene image. The pseudo-inverse T⁺ of the transformation matrix can first be computed; T⁺d is then a least-squares approximation of f. The optimization problem established for f is:
f = argmin(||Tf - d||²) (11)
However, the above optimization problem constrains the convergence of f too weakly. Following the regularization techniques of convex optimization, and exploiting the small total variation commonly exhibited by images, a total-variation regularization term is introduced to further constrain convergence. In addition, since f is an image, convergence can be accelerated by restricting f to a positive range; a barrel function can be added to confine f between 0 and 10000:
f = argmin(||Tf - d||² + λ₁||f||_TV + λ₂B) (12)
where ||f||_TV is the total variation of f, computed as
||f||_TV = Σ_{i=1}^{x-1} Σ_{j=1}^{y-1} √((f_{i+1,j} - f_{i,j})² + (f_{i,j+1} - f_{i,j})²) (13)
and B is a barrel function that penalizes pixel values of f outside the range [0, 10000].
the specific operation steps of the embodiment include:
step 1: acquiring a diffuse image generated by light rays from the NLOS scene reflected on the diffuse reflection surface by the camera; specifically, when the camera shoots the diffuse reflection image, in order to acquire more diffuse reflection information, the visual angle has a certain deflection angle towards one side of the NLOS, the shot image has certain depth-of-field information, the shot rectangular image is not a rectangular area in the corresponding actual space, a part of the image corresponding to the rectangular area of the actual scene needs to be extracted in the subsequent processing process, and the part is required to contain the diffuse reflection information as much as possible. In practical application, the diffuse reflection image shot by the camera may be one of a single-channel gray-scale image, an RGB three-channel color image and an RGBG bayer filter mode four-channel color image, and different types of images may have differences in algorithm processing depending on the type of the camera used in shooting;
step 2: extracting a part of the image corresponding to a rectangular area of an actual scene by using an inverse perspective transformation and a resampling technology; the reference points used by the inverse perspective transformation can be from four corners of a rectangle in an existing actual scene in the image, or vertices of four corners of the rectangle calibrated on the diffuse reflection surface in advance, and the inverse perspective transformation can adopt a relevant API in OpenCV, or can adopt an ROI region in the image to extract and then perform interpolation, so that the image is transformed into a rectangular image;
and step 3: determining the size and the spatial position of the rectangular image in the actual scene corresponding to the resampled image, and after determining the spatial position and the size of the rectangular image, calculating the number of pixels in the region, thereby determining the actual spatial coordinate of the corresponding position of the center of each pixel, wherein the coordinate refers to a relative coordinate, and a specific spatial coordinate system can be automatically established according to the calculation, but needs to ensure that the coordinate system is the same as the coordinate system used when the position and the size are determined in the subsequent NLOS scene. In addition, if the pixels of the shot image are too high, in order to avoid the problem that the calculation time is too long, and reduce the calculation amount of the algorithm under the condition of not losing most precision, the image needs to be down-sampled first to reduce the resolution of the image, and then the actual space coordinate of the corresponding position of the center of each pixel point is determined;
and 4, step 4: determining the size and the spatial position of a rectangular region where an NLOS scene to be imaged is located; the actual NLOS scene may be a scene composed of three-dimensional objects, but in the calculation process, it may be assumed that the NLOS scene is a pair of planar rectangular images, which may simplify the calculation process and may not affect the actual imaging effect. For the size and the spatial position of the rectangular region where the NLOS scene to be imaged is located, certain assumptions can be made in the calculation, and calibration can also be completed before calculation. It is assumed that the size and spatial position of the rectangular region in which the NLOS scene is located only affect the range of the finally imaged scene and do not affect the content of the imaged scene. When the specific position of the scene is not determined, the size of the rectangle can be assumed to be larger to contain the NLOS scene that needs to be reconstructed. As described above, the size and spatial position of the rectangular region in which the NLOS scene to be imaged is located and the size and spatial position of the rectangular region in the actual scene corresponding to the diffusion image are determined in the same coordinate system.
Step 5: establishing the light transport matrix from the sizes and spatial positions of the actual rectangular regions corresponding to the diffuse image and the NLOS scene. The dimensions of the light transport matrix depend on the total pixel counts of the rectangular image and of the NLOS scene to be restored: the number of rows equals the total pixel count of the diffuse reflection image, and the number of columns equals the total pixel count of the NLOS scene to be restored. Each element of the light transport matrix represents the conversion from one pixel of the NLOS scene image to one pixel of the diffuse reflection image, computed by the radiance propagation formula of the ray tracing algorithm.
Step 6: establishing and solving the optimization problem from the resampled diffuse image and the light transport matrix. The form of the optimization problem depends on the type of the captured diffuse reflection image. If the captured image is a single-channel gray-scale image, only one optimization problem needs to be established. If it is an RGB three-channel color image, the three channels are separated and an optimization problem is established for each. If it is an RGBG four-channel color image in Bayer filter mode, the four channels are separated, the two G channels are averaged to give three channels of data, and an optimization problem is established for each of the three. After solving, the three channels of a color image are recombined to obtain the solved color image.
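A sketch of this per-channel dispatch (solve_nlos is the solver sketched above; the four-channel unpacking assumes an R, G1, G2, B layout, which is an assumption of the example):

```python
import numpy as np

def reconstruct(diffuse, T, shape):
    """Recover the NLOS image from a gray, RGB, or 4-channel Bayer diffuse image.

    diffuse: rectified diffuse image, (m, n), (m, n, 3), or (m, n, 4).
    shape: (x, y) resolution of the NLOS image to restore.
    """
    if diffuse.ndim == 2:                          # single-channel gray image
        return solve_nlos(T, diffuse.astype(np.float64).ravel(), shape)
    if diffuse.shape[2] == 3:                      # RGB: one problem per channel
        chans = [solve_nlos(T, diffuse[:, :, k].astype(np.float64).ravel(), shape)
                 for k in range(3)]
        return np.stack(chans, axis=-1)            # recombine into a color image
    # four Bayer channels (assumed R, G1, G2, B): average the two G channels
    r, g1, g2, b = (diffuse[:, :, k].astype(np.float64) for k in range(4))
    chans = [solve_nlos(T, c.ravel(), shape) for c in (r, 0.5 * (g1 + g2), b)]
    return np.stack(chans, axis=-1)
```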
Claims (7)
1. A non-line-of-sight imaging method based on a ray tracing algorithm, characterized in that the method comprises the following steps:
step 1: acquiring, with a camera, the diffuse reflection image produced by an NLOS scene on a diffusely reflecting surface;
step 2: extracting from the diffuse reflection image the portion corresponding to a rectangular area in actual space, using inverse perspective transformation and resampling, and converting that portion into a rectangular image of specified resolution;
step 3: determining the size and spatial position of the rectangular image obtained in step 2, and computing the spatial coordinate of the center of each pixel from the number of pixels in the rectangular image;
step 4: determining the size and spatial position of the region where the NLOS scene to be imaged is located;
step 5: establishing a light transport matrix from the size and spatial position of the rectangular image obtained in step 3 and the size and spatial position of the NLOS scene region obtained in step 4; the dimensions of the light transport matrix depend on the total pixel count of the rectangular image and the total pixel count of the NLOS scene to be imaged: the number of rows is the total pixel count of the diffuse reflection image obtained in step 1, and the number of columns is the total pixel count of the NLOS scene to be imaged; each element of the light transport matrix represents the conversion from one pixel of the NLOS scene to be imaged to one pixel of the diffuse reflection image, computed by the radiance propagation formula of the ray tracing algorithm;
step 6: establishing an optimization problem from the rectangular image obtained in step 2 and the light transport matrix obtained in step 5, and solving it to obtain the NLOS scene image.
2. The non-line-of-sight imaging method based on a ray tracing algorithm according to claim 1, wherein: before step 3 is performed, the rectangular image may be down-sampled according to actual requirements.
3. The non-line-of-sight imaging method based on a ray tracing algorithm according to claim 1, wherein: the size and spatial position of the region where the NLOS scene to be imaged is located and the size and spatial position of the rectangular image are determined in the same coordinate system.
4. The non-line-of-sight imaging method based on a ray tracing algorithm according to claim 1, wherein the conversion relation is represented by a conversion matrix T, each element of which is

T(i, j) = (ρ/π) · cosθ_i cosθ' / ||p_i - p'_j||² · s/(x·y) (8)

where p_i is any pixel point on the diffuse reflection image, with i ∈ [1, m·n]; p'_j is any pixel point on the NLOS scene image to be imaged, with j ∈ [1, x·y]; ρ is the reflectivity of the diffusely reflecting surface; θ' is the angle between the incident ray and the normal of the NLOS scene surface; w_i is the direction vector of the incident ray; w_o is the direction vector of the reflected ray; L_o(p, w_o) is the brightness of light reflected along direction w_o at point p of the diffuse surface; θ_i is the angle between the incident ray direction vector and the normal vector of the diffuse material at p; and s is the area of the NLOS scene surface to be solved;
assuming that the diffuse reflection image is represented as an image matrix d and the NLOS scene to be imaged is represented as an image matrix f, the relationship is expressed as follows:
d=Tf (9)。
5. The non-line-of-sight imaging method based on a ray tracing algorithm according to claim 4, wherein step 6 specifically includes the following substeps:
determining whether the conversion matrix T is invertible; if it is, the image matrix f of the NLOS scene to be imaged is computed according to formula (9):
f = T⁻¹ d (10)
if T is not invertible, the following is carried out:
establishing a corresponding optimization problem according to the type of the diffuse reflection image, and solving to obtain an image matrix f corresponding to the NLOS scene to be imaged:
f = argmin(||Tf - d||² + λ₁||f||_TV + λ₂B) (12)
where ||f||_TV is the total variation of the image matrix f, computed as
||f||_TV = Σ_{i=1}^{x-1} Σ_{j=1}^{y-1} √((f_{i+1,j} - f_{i,j})² + (f_{i,j+1} - f_{i,j})²) (13)
B is a barrel function that penalizes pixel values of f outside the range [0, 10000];
λ₁ and λ₂ are regularization coefficients, f_{i,j} is the pixel value at row i, column j of the image matrix f, and x and y are the total numbers of rows and columns of f, respectively.
6. The non-line-of-sight imaging method based on a ray tracing algorithm according to claim 5, wherein: the diffuse reflection image is any one of a single-channel gray-scale image, an RGB three-channel color image, and an RGBG four-channel color image in Bayer filter mode.
7. The non-line-of-sight imaging method based on a ray tracing algorithm according to claim 6, wherein:
if the diffuse reflection image is a single-channel gray-scale image, one optimization problem is established and solved to obtain the matrix f of the NLOS scene image; if the diffuse reflection image is an RGB three-channel color image, the three channels are separated, an optimization problem is established and solved for each channel, and the three solved channels are combined to obtain the matrix f of the NLOS scene image; and if the diffuse reflection image is an RGBG four-channel color image in Bayer filter mode, the four channels are separated, the two G channels are averaged to give three channels of data, an optimization problem is established and solved for each of the three channels, and the solved channels are combined to obtain the matrix f of the NLOS scene image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010104486.9A CN111340929B (en) | 2020-02-20 | 2020-02-20 | Non-vision field imaging method based on ray tracing algorithm |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010104486.9A CN111340929B (en) | 2020-02-20 | 2020-02-20 | Non-vision field imaging method based on ray tracing algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111340929A true CN111340929A (en) | 2020-06-26 |
CN111340929B CN111340929B (en) | 2022-11-25 |
Family
ID=71187819
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010104486.9A Active CN111340929B (en) | 2020-02-20 | 2020-02-20 | Non-vision field imaging method based on ray tracing algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111340929B (en) |
- 2020-02-20: CN application CN202010104486.9A filed, granted as patent CN111340929B (status: active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103198511A (en) * | 2011-09-15 | 2013-07-10 | 佳能株式会社 | Image processing apparatus and image processing method |
JP2015114775A (en) * | 2013-12-10 | 2015-06-22 | キヤノン株式会社 | Image processor and image processing method |
US20190287294A1 (en) * | 2018-03-17 | 2019-09-19 | Nvidia Corporation | Reflection denoising in ray-tracing applications |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111784815A (en) * | 2020-07-03 | 2020-10-16 | 哈尔滨工业大学 | Passive non-vision field penumbra imaging method based on transmission window |
CN111784815B (en) * | 2020-07-03 | 2024-08-09 | 哈尔滨工业大学 | Transmission window-based passive non-vision penumbra imaging method |
CN113204010B (en) * | 2021-03-15 | 2021-11-02 | 锋睿领创(珠海)科技有限公司 | Non-visual field object detection method, device and storage medium |
CN113204010A (en) * | 2021-03-15 | 2021-08-03 | 锋睿领创(珠海)科技有限公司 | Non-visual field object detection method, device and storage medium |
CN113109787A (en) * | 2021-04-15 | 2021-07-13 | 东南大学 | Non-vision field imaging device and method based on thermal imaging camera |
CN113109948A (en) * | 2021-04-15 | 2021-07-13 | 东南大学 | Polarization non-visual field imaging method based on diffuse reflection surface |
CN113093389A (en) * | 2021-04-15 | 2021-07-09 | 东南大学 | Holographic waveguide display device based on non-visual field imaging and method thereof |
CN113109787B (en) * | 2021-04-15 | 2024-01-16 | 东南大学 | Non-visual field imaging device and method based on thermal imaging camera |
CN113052833A (en) * | 2021-04-20 | 2021-06-29 | 东南大学 | Non-vision field imaging method based on infrared thermal radiation |
CN113138027A (en) * | 2021-05-07 | 2021-07-20 | 东南大学 | Far infrared non-vision object positioning method based on bidirectional refractive index distribution function |
CN113411508A (en) * | 2021-05-31 | 2021-09-17 | 东南大学 | Non-vision field imaging method based on camera brightness measurement |
CN113344774A (en) * | 2021-06-16 | 2021-09-03 | 东南大学 | Non-visual field imaging method based on depth convolution inverse graph network |
WO2023279249A1 (en) * | 2021-07-05 | 2023-01-12 | Shanghaitech University | Non-line-of-sight imaging via neural transient field |
Also Published As
Publication number | Publication date |
---|---|
CN111340929B (en) | 2022-11-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111340929B (en) | Non-vision field imaging method based on ray tracing algorithm | |
KR102483838B1 (en) | Multi-Baseline Camera Array System Architecture for Depth Augmentation in VR/AR Applications | |
Baradad et al. | Inferring light fields from shadows | |
Zhang et al. | 3D single-pixel video | |
US7792367B2 (en) | System, method and apparatus for image processing and image format | |
US9817159B2 (en) | Structured light pattern generation | |
CN105790836B (en) | Using the presumption of the surface properties of plenoptic camera | |
US10996752B1 (en) | Infrared transparent backlight device for eye tracking applications | |
Kronander et al. | A unified framework for multi-sensor HDR video reconstruction | |
US20040070565A1 (en) | Method and apparatus for displaying images | |
US20100080453A1 (en) | System for recovery of degraded images | |
US20100103175A1 (en) | Method for generating a high-resolution virtual-focal-plane image | |
WO2018235163A1 (en) | Calibration device, calibration chart, chart pattern generation device, and calibration method | |
JP6786225B2 (en) | Image processing equipment, imaging equipment and image processing programs | |
US10013761B2 (en) | Automatic orientation estimation of camera system relative to vehicle | |
KR20170017586A (en) | Method for assuming parameter of 3d display device and 3d display device thereof | |
US20200035022A1 (en) | System for acquiring correspondence between light rays of transparent object | |
EP3144894B1 (en) | Method and system for calibrating an image acquisition device and corresponding computer program product | |
US11601607B2 (en) | Infrared and non-infrared channel blender for depth mapping using structured light | |
Liu et al. | Optical distortion correction considering radial and tangential distortion rates defined by optical design | |
US10712572B1 (en) | Angle sensitive pixel array including a liquid crystal layer | |
EP3009887A1 (en) | Optical imaging processing system | |
US20170256208A1 (en) | Media item relighting technique | |
EP3944182A1 (en) | Image restoration method and device | |
US11004222B1 (en) | High speed computational tracking sensor |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||