CN117437347A - Target reconstruction method based on differentiable SAR image renderer - Google Patents

Target reconstruction method based on differentiable SAR image renderer Download PDF

Info

Publication number
CN117437347A
CN117437347A CN202210820257.6A CN202210820257A CN117437347A CN 117437347 A CN117437347 A CN 117437347A CN 202210820257 A CN202210820257 A CN 202210820257A CN 117437347 A CN117437347 A CN 117437347A
Authority
CN
China
Prior art keywords
target
map
formula
shadow
renderer
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210820257.6A
Other languages
Chinese (zh)
Inventor
徐丰
符士磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University filed Critical Fudan University
Priority to CN202210820257.6A priority Critical patent/CN117437347A/en
Publication of CN117437347A publication Critical patent/CN117437347A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention provides a target reconstruction method based on a differential SAR image renderer, which is used for reconstructing a three-dimensional target through the differential renderer according to an SAR image and comprises the following steps of: step S1, defining a three-dimensional rendering scene; step S2, reconstructing the target, wherein in the step S2, a contour map is adopted for reconstructing a non-ground target, and an irradiation map and a shadow are adopted for reconstructing a ground target, wherein the reconstruction process in the step S2 comprises the following steps: s2-1, extracting a contour map or an illumination map and shadows from the SAR image as true values according to the target type, and sending the true values to a differential renderer as a rendering target; and S2-2, initializing and inputting the differentiable renderer into a spherical grid, transmitting errors between the rendered image and the true value of the differentiable renderer to the inputted scene parameters along the reverse direction of the forward rendering pipeline of the differentiable renderer, modifying the values of the scene parameters for a plurality of times through a gradient descent algorithm, and reconstructing to obtain the three-dimensional scene conforming to the target two-dimensional image.

Description

Target reconstruction method based on differentiable SAR image renderer
Technical Field
The invention belongs to the technical field of radar image processing, and particularly relates to a target reconstruction method based on a differentiable SAR image renderer.
Background
Synthetic aperture radar (SyntheticAperture Radar, SAR) has become an important tool for global remote sensing, which enables high resolution imaging during all day and all weather conditions. The conventional SAR can only acquire two-dimensional SAR images, and the side-view imaging mode causes perspective shrinkage (foreshortening) and overlap masking (layover) effects. In order to overcome the problems, students propose three-dimensional (Three Dimensionality, 3D) SAR systems and imaging methods, hope to directly acquire a 3D electromagnetic scattering structure of a target, eliminate shrinkage, overlay and other effects of SAR images caused by an imaging mechanism, and have important significance for target interpretation, remote sensing mapping, digital cities and the like.
Traditional SAR three-dimensional imaging takes a point cloud as a primary three-dimensional representation. The earliest three-dimensional imaging was represented by Interferometric SAR (InSAR) and stereo SAR (StereoSAR), which resolve the three-dimensional position of individual scatterers from registered multi-view pixels. For complex scenes such as urban building areas, there are cases where echoes of multiple scatterers of different heights are mapped to the same resolution unit, i.e. the overlay effect. These two techniques cannot handle the overlay and therefore do not have three-dimensional resolution. With the first proposal of the three-dimensional SAR concept by knaall, related research work has been carried out in various countries, and two three-dimensional imaging technologies, namely, chromatographic SAR (TomoSAR) and array interference SAR (Array InSAR), are mainly formed. By multiple observations, both methods synthesize apertures at elevation, achieving resolution at high Cheng Xiangshang.
Since 2012 CNN was successfully applied to image classification tasks for the first time, deep learning has achieved tremendous success in the field of computer vision. As data-driven approaches have met with tremendous success, more and more research has used deep neural networks to reconstruct 3D geometry from 2D images. To train the network, there are two ways of 3D supervision, the former of which needs to provide 3D shape truth values, and 2D supervision, the latter of which only needs to provide 2D images, which is more desirable. In the field of SAR 3D reconstruction, scholars have also explored deep learning methods.
SAR image sample acquisition is difficult, and multiple strong coherent complex SAR data acquisition is more difficult. In order to ensure the coherence between the SAR images observed by the tomoSAR heavy rail, the track needs to be precisely controlled, and the implementation difficulty is high, so that the SAR three-dimensional imaging technology represented by the tomoSAR is not suitable for application scenes with high timeliness requirements. Unlike optical images, due to the scattering properties and imaging mechanisms of the unique microwave frequency bands of SAR, the target morphology in SAR images varies greatly, which results in that three-dimensional reconstructed neural network models pre-trained based on optical images may not be directly used for processing SAR images. In order to solve the above problems, a target reconstruction algorithm that fits the SAR imaging geometry, utilizes SAR intensity maps, and does not require model truth supervision training is needed.
Disclosure of Invention
The present invention has been made to solve the above-mentioned problems, and an object of the present invention is to provide a target reconstruction method based on a differentiable SAR image renderer.
The invention provides a target reconstruction method based on a differential SAR image renderer, which is used for reconstructing a three-dimensional target through the differential renderer according to an SAR image and has the characteristics that: step S1, defining a three-dimensional rendering scene of a differential renderer; step S2, reconstructing the target, wherein in step S2, a contour map is adopted for reconstructing a non-ground target, an illumination map and a shadow are adopted for reconstructing a ground target, and the steps of reconstructing the ground target by adopting the illumination map and the shadow include: an illumination pattern-based target reconstruction method, a shadow-based target reconstruction method, and an illumination pattern and shadow-based target reconstruction method, wherein the reconstruction process in step S2 includes: s2-1, extracting a contour map or an illumination map and shadows from SAR images as true values according to different types of targets, and sending the true values to a differentiable renderer as a rendering target; and S2-2, initializing and inputting the differentiable renderer into a spherical grid, transmitting errors between the rendered image and the true value of the differentiable renderer to the inputted scene parameters along the reverse direction of the forward rendering pipeline of the differentiable renderer, modifying the values of the scene parameters for a plurality of times through a gradient descent algorithm, and reconstructing to obtain the three-dimensional scene conforming to the target two-dimensional image.
The target reconstruction method based on the differentiable SAR image renderer provided by the invention can also have the following characteristics: wherein step S1 comprises the sub-steps of: step A1, establishing a world coordinate system O-XYZ, and placing a target to be imaged at an origin O; step A2, setting a nominal position O ' of the radar, wherein the movement direction of the radar is O ' X ', and the irradiation direction is O ' Z ', and determining through O ' X ' and O ' Z 'The direction of the third axis O ' Y ', a radar coordinate system O ' -X ' Y ' Z ' with the radar as the center is established, the plane O ' Z ' X ' is defined as a mapping plane according to a projection mapping algorithm, and the (k, l) th mapping unit is expressed as m (k,l) The plane O ' X ' Y ' is defined as the projection plane, and the (i, l) th element is denoted as p (i,l) The method comprises the steps of carrying out a first treatment on the surface of the Step A3, selecting a grid as a representation mode of an imaging target, wherein the representation mode comprises a vertex setAnd a triangle face element set ++>I.e. there is N v Vertex sum N f Individual face elements->Representing the spatial coordinates of the ith vertex, +.>Index representing three vertices belonging to the jth triangular bin, each triangular bin f j With a texture attribute S j And S is combined with j Set as a scalar; step A4, vertex set of mesh target +.>Transformation to vertex { v } under radar coordinate system i Then bin f j Vertex coordinates M of (2) j The definition formula of (2) is as follows:
in the formula (1), (x) j,n ,y j,n ,z j,n ) Is f j X, y, z coordinates of the nth vertex of (c).
The target reconstruction method based on the differentiable SAR image renderer provided by the invention can also have the following characteristics: the reconstructing with the contour map in step S2 includes the following sub-steps:
step B1, transforming the inclined distance under the radar coordinate system to obtain a bin f j Comprising a mapping unit m (k,l) The likelihood of (1) is defined as probabilityEuclidean distance d (m) (k,l) ,f j ) The calculation formula is as follows:
in the formula (2),for one sign indicating bit, representing the mapping unit m (k,l) At f j Whether the interior or exterior of (a) is,sigma is a scalar that controls the sharpness of the probability distribution, when sigma-0, the probability map converges to the exact shape of the bin boundary;
step B2, the profile is the two-dimensional projection of the target on the mapping plane, and the value of each resolution unit in the profile is takenThe definition formula of (2) is as follows:
in the formula (3), N f Representing the number of bins in the mesh target;
step B3, considerFor connecting profiles->And the coordinates M j Intermediate variable of>For M j The derivative of (c) is defined as follows:
step B4, defining a mixed loss function for supervising the geometric reconstruction process, wherein the mixed loss function not only measures the error between the rendered image and the true value, but also constrains the smoothness of the surface element, and the formula is as follows:
L=L sil1 L lap2 L flat (5)
in the formula (5), lambda 1 =0.03,λ 2 =0.0003, the weight decreases with decreasing importance, L sil Contour map I for rendering sil Sum profile truth valueThe negative cross-over ratio between the two represents the difference between the rendering profile and the truth profile, and the formula is as follows:
in the formula (6), the term "+. sil Andthe higher the overlap of L sil The smaller the value of (c) is,
in the formula (5), L lap For estimating the distance between adjacent nodes, L is the sum of squares of coordinates in the Laplace transform domain lap The smaller the value, the more compact in the mesh bin space, L lap The definition formula of (2) is as follows:
in the formula (7) of the present invention,is the coordinate of the grid vertex set V after Laplacian transformation, < >>Is->Coordinates of an nth axis of the ith vertex of (a);
in the formula (5), L flat For the sum of squares of cosine values of included angles of adjacent surface elements, the definition formula is as follows:
in equation (8), E is the set of all edges in the deformed mesh, θ i Is the included angle between two triangular surface elements sharing the ith edge;
step B5, after each iteration, obtaining that the correction of the target according to the current rendered image is gradientOpposite surface element f j For its coordinate matrix M according to the transferred gradient j The formula is as follows:
in the formula (9), the left side of the formula is the coordinates after adjustment, the right side is the coordinates before adjustment, and μ represents the learning rate when the gradient is decreasing.
The target reconstruction method based on the differentiable SAR image renderer provided by the invention can also have the following characteristics: the target reconstruction method based on the irradiation map in the step S2 comprises the following substeps:
step C1, each resolution unit m in the rendered image rendered by the differential renderer (k,l) Cumulative scatter value atThe definition formula of (2) is as follows:
step C2, the illumination pattern is defined as the radar visible region, i.eCorresponding region, pair mapping unit m (k,l) Bin f j Contribution to it->Is defined as:
the definitions of the two components in equation (11) are as follows:
in the formula (12) of the present invention,representing the (i, l) th projection unit p (i,l) Assigned to f j Energy of->Representation->For nearby m (k,l) Is proportional to p (i,l) Reflection point and m on mapping plane (k,l) The distance between the two is related to each other,
for m (k,l) At all N f In the individual bins, there is a probability that at least one bin is illuminatedThe formula is as follows:
step C3, the bias of the illumination map to the element grid is defined as:
step C4, defining a hybrid loss function for supervising the geometrical reconstruction process, the hybrid loss function measuring the error between the rendered image of the illumination map and the true value, the formula being as follows:
L=L ill1 L lap2 L flat (16)
in the formula (16), L ill Is a rendered illumination map I ill And illumination pattern truth valueThe negative cross-over ratio is used for measuring the difference between the rendered image and the true value, and the definition formula is as follows:
step C5, rendering the imageAmplitude and intensity->The probability is positive correlation, an illumination map is made according to a target image of the SAR image and is used as a true value, the made illumination map is used as a rendering target of a differentiable renderer, and the geometric coordinate M of each bin is adjusted in multiple iterations j And obtaining the three-dimensional target after completing convergence.
The target reconstruction method based on the differentiable SAR image renderer provided by the invention can also have the following characteristics: the shadow-based target reconstruction method in step S2 includes the following sub-steps:
step D1, the surface element f j After projection and skew transformation to the mapping plane, it is defined with the mapping unit m (k,l) The probability of intersection in the mapping plane isThe ground clearance profile is defined as:
the upper boundary of the shadow is also the illumination area I ill Is a boundary dividing a bright area and a dark area, and the shadow is defined as the furthest boundary projected on the ground depending on the target along the radar beam irradiation direction:
I sha =I gsil -I gsil ⊙I ill (19)
step D2, for each cell of the shadow region, the gradient calculation for each bin is independent of each other,opposite surface element f j The derivatives of the coordinate matrix of (a) are:
step D3, defining a mixing loss function for supervising the geometrical reconstruction process, wherein L sha Is a rendered shadow map I sha True value of sum shadow mapNegative cross-over ratio between the two;
and D4, segmenting the shadow of the corresponding target from the multi-view SAR image, taking the segmentation result as a true value, taking the segmentation result as a rendering target of the differential renderer, inverting a grid model corresponding to the shadow, completing three-dimensional reconstruction, and quantitatively evaluating the reconstruction result after the reconstruction is completed.
The target reconstruction method based on the differentiable SAR image renderer provided by the invention can also have the following characteristics: the target reconstruction method based on the irradiation pattern and the shadow in the step S2 comprises the following substeps:
step E1, consider rendering illumination pattern I ill And rendering shadow map I shd True value of irradiation patternAnd shadow map true value->Mutually exclusive, defining a mixed loss function L for fusing the illumination pattern and the shadow comb For replacing L ill Or L sha The formula is as follows:
L comb opposite surface element f j Coordinate matrix M of (2) j The bias leads of (2) are:
e2, dividing the SAR image with multiple views to obtain shadows of the corresponding targets, linearly scaling the SAR image to obtain an illumination map of the targets, taking the shadows and the illumination map which are manufactured by the SAR image as true values, and taking the true values as rendering targets of the differential renderer to invert out the corresponding grid models to finish three-dimensional reconstruction.
Effects and effects of the invention
According to the target reconstruction method based on the differentiable SAR image renderer, the expressions of the outline map, the illumination map and the shadow are provided and realized in the differentiable renderer. In the differential renderer, a continuous function between the two-dimensional image and the three-dimensional scene element is established through probability approximation of a forward process, errors between the rendered image and a true value are transferred to input scene parameters along the reverse direction of a forward rendering pipeline, and the three-dimensional scene conforming to the two-dimensional image can be reconstructed through modifying the values of the parameters for a plurality of times through a gradient descent algorithm. The target reconstruction method based on the differentiable SAR image renderer can be suitable for SAR images of any platform, has high robustness and good compatibility, and has popularization and application prospects.
Drawings
FIG. 1 is a representation of a relevant coordinate system definition of a three-dimensional rendered scene in an embodiment of the invention;
FIG. 2 is a schematic diagram of a process for reconstructing a target in an embodiment of the invention;
FIG. 3 is a computational framework diagram of geometric reconstruction based on a contour map in an embodiment of the present invention;
FIG. 4 is a T72 mesh model reconstructed using a contour map in an embodiment of the invention;
FIG. 5 is a schematic diagram of a comparison of an illumination pattern and a profile pattern in an embodiment of the invention;
FIG. 6 is a comparison of the reconstruction results of a profile map and an illumination map in an embodiment of the present invention;
FIG. 7 is a schematic diagram of a target shadow in an embodiment of the invention;
FIG. 8 is a rendering framework diagram of shadows in an embodiment of the present invention;
FIG. 9 is a comparison of shadow simulation values to true values in an embodiment of the invention;
FIG. 10 is a model of a target reconstructed using shadows in an embodiment of the present invention;
FIG. 11 is a model of a target reconstructed using shadows and an illumination map in an embodiment of the present invention;
FIG. 12 is a comparison of SAR images rendered using the reconstructed T72 model with true values in an embodiment of the present invention.
Detailed Description
In order to make the technical means and effects of the present invention easy to understand, the present invention will be specifically described with reference to the following examples and the accompanying drawings.
< example >
The target reconstruction method based on the differential SAR image rendering of the embodiment is used for reconstructing a three-dimensional target through a differential renderer according to the SAR image, and comprises the following steps of:
step S1, defining a three-dimensional rendering scene of the differential renderer.
FIG. 1 is a representation of a three-dimensional rendering of a scene in an embodiment of the invention.
In fig. 1, including the grid target to be imaged, the position and movement direction of the radar, the imaging area, etc., as shown in fig. 1, step S1 includes the following sub-steps:
step A1, establishing a world coordinate system O-XYZ, and placing a target to be imaged at an origin O;
step A2, setting a radar nominal position O ', setting the radar moving direction as O' X ', setting the radar irradiation direction as O' Z ', determining the direction of a third axis O' Y 'through O' X 'and O' Z ', establishing a radar coordinate system O' -X 'Y' Z 'centering on the radar, defining a plane O' Z 'X' as a mapping plane according to a projection mapping algorithm, and expressing a (k, l) th mapping unit as m (k,l) The plane O ' X ' Y ' is defined as the projection plane, and the (i, l) th element is denoted as p (i,l)
Step A3, selecting a grid as a representation mode of an imaging target, wherein the representation mode comprises a vertex setAnd a triangle face element set ++>I.e. there is N v Vertex sum N f Individual face elements->Representing the spatial coordinates of the ith vertex, +.>Index representing three vertices belonging to the jth triangular bin, each triangular bin f j With a texture attribute S j And S is combined with j Set as a scalar;
step A4, collecting the vertexes of the grid targetsTransformation to vertex { v } under radar coordinate system i Then bin f j Vertex coordinates M of (2) j The definition formula of (2) is as follows:
in the formula (1), (x) j,n ,y j,n ,z j,n ) Is f j X, y, z coordinates of the nth vertex of (c).
Step S2, reconstructing the object,
in step S2, reconstructing the non-ground target by using a contour map, reconstructing the ground target by using an illumination map and a shadow, and reconstructing the ground target by using the illumination map and the shadow includes: an illumination pattern-based target reconstruction method, a shadow-based target reconstruction method, and an illumination pattern and shadow-based target reconstruction method.
FIG. 2 is a schematic diagram of a process for reconstructing a target in an embodiment of the invention.
As shown in fig. 2, the reconstruction process in step S2 includes:
s2-1, extracting a contour map or an illumination map and shadows from SAR images as true values according to different types of targets, and sending the true values to a differentiable renderer as a rendering target;
and S2-2, initializing and inputting the differentiable renderer into a spherical grid, transmitting errors between the rendered image and the true value of the differentiable renderer to the inputted scene parameters along the reverse direction of the forward rendering pipeline of the differentiable renderer, modifying the values of the scene parameters for a plurality of times through a gradient descent algorithm, and reconstructing to obtain the three-dimensional scene conforming to the target two-dimensional image.
FIG. 3 is a computational framework diagram of geometric reconstruction based on a contour map in an embodiment of the present invention.
As shown in fig. 3, when the contour map is adopted for geometric reconstruction, the contour image obtained by rendering is compared with the contour truth image, the difference between the contour image and the contour truth image is reversely transferred to the input by using a Back Propagation (BP) algorithm, and the unknown input geometric shape is inferred by correcting the input scene parameters, which comprises the following steps:
the reconstruction in step S2 using the contour map comprises the following sub-steps:
step B1, transforming the inclined distance under the radar coordinate system to obtain a bin f j Comprising a mapping unit m (k,l) The likelihood of (1) is defined as probabilityEuclidean distance d (m) (k,l) ,f j ) The calculation formula is as follows:
in the formula (2),for one sign indicating bit, representing the mapping unit m (k,l) At f j Whether the interior or exterior of (a) is,sigma is a scalar that controls the sharpness of the probability distribution, when sigma-0, the probability map converges to the exact shape of the bin boundary;
step B2, the profile map is a two-dimensional projection of the target on the mapping plane, and is irrelevant to the texture of the surface element and the depth of the surface element from the imaging plane. Value of each resolution unit in contour mapThe definition formula of (2) is as follows:
in the formula (3), N f Representing the number of bins in the mesh target;
step B3, considerFor connecting profiles->And the coordinates M j Intermediate variable of>For M j The derivative of (c) is defined as follows:
step B4, defining a mixed loss function for supervising the geometric reconstruction process, wherein the mixed loss function not only measures the error between the rendered image and the true value, but also constrains the smoothness of the surface element, and the formula is as follows:
L=L sil1 L lap2 L flat (5)
in the formula (5), lambda 1 =0.03,λ 2 =0.0003, the weight decreases with decreasing importance, its choice also being related to the absolute size of the respective loss function, due to L lap And L flat The value is too large, so that the contribution ratio of the values to L is reduced. L (L) sil Contour map I for rendering sil Sum profile truth valueThe negative cross-over ratio between the two represents the difference between the rendering profile and the truth profile, and the formula is as follows:
in the formula (6), the term "+. sil Andthe higher the overlap of L sil The smaller the value of (c) is,
in the formula (5), L lap For estimating the distance between adjacent nodes, L is the sum of squares of coordinates in the Laplace transform domain lap The smaller the value, the more compact in the mesh bin space, L lap The definition formula of (2) is as follows:
in the formula (7) of the present invention,is the coordinate of the grid vertex set V after Laplacian transformation, < >>Is->Coordinates of an nth axis of the ith vertex of (a);
in the formula (5), L flat For the sum of squares of cosine values of included angles of adjacent surface elements, the definition formula is as follows:
in equation (8), E is the set of all edges in the deformed mesh, θ i Is the angle between two triangular elements sharing the ith edge. Reducing L flat The aim of (a) is to make as many of the surface elements as possible coplanar so that the deformed grid of the output becomes smoother.
Step B5, after each iteration, obtaining that the correction of the target according to the current rendered image is gradientOpposite surface element f j For its coordinate matrix M according to the transferred gradient j The formula is as follows:
in the formula (9), the left side of the formula is the coordinates after adjustment, the right side is the coordinates before adjustment, and μ represents the learning rate when the gradient is decreasing.
In the embodiment, the influence of different batch sizes and different loss functions on the reconstruction result is considered, a T72 tank is used as a reconstruction target, and a group of comparison experiments are set. For each object, a contour from 32 view angles is used for 3D reconstruction.
FIG. 4 is a T72 mesh model reconstructed using a contour map in an embodiment of the invention.
In FIG. 4, (a) is the T72 true value, (b), (c) and (d) are the reconstruction results when the batch sizes are 1, 4 and 8, respectively, (e), (f) and (g) are bs fixed to 8, respectively, and the mixed function is deletedL in the number flat 、L lap Deleting L simultaneously flat And L lap Is a result of the reconstruction of (a).
As shown in fig. 4, the geometry of 3D vehicles reconstructed using bs of different sizes is not greatly different. Fix batch size to bs=8 and delete L in the mixed loss function separately flat And L lap When partial reconstruction is carried out, finding L according to the reconstruction result flat The components have a greater positive impact on the surface smoothness of the target grid.
The target reconstruction method based on the irradiation map in step S2 includes the following sub-steps:
step C1, each resolution unit m in the rendered image rendered by the differential renderer (k,l) Cumulative scatter value atThe definition formula of (2) is as follows:
step C2, the illumination pattern is defined as the radar visible region, i.eCorresponding region, pair mapping unit m (k,l) Bin f j Contribution to it->Is defined as:
the definitions of the two components in equation (11) are as follows:
in the formula (12) of the present invention,representing the (i, l) th projection unit p (i,l) Assigned to f j Energy of->Representation ρ j (i,l) For nearby m (k,l) Is proportional to p (i,l) Reflection point and m on mapping plane (k,l) The distance between the two is related to each other,
for m (k,l) At all N f In the individual bins, there is a probability that at least one bin is illuminatedThe formula is as follows:
step C3, the bias of the illumination map to the element grid is defined as:
step C4, defining a hybrid loss function for supervising the geometrical reconstruction process, the hybrid loss function measuring the error between the rendered image of the illumination map and the true value, the formula being as follows:
L=L ill1 L lap2 L flat (16)
in the formula (16), L ill Is a rendered illumination map I ill And illumination pattern truth valueThe negative cross-over ratio is used for measuring the difference between the rendered image and the true value, and the definition formula is as follows:
step C5, rendering the imageAmplitude and intensity->The probability is positive correlation, an illumination map is made according to a target image of the SAR image and is used as a true value, the made illumination map is used as a rendering target of a differentiable renderer, and the geometric coordinate M of each bin is adjusted in multiple iterations j And obtaining the three-dimensional target after completing convergence.
Fig. 5 is a schematic diagram of a comparison of an illumination pattern and a profile pattern in an embodiment of the present invention.
As shown in fig. 5, profile I sil The process of generation is equivalent to compressing the target in the slope distance O ' Z ' in the O ' Y ' Z ' plane, and the target surface element facing away from the radar is mapped into a contour map, which also results in the target surface element according to I sil The contour map generated by the formula covers part of shadow areas, and the target image in the SAR image is restated to be an illumination map I ill
FIG. 6 is a comparison of the reconstruction of a profile map and an illumination map in an embodiment of the present invention.
In fig. 6, (a), (b), and (c) are true values of the corresponding target, reconstruction results using a contour map, and reconstruction results using an irradiation map, respectively.
As shown in FIG. 6, by comparing the reconstruction results, when the height of the target is higher, the reconstruction results are not as good as those of the illumination map because the labeled contour map has a larger true value difference from the contour map; for the object which is not high, the model of the contour map reconstruction is better than that of the illumination map reconstruction, the detail is better, and the surface is smoother.
The shadow-based target reconstruction method in step S2 comprises the sub-steps of:
step D1, the surface element f j After projection and skew transformation to the mapping plane, it is defined with the mapping unit m (k,l) The probability of intersection in the mapping plane isThe ground clearance profile is defined as:
FIG. 7 is a schematic diagram of a target shadow in an embodiment of the invention.
As shown in FIG. 7, the upper boundary of the shadow is also the illumination area I ill Is a boundary dividing a bright area and a dark area, and the shadow is defined as the furthest boundary projected on the ground depending on the target along the radar beam irradiation direction:
I sha =I gsil -I gsil ⊙I ill (19)
step D2, for each cell of the shadow region, the gradient calculation for each bin is independent of each other,opposite surface element f j The derivatives of the coordinate matrix of (a) are:
step D3, defining a mixing loss function for supervising the geometrical reconstruction process, wherein L sha Is a rendered shadow map I sha True value of sum shadow mapNegative cross-over ratio between the two;
and D4, segmenting the shadow of the corresponding target from the multi-view SAR image, taking the segmentation result as a true value, taking the segmentation result as a rendering target of the differential renderer, inverting a grid model corresponding to the shadow, completing three-dimensional reconstruction, and quantitatively evaluating the reconstruction result after the reconstruction is completed.
FIG. 8 is a rendering framework diagram of shadows in an embodiment of the present invention.
As shown in fig. 8, the rendering of shadows includes two lines: (1) Rendering I according to the expression of the irradiation map ill The method comprises the steps of carrying out a first treatment on the surface of the (2) The object obtains a ground distance profile map I through ground distance and inclined distance projection gsil . Based on I ill And I gsil Shadow I can be determined sha Upper and lower bounds of (2).
FIG. 9 is a comparison of shadow simulation values to true values in an embodiment of the invention.
As shown in fig. 9, as shown in fig. 9 (a), first, SAR images of an aircraft model at a plurality of angles are obtained, and the aircraft casts a shadow on the ground in a direction facing away from radar irradiation due to the shielding of the aircraft itself. By image processing, a portion having a low scattering intensity is segmented from the ground, as shown in fig. 9 (b). The shadow simulated by equation (19) according to the present embodiment is smoother than the shadow true value obtained by segmentation, as shown in fig. 9 (c). And shadow blocks in shadow truth values are not coherent, but high probability region shadow simulation values are consistent with truth values.
FIG. 10 is a model of a target reconstructed using shadows in an embodiment of the present invention.
In fig. 10, the left image is the target truth model, and the right image is the target model reconstructed with shadows.
As shown in fig. 10, six targets such as cubes are taken as examples, and the SAR image thereof is simulated in the case of determining the ground. And dividing the SAR image from multiple views to obtain shadows of the corresponding targets, and inverting grid models corresponding to the shadows by taking the division result as a true value to finish three-dimensional reconstruction.
The target reconstruction method based on the irradiation map and the shadow in the step S2 comprises the following substeps:
step E1, consider rendering illumination pattern I ill And rendering shadow map I shd Photo takingTrue value of jet mapAnd shadow map true value->Mutually exclusive, defining a mixed loss function L for fusing the illumination pattern and the shadow comb For replacing L ill Or L sha The formula is as follows:
L comb opposite surface element f j Coordinate matrix M of (2) j The bias leads of (2) are:
e2, dividing the SAR image with multiple views to obtain shadows of the corresponding targets, linearly scaling the SAR image to obtain an illumination map of the targets, taking the shadows and the illumination map which are manufactured by the SAR image as true values, and taking the true values as rendering targets of the differential renderer to invert out the corresponding grid models to finish three-dimensional reconstruction.
FIG. 11 is a model of a target reconstructed using shadows and an illumination map in an embodiment of the present invention.
In fig. 11, the left image is a target truth model, and the right image is a target model reconstructed using shadows.
As shown in fig. 11, a target image and a shadow are respectively obtained by segmentation from the multi-view SAR image, and then sent into a differentiable renderer for inversion to obtain a grid model corresponding to the target.
In this embodiment, a target reconstruction is performed on a T72 tank by a target reconstruction method based on an illumination map and a shadow, and a reconstructed T72 model is used to generate a SAR image and compare with a true value of the SAR image, which specifically includes the following steps:
step 1, selecting samples under 8 viewing angles with evenly distributed azimuth angles from a true value of MSTART72, and extracting a T72 target image and shadows thereof from a ground background;
step 2, taking the manufactured irradiation map and shadow map as true values and sending the true values and the shadow map into a differentiable renderer, wherein the initialization input of the renderer is a spherical grid;
step 3, in each iteration process, rendering by the current model to obtain an illumination map and a shadow map, and comparing with a true value to obtain a formula (21) L comb . According to the formula (22), calculating to obtain the gradient of the error on the coordinates transformed by the current input grid, transmitting the gradient to the grid coordinates through a backward propagation algorithm, and adjusting;
step 4, repeating the step 3, iterating 200 epochs, finding that the error between the rendered illumination map and shadow and the true value is small, and inputting the mesh of the renderer to reconstruct the obtained target;
step 5: and selecting three specific view angles by using the T72 target obtained by reconstruction, inputting the three specific view angles into a renderer, and generating SAR images under the corresponding view angles.
FIG. 12 is a comparison of SAR images rendered using the reconstructed T72 model with true values in an embodiment of the present invention.
As shown in fig. 12, in order to better compare the generated SAR image with the true value, a ground background corresponding to the true value of the SAR image is attached to the generated target SAR image, and parameters of the speckle distribution are extracted from shadows of the SAR real image, so that a similar speckle base noise is generated, and is superimposed on the whole rendering chart. According to comparison of the generated target SAR image and the SAR image truth value, the SAR image rendered by the T72 model obtained through reconstruction is basically consistent with the SAR image truth value, and the fact that the three-dimensional target is accurately reconstructed is indicated.
In this embodiment, the SAR image size is 128×128 pixels, and the software and hardware configuration of the experiment is GeForce RTX 2080Ti, pytorch. The time required for the target reconstruction is related to the number of bins initially entered, the number of active pixels in the image, the number of images used for inversion.
Effects and effects of the examples
According to a target reconstruction method based on a differentiable SAR image renderer, expressions of a contour map, an illumination map, and a shadow are proposed and implemented in the differentiable renderer. In the differential renderer, a continuous function between the two-dimensional image and the three-dimensional scene element is established through probability approximation of a forward process, errors between the rendered image and a true value are transferred to input scene parameters along the reverse direction of a forward rendering pipeline, and the three-dimensional scene conforming to the two-dimensional image can be reconstructed through modifying the values of the parameters for a plurality of times through a gradient descent algorithm. The target reconstruction method based on the differentiable SAR image renderer can be suitable for SAR images of any platform, has high robustness and good compatibility, and has popularization and application prospects.
The above embodiments are preferred examples of the present invention, and are not intended to limit the scope of the present invention.

Claims (6)

1. The target reconstruction method based on the differential SAR image rendering is used for reconstructing a three-dimensional target through a differential renderer according to the SAR image and is characterized by comprising the following steps of:
step S1, defining a three-dimensional rendering scene of the differentiable renderer;
step S2, reconstructing the object,
in step S2, reconstructing the non-ground target by using a contour map, reconstructing the ground target by using an illumination map and a shadow, and reconstructing the ground target by using the illumination map and the shadow includes: an illumination pattern-based target reconstruction method, a shadow-based target reconstruction method, and an illumination pattern and shadow-based target reconstruction method,
the reconstruction process in step S2 includes:
s2-1, extracting the outline map or the illumination map and the shadow from the SAR image as true values according to different types of targets, and sending the true values to the differentiable renderer as a rendering target;
s2-2, initializing input of the differentiable renderer is a spherical grid, errors between the rendered image of the differentiable renderer and the true value are transmitted to the input scene parameters along the reverse direction of the forward rendering pipeline of the differentiable renderer, the numerical values of the scene parameters are modified for multiple times through a gradient descent algorithm, and a three-dimensional scene conforming to the target two-dimensional image is reconstructed.
2. The differentiable SAR image renderer-based sample generation method of claim 1, wherein:
wherein step S1 comprises the sub-steps of:
step A1, establishing a world coordinate system O-XYZ, and placing a target to be imaged at an origin O;
step A2, setting a radar nominal position O ', setting the radar moving direction as O' X ', setting the radar irradiation direction as O' Z ', determining the direction of a third axis O' Y 'through O' X 'and O' Z ', establishing a radar coordinate system O' -X 'Y' Z 'centering on the radar, defining a plane O' Z 'X' as a mapping plane according to a projection mapping algorithm, and expressing a (k, l) th mapping unit as m (k,l) The plane O ' X ' Y ' is defined as the projection plane, and the (i, l) th element is denoted as p (i,l)
Step A3, selecting a grid as a representation mode of the imaging target, wherein the representation mode comprises a vertex setAnd a triangle face element set ++>I.e. there is N v Vertex sum N f Individual face elements->Representing the spatial coordinates of the ith vertex, +.>Representation ofIndex of three vertices belonging to the jth triangular bin, each triangular bin f j With a texture attribute S j And S is combined with j Set as a scalar;
step A4, collecting the vertexes of the grid targetsTransformation to vertex { v } under radar coordinate system i Then bin f j Vertex coordinates M of (2) j The definition formula of (2) is as follows:
in the formula (1), (x) j,n ,y j,n ,z j,n ) Is f j X, y, z coordinates of the nth vertex of (c).
3. The differentiable SAR image renderer-based sample generation method of claim 1, wherein:
the reconstructing with the contour map in step S2 includes the following sub-steps:
step B1, transforming the inclined distance under the radar coordinate system to obtain a bin f j Comprising a mapping unit m (k,l) The likelihood of (1) is defined as probabilityEuclidean distance d (m) (k,l) ,f j ) The calculation formula is as follows:
in the formula (2),for one sign indicating bit, representing the mapping unit m (k,l) At f j Inside of (2)Whether it is an external one or not,sigma is a scalar that controls the sharpness of the probability distribution, when sigma-0, the probability map converges to the exact shape of the bin boundary;
step B2, the contour map is a two-dimensional projection of the target on the mapping plane, and the value of each resolution unit in the contour mapThe definition formula of (2) is as follows:
in the formula (3), N f Representing the number of bins in the mesh target;
step B3, considerFor connecting profiles->And the coordinates M j Intermediate variable of>For M j The derivative of (c) is defined as follows:
step B4, defining a mixed loss function for supervising the geometric reconstruction process, wherein the mixed loss function not only measures the error between the rendered image and the true value, but also constrains the smoothness of the surface element, and the formula is as follows:
L=L sil1 L lap2 L flat (5)
in the formula (5), lambda 1 =0.03,λ 2 =0.0003, the weight decreases with decreasing importance, L sil Contour map I for rendering sil Sum profile truth valueThe negative cross-over ratio between the two represents the difference between the rendering profile and the truth profile, and the formula is as follows:
in the formula (6), the term "+. sil Andthe higher the overlap of L sil The smaller the value of (c) is,
in the formula (5), L lap For estimating the distance between adjacent nodes, L is the sum of squares of coordinates in the Laplace transform domain lap The smaller the value, the more compact in the mesh bin space, L lap The definition formula of (2) is as follows:
in the formula (7) of the present invention,is the coordinate of the grid vertex set V after Laplacian transformation, < >>Is->Coordinates of an nth axis of the ith vertex of (a);
in the formula (5),L flat For the sum of squares of cosine values of included angles of adjacent surface elements, the definition formula is as follows:
in equation (8), E is the set of all edges in the deformed mesh, θ i Is the included angle between two triangular surface elements sharing the ith edge;
step B5, after each iteration, obtaining that the correction of the target according to the current rendered image is gradientOpposite surface element f j For its coordinate matrix M according to the transferred gradient j The formula is as follows:
in the formula (9), the left side of the formula is the coordinates after adjustment, the right side is the coordinates before adjustment, and μ represents the learning rate when the gradient is decreasing.
4. The differentiable SAR image renderer-based sample generation method of claim 1, wherein:
the target reconstruction method based on the irradiation map in the step S2 comprises the following substeps:
step C1, in the rendered image rendered by the differential renderer, each resolution unit m (k,l) Cumulative scatter value atThe definition formula of (2) is as follows:
step C2, the illumination pattern is defined as the radar visible region, i.eCorresponding region, pair mapping unit m (k,l) Bin f j Contribution to it->Is defined as:
the definitions of the two components in equation (11) are as follows:
in the formula (12) of the present invention,representing the (i, l) th projection unit p (i,l) Assigned to f j Energy of->Representation->For nearby m (k,l) Is proportional to p (i,l) Reflection point and m on mapping plane (k,l) The distance between the two is related to each other,
for m (k,l) At all N f In each bin, there is at least one faceProbability of element being illuminatedThe formula is as follows:
step C3, the bias guide of the illumination map to the element grid is defined as:
step C4, defining a hybrid loss function for supervising the geometrical reconstruction process, the hybrid loss function measuring the error between the rendered image of the illumination map and the true value, the formula being as follows:
L=L ill1 L lap2 L flat (16)
in the formula (16), L ill Is a rendered illumination map I ill And illumination pattern truth valueThe negative cross-over ratio is used for measuring the difference between the rendered image and the true value, and the definition formula is as follows:
step C5, the rendered imageAmplitude and intensity->The probability is positive correlation, and the target image of the SAR image is producedThe illumination map is used as the true value, the manufactured illumination map is used as the rendering target of the differentiable renderer, and the geometric coordinate M of each bin is adjusted in a plurality of iterations j And obtaining the three-dimensional target after completing convergence.
5. The differentiable SAR image renderer-based sample generation method of claim 1, wherein:
the shadow-based target reconstruction method in step S2 includes the following sub-steps:
step D1, the surface element f j After projection and skew transformation to the mapping plane, it is defined with the mapping unit m (k,l) The probability of intersection in the mapping plane isThe ground clearance profile is defined as:
the upper boundary of the shadow is also the illumination area I ill Is a boundary dividing a bright area and a dark area, and the shadow is defined as the furthest boundary projected on the ground depending on the target along the radar beam irradiation direction:
I sha =I gsil -I gsil ⊙I ill (19)
step D2, for each cell of the shadow region, the gradient calculation for each bin is independent of each other,opposite surface element f j The derivatives of the coordinate matrix of (a) are:
step D3, defining a mixtureA loss function for supervising the geometrical reconstruction process, wherein L sha Is a rendered shadow map I sha True value of sum shadow mapNegative cross-over ratio between the two;
and D4, segmenting the SAR image from multiple views to obtain shadows of the corresponding targets, taking segmentation results as true values, taking segmentation results as rendering targets of the differentiable renderer, inverting a grid model corresponding to the shadows, completing three-dimensional reconstruction, and quantitatively evaluating reconstruction results after the reconstruction is completed.
6. The differentiable SAR image renderer-based sample generation method of claim 1, wherein:
the target reconstruction method based on the irradiation pattern and the shadow in the step S2 comprises the following substeps:
step E1, consider rendering illumination pattern I ill And rendering shadow map I shd True value of irradiation patternAnd shadow map true value->Mutually exclusive, defining a mixed loss function L for fusing the illumination pattern and the shadow comb For replacing L ill Or L sha The formula is as follows:
L comb opposite surface element f j Coordinate matrix M of (2) j The bias leads of (2) are:
e2, segmenting the shadow of the corresponding target from the SAR image with multiple views, linearly scaling the SAR image to obtain the irradiation map of the target, taking the shadow and the irradiation map which are manufactured by the SAR image as true values, and inverting the shadow and the irradiation map as rendering targets of the differentiable renderer to obtain a corresponding grid model, thereby completing three-dimensional reconstruction.
CN202210820257.6A 2022-07-13 2022-07-13 Target reconstruction method based on differentiable SAR image renderer Pending CN117437347A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210820257.6A CN117437347A (en) 2022-07-13 2022-07-13 Target reconstruction method based on differentiable SAR image renderer

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210820257.6A CN117437347A (en) 2022-07-13 2022-07-13 Target reconstruction method based on differentiable SAR image renderer

Publications (1)

Publication Number Publication Date
CN117437347A true CN117437347A (en) 2024-01-23

Family

ID=89554024

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210820257.6A Pending CN117437347A (en) 2022-07-13 2022-07-13 Target reconstruction method based on differentiable SAR image renderer

Country Status (1)

Country Link
CN (1) CN117437347A (en)

Similar Documents

Publication Publication Date Title
CN113178009B (en) Indoor three-dimensional reconstruction method utilizing point cloud segmentation and grid repair
CN108038906B (en) Three-dimensional quadrilateral mesh model reconstruction method based on image
Bellocchio et al. 3D surface reconstruction: multi-scale hierarchical approaches
CN106846392A (en) The method and apparatus of three-dimensional modeling
CN115761178A (en) Multi-view three-dimensional reconstruction method based on implicit neural representation
CN116543117B (en) High-precision large-scene three-dimensional modeling method for unmanned aerial vehicle images
CN116797742A (en) Three-dimensional reconstruction method and system for indoor scene
Fu et al. Differentiable SAR renderer and image-based target reconstruction
CN115147709B (en) Underwater target three-dimensional reconstruction method based on deep learning
CN113393577B (en) Oblique photography terrain reconstruction method
CN116822100B (en) Digital twin modeling method and simulation test system thereof
CN108171790B (en) A kind of Object reconstruction method dictionary-based learning
CN117274515A (en) Visual SLAM method and system based on ORB and NeRF mapping
Peng et al. 3D object reconstruction and representation using neural networks
CN112950786A (en) Vehicle three-dimensional reconstruction method based on neural network
Jarvis et al. 3D shape reconstruction of small bodies from sparse features
CN112686916A (en) Curved surface reconstruction system based on heterogeneous multi-region CT scanning data processing
Owaki et al. Hybrid physics-based and data-driven approach to estimate the radar cross-section of vehicles
CN116310228A (en) Surface reconstruction and new view synthesis method for remote sensing scene
Fu et al. Extension of differentiable SAR renderer for ground target reconstruction from multi-view images and shadows
CN117437347A (en) Target reconstruction method based on differentiable SAR image renderer
CN113436235B (en) Laser radar and visual point cloud initialization automatic registration method
Lei et al. SAR-NeRF: Neural radiance fields for synthetic aperture radar multi-view representation
CN113066161B (en) Modeling method of urban radio wave propagation model
CN216692729U (en) Indoor surveying and mapping trolley

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination