CN112835039A - Method and device for planar aperture regional nonlinear progressive phase iterative imaging - Google Patents
- Publication number
- CN112835039A (application number CN202011609241.8A)
- Authority
- CN
- China
- Prior art keywords
- distance
- point
- radar
- compensation factor
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/89—Radar or analogous systems specially adapted for specific applications for mapping or imaging
- G01S13/90—Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
Landscapes
- Engineering & Computer Science (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Physics & Mathematics (AREA)
- Electromagnetism (AREA)
- Computer Networks & Wireless Communication (AREA)
- General Physics & Mathematics (AREA)
- Radar Systems Or Details Thereof (AREA)
Abstract
The disclosure relates to a method and a device for planar aperture regional nonlinear progressive phase iterative imaging, wherein the method comprises the following steps: moving the planar aperture three-dimensional imaging radar along the elevation direction to form a synthetic aperture or real aperture radar with sampling points located on the same plane; performing distance error analysis based on the calculation of the stepping distance compensation factor; based on the area division of the sampling points of the planar aperture radar, the distance process from any point of the observation area to the sampling points is obtained through iterative calculation of the stepping distance compensation factors, so that the three-dimensional reconstruction of the observation area is completed. The device comprises: an error analysis module and an imaging module. According to the embodiments of the disclosure, a global point-by-point superposition imaging mode can be replaced by constructing the regional nonlinear progressive Doppler compensation factor, and the three-dimensional imaging efficiency is greatly improved.
Description
Technical Field
The disclosure relates to the field of radar three-dimensional imaging, in particular to a method and a device for planar aperture regional nonlinear progressive phase iterative imaging.
Background
Compared with other traditional micro-variation monitoring means such as GPS (Global Positioning System), the micro-variation monitoring radar has the advantages of all-weather operation, large coverage and high precision, and is widely applied to micro-deformation detection of high and steep slopes, bridges, buildings and the like. Three-dimensional imaging with the micro-variation monitoring radar enables three-dimensional resolution imaging and three-dimensional deformation information extraction for an observation area, effectively suppresses phenomena such as layover and top-bottom inversion caused by the observation geometry, and has broad application prospects in slope landslide and man-made structure monitoring.
Because the observation area is large, existing imaging algorithms struggle to perform three-dimensional reconstruction of a large scene area. The three-dimensional back-projection algorithm can reconstruct a region of interest point by point and therefore has certain advantages, but because the distance history is calculated point by point, the computational load is high, imaging efficiency suffers, and real-time monitoring requirements are difficult to meet. In summary, fast and accurate reconstruction of a large field-of-view region cannot yet be achieved.
Disclosure of Invention
The disclosure intends to provide a method and a device for planar aperture regional nonlinear progressive phase iterative imaging, which greatly improve the three-dimensional imaging efficiency by constructing regional nonlinear progressive Doppler compensation factors and replacing a global point-by-point superposition imaging mode.
According to one aspect of the present disclosure, there is provided a method for non-linear progressive phase iterative imaging in a planar aperture partition area, including:
moving the planar aperture three-dimensional imaging radar along the elevation direction to form a synthetic aperture or real aperture radar with sampling points located on the same plane;
performing distance error analysis based on the calculation of the stepping distance compensation factor;
based on the area division of the sampling points of the planar aperture radar, the distance process from any point of the observation area to the sampling points is obtained through iterative calculation of the stepping distance compensation factors, so that the three-dimensional reconstruction of the observation area is completed.
In some embodiments, wherein said performing a distance error analysis based on a step-wise distance compensation factor calculation comprises:
obtaining an azimuth distance compensation factor;
obtaining an elevation direction distance compensation factor;
and acquiring smaller azimuth stepping distance and elevation stepping distance according to the azimuth distance compensation factor and the elevation distance compensation factor so as to reduce distance errors.
In some embodiments, wherein,
the obtaining of the azimuth distance compensation factor includes:
setting the number of steps of the radar sampling point relative to the radar initial sampling point along the azimuth direction and along the elevation direction to obtain a Maclaurin expression of the sampling-point distance;
obtaining a Maclaurin approximate distance based on processing of the Maclaurin expression;
obtaining an azimuth distance compensation factor based on the Maclaurin approximate distance;
the obtaining of the elevation distance compensation factor includes:
setting the number of steps of the radar sampling point relative to the radar initial sampling point along the azimuth direction and along the elevation direction to obtain a Maclaurin expression of the sampling-point distance;
obtaining a Maclaurin approximate distance based on processing of the Maclaurin expression;
and obtaining an elevation distance compensation factor based on the Maclaurin approximate distance.
In some embodiments, wherein performing the distance error analysis comprises:
combining the azimuth distance compensation factor and the elevation distance compensation factor, and based on the distance from any point in the observation area to the radar initial sampling point, step-by-step solving the distance process from any point in the observation area to the radar sampling point;
and step-by-step solving a distance error generated by a distance process from any point in the observation area to the radar sampling point.
In some embodiments, the method of the planar aperture zoned nonlinear progressive phase iterative imaging includes:
compressing echo signals at each sampling point of the radar along the distance direction;
dividing an imaging area into a plurality of pixel areas, wherein each pixel area is provided with a plurality of pixels;
calculating the phase of each pixel point in the imaging area by adopting a regional stepping phase iteration method;
calculating the size of a sampling point area according to the stepping distance solving error, and dividing the plane synthetic aperture into a plurality of sampling point areas;
and calculating the value of each pixel point in the observation area point by point.
In some embodiments, the dividing the imaging area into a number of pixel areas, each pixel area having a number of pixels, includes:
setting parameters, including the size of each pixel and the coordinate of the selected initial pixel point;
calculating a distance compensation factor coefficient;
and calculating the size of the pixel area and the number of the pixel areas.
In some embodiments, the calculating the phase of each pixel point in the imaging region by using a partition stepping phase iteration method includes:
calculating an initial phase and a phase compensation factor based on the number of pixel point regions, the number of pixels contained in each pixel point region, the radar center frequency, a distance compensation factor coefficient and an initial value;
and (3) iteratively calculating the phase of the pixel point along the azimuth direction, including repeatedly performing: distance direction iteration, elevation direction iteration and pixel point area iteration.
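A hedged sketch of the stepping idea behind these iterations: for a quadratic distance model, the corresponding phase has constant second differences, so it can be advanced along a pixel row with two additions per pixel instead of a square root and a fresh phase evaluation per pixel. This illustrates only the principle, not the patent's exact recursion; the model coefficients R0, A, B, the pixel pitch d, and the center frequency fc below are assumed values.

```python
import numpy as np

c = 3.0e8
fc = 17.0e9                    # assumed radar center frequency (Hz)
# assumed quadratic distance model R(n) = R0 + B*(n*d) + A*(n*d)^2 along one pixel row
R0, A, B, d, N = 100.0, 4.9e-3, -0.02, 0.1, 50

k = 4.0 * np.pi * fc / c       # two-way phase per meter of range
phi = np.empty(N)
phi[0] = k * R0                # initial phase of the first pixel
d1 = k * (A * d**2 + B * d)    # first forward difference of the phase at n = 0
d2 = k * (2.0 * A * d**2)      # constant second difference of a quadratic
for n in range(N - 1):
    phi[n + 1] = phi[n] + d1   # advance the phase with one addition
    d1 += d2                   # advance the difference with another addition

# direct evaluation for comparison
n = np.arange(N)
phi_direct = k * (R0 + B * (n * d) + A * (n * d)**2)
```

Within a region the recursion is exact for the quadratic model, so the accumulated phase matches direct evaluation to floating-point precision.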
In some embodiments, the calculating the size of the sampling point area according to the step-wise distance solution error and dividing the planar synthetic aperture into a plurality of sampling point areas includes:
selecting a pixel point of an observation area and a radar initial sampling point;
calculating the distance process from the pixel point of the observation area to the radar initial sampling point;
calculating an azimuth distance compensation factor coefficient and an elevation distance compensation factor coefficient;
solving a Maclaurin approximation of the distance history;
solving an actual distance process;
and calculating the size of the sampling point area and the number of the divided areas, and calculating the sampling point area according to the phase error limiting condition.
In some embodiments, wherein calculating the value of each pixel point of the observation region point by point comprises:
and performing point-by-point reconstruction on each pixel point of the observation area based on an algorithm to realize three-dimensional resolution imaging of the observation area.
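For orientation, a minimal point-by-point back-projection sketch for a single simulated point target is given below. The geometry, frequency and target position are arbitrary assumptions, and the range-compressed echo is reduced to its phase term; the patent's actual algorithm additionally applies the regional stepping iterations described above rather than evaluating a square root per sample and pixel.

```python
import numpy as np

c, fc = 3.0e8, 17.0e9                        # assumed propagation speed and center frequency
# assumed planar aperture: 8 azimuth x 5 elevation sampling points in the XOZ plane
samples = np.array([[x, 0.0, z] for x in np.linspace(0.0, 0.3, 8)
                                 for z in np.linspace(0.0, 0.2, 5)])
target = np.array([1.0, 50.0, 2.0])          # assumed point target in the observation area

# simulated echo phase at each sampling point (ideal point target, unit amplitude)
R_t = np.linalg.norm(samples - target, axis=1)
echo = np.exp(-1j * 4.0 * np.pi * fc / c * R_t)

def bp_value(pixel):
    """Coherently sum the phase-compensated echoes for one observation-area pixel."""
    R = np.linalg.norm(samples - pixel, axis=1)
    return np.sum(echo * np.exp(1j * 4.0 * np.pi * fc / c * R))

peak = abs(bp_value(target))                            # all 40 echoes add coherently
off = abs(bp_value(target + np.array([5.0, 0.0, 0.0]))) # a pixel far off the target defocuses
```

At the true target position the compensated phasors align and the magnitude equals the number of sampling points; far from the target the sum is incoherent and much smaller.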
According to one of the schemes of the disclosure, a device for nonlinear progressive phase iterative imaging of a planar aperture in different areas is provided, and a synthetic aperture or real aperture radar configuration with sampling points located on the same plane is formed on the basis that a planar aperture three-dimensional imaging radar moves along the elevation direction; the device comprises:
an error analysis module configured for performing a distance error analysis based on the stepped distance compensation factor calculation;
and the imaging module is configured for dividing the area of the sampling point based on the planar aperture radar, so that the distance history from any point in the observation area to the sampling point is obtained through iterative calculation of a stepping distance compensation factor, and the three-dimensional reconstruction of the observation area is completed.
According to one aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement:
the method for the nonlinear progressive phase iterative imaging in the planar aperture subareas is disclosed.
According to the method and the device for planar aperture regional nonlinear progressive phase iterative imaging of the various embodiments, at least: the planar aperture three-dimensional imaging radar moves along the elevation direction to form a synthetic aperture or real aperture radar with sampling points on the same plane; distance error analysis is performed based on the calculation of the stepping distance compensation factor; and, based on area division of the sampling points of the planar aperture radar, the distance history from any point of an observation area to the sampling points is obtained through iterative calculation of the stepping distance compensation factor so as to complete three-dimensional reconstruction of the observation area. The sampling points of the planar aperture radar can thus be divided into areas and, within each area, the distance from the pixel points of the observation area to the sampling points is calculated in a stepping iterative manner, which reduces root and exponential operations during distance calculation and improves algorithm efficiency. The embodiments of the disclosure derive the specific steps for calculating the distance compensation factors, analyze the distance error caused by solving the distance history through step-wise iteration, and calculate the maximum extent of the sampling-point area division according to the phase error condition. The method further realizes area division of the observation area and solves the phase of the observation-area pixel points with a stepping phase iteration method, using a phase compensation factor instead of point-by-point solution, which reduces root and exponential operations during phase solution and further improves algorithm efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure, as claimed.
Drawings
In the drawings, which are not necessarily drawn to scale, like reference numerals may designate like components in different views. Like reference numerals with letter suffixes or like reference numerals with different letter suffixes may represent different instances of like components. The drawings illustrate various embodiments generally, by way of example and not by way of limitation, and together with the description and claims, serve to explain the disclosed embodiments.
Fig. 1 is a schematic diagram of a planar aperture micro-variation monitoring radar according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a step-wise distance solution according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram illustrating an azimuthal distance compensation factor solution according to an embodiment of the present disclosure;
FIG. 4 is a schematic illustration of an elevation distance compensation factor solution according to an embodiment of the disclosure;
FIG. 5 is a schematic diagram of an error analysis according to an embodiment of the present disclosure;
FIG. 6 is a schematic view of an observation region pixel point division according to an embodiment of the disclosure;
FIG. 7 is a diagram of the phase matrix PHI_3D of pixel points in an observation area according to an embodiment of the present disclosure;
Fig. 8 is a schematic diagram of three-dimensional reconstruction of a pixel point in an observation region according to an embodiment of the present disclosure;
FIG. 9 is a flow chart of a step-wise distance calculation according to an embodiment of the present disclosure;
FIG. 10 is an observation region pixel point division according to an embodiment of the disclosure;
fig. 11 is a flowchart illustrating a phase calculation process of a pixel point in an observation area according to an embodiment of the disclosure;
fig. 12 is a flowchart illustrating a sampling point area division according to an embodiment of the disclosure;
fig. 13 is a flowchart of a three-dimensional reconstruction of an observation region according to an embodiment of the present disclosure;
fig. 14 is a schematic flow chart diagram of an imaging method according to an embodiment of the disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described below clearly and completely with reference to the accompanying drawings of the embodiments of the present disclosure. It is to be understood that the described embodiments are only a few embodiments of the present disclosure, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the described embodiments of the disclosure without any inventive step, are within the scope of protection of the disclosure.
Unless otherwise defined, technical or scientific terms used herein shall have the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The word "comprising" or "comprises", and the like, means that the element or item listed before the word covers the element or item listed after the word and its equivalents, but does not exclude other elements or items.
To keep the following description of the embodiments of the present disclosure clear and concise, detailed descriptions of known functions and known components have been omitted.
As shown in fig. 1, the planar aperture three-dimensional imaging radar is moved along an elevation direction E by, but not limited to, a horizontally placed linear array to form a synthetic aperture radar or a real aperture radar with sampling points located on the same plane. The radar can realize three-dimensional resolution imaging of the observation area. In FIG. 1, the radar sampling point P_Radar(x, 0, z) lies in the XOZ plane, where x represents the abscissa of P_Radar and z represents its vertical coordinate; L_SynA denotes the radar azimuth (A) synthetic aperture length, and L_SynE the radar elevation (E) synthetic aperture length; Δx denotes the radar azimuth (A) sampling interval, and Δz the radar elevation (E) sampling interval. Note that the radar azimuth direction (A) is the positive direction of the X axis of the coordinate system, and the radar elevation direction (E) is the positive direction of the Z axis.
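The sampling geometry described above can be sketched as follows; the interval and count values are illustrative assumptions, not parameters from the patent.

```python
import numpy as np

dx, dz = 0.005, 0.005        # assumed azimuth / elevation sampling intervals (m)
Nx, Nz = 64, 32              # assumed numbers of azimuth / elevation sampling points
x0, z0 = 0.0, 0.0            # initial sampling point P_Radar-Start

# sampling points P_Radar(x, 0, z) all lie in the XOZ plane
xs = x0 + np.arange(Nx) * dx             # azimuth direction A: positive X axis
zs = z0 + np.arange(Nz) * dz             # elevation direction E: positive Z axis
X, Z = np.meshgrid(xs, zs, indexing="ij")
samples = np.stack([X, np.zeros_like(X), Z], axis=-1)   # shape (Nx, Nz, 3)

L_SynA = (Nx - 1) * dx       # azimuth synthetic aperture length
L_SynE = (Nz - 1) * dz       # elevation synthetic aperture length
```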
As one aspect, an embodiment of the present disclosure provides a method for non-linear progressive phase iterative imaging in a planar aperture partition area, including:
moving the planar aperture three-dimensional imaging radar along the elevation direction to form a synthetic aperture or real aperture radar with sampling points located on the same plane;
performing distance error analysis based on the calculation of the stepping distance compensation factor;
based on the area division of the sampling points of the planar aperture radar, the distance process from any point of the observation area to the sampling points is obtained through iterative calculation of the stepping distance compensation factors, so that the three-dimensional reconstruction of the observation area is completed.
One of the disclosed inventive concepts aims to realize area division of the sampling points of the planar synthetic aperture radar; within each area, the distance from the pixel points of the observation area to the sampling points is calculated in a stepping iterative manner, so that root and exponential operations during distance calculation are reduced and algorithm efficiency is improved.
The disclosure mainly lies in two aspects, namely, calculation of a stepping distance compensation factor and analysis of a distance error, and a plane aperture regional nonlinear progressive phase iterative imaging method executed on the basis of the calculation and the analysis. The implementation subject of the present disclosure is not limited as long as it is sufficient to obtain a device, an apparatus, or a specific imaging system capable of obtaining planar aperture zoned nonlinear progressive phase iterative imaging.
In some embodiments, the step-wise distance compensation factor calculation-based distance error analysis of the present disclosure includes:
obtaining an azimuth distance compensation factor;
obtaining an elevation direction distance compensation factor;
and acquiring smaller azimuth stepping distance and elevation stepping distance according to the azimuth distance compensation factor and the elevation distance compensation factor so as to reduce distance errors.
Specifically, the distance compensation factors ΔAzi and ΔEle in the azimuth and elevation directions are first calculated; the error Er of the step-wise distance calculation is then analyzed, and a method for calculating the maximum error of a sampling-point area is provided.
As shown in FIG. 2, P_n(x_n, y_n, z_n) is an arbitrary point of the observation area and P_Radar(x, 0, z) is a sampling point of the radar; x_n, y_n and z_n are the abscissa, ordinate and vertical coordinate of the observation-area point P_n; x and z are the abscissa and vertical coordinate of the sampling point P_Radar.

The radar starts sampling from x = x_0, z = z_0, i.e. from the initial sampling point P_Radar-Start(x_0, 0, z_0), and samples at fixed positions along the azimuth and elevation directions, with sampling intervals Δx and Δz respectively; the coordinates x, z of the radar sampling point P_Radar(x, 0, z) can be represented as:

x = x_0 + n_x·Δx  (1)

z = z_0 + n_z·Δz  (2)

wherein n_x represents the number of steps of the radar sampling point P_Radar relative to the radar initial sampling point P_Radar-Start along the azimuth direction, n_x = 0, 1, …, (N_x − 1); n_z represents the number of steps along the elevation direction, n_z = 0, 1, …, (N_z − 1); N_x denotes the number of radar azimuth sampling points and N_z the number of radar elevation sampling points. An arbitrary radar sampling point is thus expressed as P_Radar(x_0 + n_x·Δx, 0, z_0 + n_z·Δz).

To solve the distance from an arbitrary observation-area point P_n to a radar sampling point in a stepping manner, the distance compensation factors ΔAzi and ΔEle in the azimuth and elevation directions are calculated respectively; the distance history is then solved step by step with these compensation factors.
In some embodiments, the obtaining of an azimuth distance compensation factor of the present disclosure comprises:
setting the number of steps of the radar sampling point relative to the radar initial sampling point along the azimuth direction and along the elevation direction to obtain a Maclaurin expression of the sampling-point distance;
obtaining a Maclaurin approximate distance based on processing of the Maclaurin expression;
obtaining an azimuth distance compensation factor based on the Maclaurin approximate distance;
the obtaining of the elevation distance compensation factor includes:
setting the number of steps of the radar sampling point relative to the radar initial sampling point along the azimuth direction and along the elevation direction to obtain a Maclaurin expression of the sampling-point distance;
obtaining a Maclaurin approximate distance based on processing of the Maclaurin expression;
and obtaining an elevation distance compensation factor based on the Maclaurin approximate distance.
With respect to the azimuth distance compensation factor ΔAzi:

As shown in FIG. 3, the distance from any point P_n(x_n, y_n, z_n) in the observation area to the radar initial sampling point P_Radar-Start(x_0, 0, z_0) is:

R_0 = √((x_n − x_0)² + y_n² + (z_n − z_0)²)  (3)

When calculating the azimuth distance compensation factor, the number of steps of the radar sampling point relative to the radar initial sampling point P_Radar-Start is n_x along the azimuth direction and 0 along the elevation direction; the coordinates of the sampling point are expressed as P_Radar(x_0 + n_x·Δx, 0, z_0), where n_x = 0, 1, …, (N_x − 1). The distance from any point P_n(x_n, y_n, z_n) in the observation area to this radar sampling point is:

R_Azi(n_x) = √((x_n − x_0 − n_x·Δx)² + y_n² + (z_n − z_0)²)  (4)

wherein x_n, y_n and z_n are the abscissa, ordinate and vertical coordinate of the observation-area point P_n; x_0 and z_0 are the abscissa and vertical coordinate of the radar initial sampling point P_Radar-Start; n_x is the number of steps of the radar sampling point relative to P_Radar-Start along the azimuth direction; Δx is the radar azimuth sampling interval.

Substituting formula (3) into formula (4) yields:

R_Azi(n_x) = √(R_0² − 2(x_n − x_0)·n_x·Δx + (n_x·Δx)²)  (5)

Performing a Taylor expansion of formula (5) at n_x = 0 gives the Maclaurin expression:

R_Azi(n_x) = R_0 − ((x_n − x_0)/R_0)·n_x·Δx + ((y_n² + (z_n − z_0)²)/(2R_0³))·(n_x·Δx)² + o((n_x·Δx)²)  (6)

wherein o(·) is the Peano remainder.

The Peano remainder o(·) in formula (6) is ignored, and the distance calculation error caused by ignoring it is analyzed below.

Neglecting the Peano remainder o(·), the Maclaurin approximate distance from any point P_n(x_n, y_n, z_n) of the observation area to the radar sampling point is:

R̂_Azi(n_x) = A_x·(n_x·Δx)² + B_x·(n_x·Δx) + R_0  (7)

wherein,

A_x = (y_n² + (z_n − z_0)²) / (2R_0³)  (8)

B_x = −(x_n − x_0) / R_0  (9)

Differentiating formula (7) with respect to n_x gives the azimuth distance compensation factor:

ΔAzi(n_x) = 2A_x·Δx²·n_x + B_x·Δx  (10)

wherein R̂_Azi(n_x) is the Maclaurin approximate distance from any observation-area point P_n(x_n, y_n, z_n) to the radar sampling point; R_0 is the distance from P_n to the radar initial sampling point P_Radar-Start(x_0, 0, z_0); A_x and B_x are the azimuth distance compensation factor coefficients.
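A numerical sketch of this azimuth derivation (coordinates and intervals are assumed test values): the Maclaurin coefficients A_x and B_x are computed once, and the quadratic approximate distance is then accumulated step by step. Each step adds the derivative factor ΔAzi(n_x) of formula (10) plus the constant A_x·Δx², which turns the derivative into an exact forward difference of the quadratic; this detail is the author's reconstruction, not a statement of the patent's exact recursion.

```python
import numpy as np

# assumed observation point and radar initial sampling point
xn, yn, zn = 2.0, 100.0, 5.0
x0, z0 = 0.0, 0.0
dx = 0.005                    # assumed azimuth sampling interval (m)
Nx = 200

R0 = np.sqrt((xn - x0)**2 + yn**2 + (zn - z0)**2)      # distance to the start point
Ax = (yn**2 + (zn - z0)**2) / (2.0 * R0**3)            # azimuth coefficient A_x
Bx = -(xn - x0) / R0                                   # azimuth coefficient B_x

nx = np.arange(Nx)
R_exact = np.sqrt((xn - x0 - nx * dx)**2 + yn**2 + (zn - z0)**2)

R_iter = np.empty(Nx)
R_iter[0] = R0
for n in range(Nx - 1):
    # ΔAzi(n) = 2*Ax*dx^2*n + Bx*dx, plus Ax*dx^2 for an exact forward difference
    R_iter[n + 1] = R_iter[n] + (2.0 * Ax * dx**2 * n + Bx * dx) + Ax * dx**2

max_err = float(np.abs(R_iter - R_exact).max())
```

Over this short aperture the neglected Peano remainder keeps the accumulated distance error far below a typical wavelength.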
With respect to the elevation distance compensation factor ΔEle:

As shown in FIG. 4, the distance from an arbitrary point P_n(x_n, y_n, z_n) of the observation area to the radar initial sampling point P_Radar-Start(x_0, 0, z_0) is:

R_0 = √((x_n − x_0)² + y_n² + (z_n − z_0)²)  (11)

When calculating the elevation distance compensation factor, the number of steps of the radar sampling point relative to P_Radar-Start is 0 along the azimuth direction and n_z along the elevation direction; the coordinates of the sampling point are expressed as P_Radar(x_0, 0, z_0 + n_z·Δz), where n_z = 0, 1, …, (N_z − 1). The distance from an arbitrary point P_n(x_n, y_n, z_n) of the observation area to this radar sampling point is:

R_Ele(n_z) = √((x_n − x_0)² + y_n² + (z_n − z_0 − n_z·Δz)²)  (12)

wherein n_z is the number of steps of the radar sampling point relative to P_Radar-Start along the elevation direction, and Δz is the radar elevation sampling interval.

Substituting formula (11) into formula (12) yields:

R_Ele(n_z) = √(R_0² − 2(z_n − z_0)·n_z·Δz + (n_z·Δz)²)  (13)

Performing a Taylor expansion of formula (13) at n_z = 0 gives the Maclaurin expression:

R_Ele(n_z) = R_0 − ((z_n − z_0)/R_0)·n_z·Δz + (((x_n − x_0)² + y_n²)/(2R_0³))·(n_z·Δz)² + o((n_z·Δz)²)  (14)

wherein o(·) is the Peano remainder.

The Peano remainder o(·) in formula (14) is ignored, and the distance calculation error caused by ignoring it is analyzed below. Neglecting the Peano remainder o(·), the Maclaurin approximate distance from an arbitrary point P_n(x_n, y_n, z_n) of the observation area to the sampling point is:

R̂_Ele(n_z) = A_z·(n_z·Δz)² + B_z·(n_z·Δz) + R_0  (15)

wherein,

A_z = ((x_n − x_0)² + y_n²) / (2R_0³)  (16)

B_z = −(z_n − z_0) / R_0  (17)

Differentiating formula (15) with respect to n_z gives the elevation distance compensation factor:

ΔEle(n_z) = 2A_z·Δz²·n_z + B_z·Δz  (18)

wherein R̂_Ele(n_z) is the Maclaurin approximate distance from any observation-area point P_n(x_n, y_n, z_n) to the radar sampling point; R_0 is the distance from P_n to the radar initial sampling point P_Radar-Start(x_0, 0, z_0); A_z and B_z are the elevation distance compensation factor coefficients.
The flow chart of the algorithm for calculating the distance history step by step with the azimuth and elevation distance compensation factors is shown in FIG. 9. The inputs are an arbitrary observation-area point P_n(x_n, y_n, z_n), the radar initial sampling point P_Radar-Start(x_0, 0, z_0), the radar azimuth and elevation sampling intervals Δx and Δz, and the numbers of azimuth and elevation sampling points of a region, N_SubAzi and N_SubEle.
In some implementations, performing range error analysis of the present disclosure includes:
combining the azimuth distance compensation factor and the elevation distance compensation factor, and based on the distance from any point in the observation area to the radar initial sampling point, step-by-step solving the distance process from any point in the observation area to the radar sampling point;
and step-by-step solving a distance error generated by a distance process from any point in the observation area to the radar sampling point.
Specifically, in the step-wise distance calculation, the distance compensation factors ΔAzi(n_x) and ΔEle(n_z) are derived from formulas (7) and (15) after neglecting the Peano remainder o(·) (i.e., retaining finitely many terms), so the smaller the azimuth stepping distance (n_x·Δx) and the elevation stepping distance (n_z·Δz), the smaller the resulting distance error.

Next, based on formulas (7) and (15), the distance error of the step-wise distance calculation after neglecting the Peano remainder o(·) is analyzed.
As shown in FIG. 5, P_n(x_n, y_n, z_n) is an arbitrary point of the observation area, P_Radar-Start(x_0, 0, z_0) is the radar initial sampling point, and P_Radar(x_0 + n_x·Δx, 0, z_0 + n_z·Δz) is the sampling point reached after the radar steps n_x times along the azimuth direction and n_z times along the elevation direction.

The distance from P_n to P_Radar-Start is denoted R_0, and the actual distance from P_n to the sampling point P_Radar is denoted R(n_x, n_z). The distance history from P_n to the sampling point, solved step by step, is:

R̂(n_x, n_z) = R_0 + A_x·(n_x·Δx)² + B_x·(n_x·Δx) + A_z·(n_z·Δz)² + B_z·(n_z·Δz)  (19)

wherein A_x and B_x are the azimuth distance compensation factor coefficients, and A_z and B_z the elevation distance compensation factor coefficients; n_x and n_z are the numbers of steps of the radar sampling point relative to P_Radar-Start along the azimuth and elevation directions; Δx and Δz are the radar azimuth and elevation sampling intervals.

The distance error Er produced by step-wise solving the distance history from P_n to the radar sampling point is expressed as:

Er = |R̂(n_x, n_z) − R(n_x, n_z)|  (20)

wherein R̂(n_x, n_z) is the step-wise solved distance history and R(n_x, n_z) is the actual distance history from P_n to the sampling point.
From the geometric relationship:
where the first quantity is the distance-history difference from the arbitrary observation area point P_n to the radar start sampling point P_Radar-Start and to the sampling point, and the second is the distance from the radar start sampling point P_Radar-Start to the sampling point.
In summary, when the observation area point P_n lies on the extension line of the radar start sampling point P_Radar-Start and the sampling point, the error Er of the step-wise distance solution is the largest.
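The dependence of the step-wise distance error on the step distance can be illustrated with a small numeric sketch. The geometry and the second-order (Maclaurin-style) coefficient forms below are assumptions made for illustration, standing in for the patent's equations (7) and (15), not reproducing them:

```python
import math

def exact_dist(px, py, pz, sx, sz):
    # Exact Euclidean distance from observation point P to sampling point (sx, 0, sz).
    return math.sqrt((px - sx) ** 2 + py ** 2 + (pz - sz) ** 2)

def stepwise_dist(px, py, pz, x0, z0, dx, nx):
    # Second-order Maclaurin-style approximation along azimuth only (n_z = 0):
    # R(nx*dx) ~ R0 + a*(nx*dx) + b*(nx*dx)^2, with a = dR/du|0 and
    # b = (1/2)*d^2R/du^2|0 (an assumed form of the patent's A_x, B_x).
    r0 = exact_dist(px, py, pz, x0, z0)
    a = (x0 - px) / r0
    b = (py ** 2 + (pz - z0) ** 2) / (2 * r0 ** 3)
    u = nx * dx
    return r0 + a * u + b * u ** 2

x0, z0, dx = -0.8, -0.8, 0.008   # start point and azimuth step from the example
# The error grows with the azimuth step distance nx*dx:
err_small = abs(stepwise_dist(10, 1000, 0, x0, z0, dx, 10)
                - exact_dist(10, 1000, 0, x0 + 10 * dx, z0))
err_large = abs(stepwise_dist(10, 1000, 0, x0, z0, dx, 200)
                - exact_dist(10, 1000, 0, x0 + 200 * dx, z0))
print(err_small, err_large)
```

At the example's kilometre-scale range the residual error over a 1.6 m aperture stays far below a wavelength, which is what makes the step-wise iteration usable.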
In some embodiments, referring to fig. 14, the planar aperture zoned nonlinear progressive phase iterative method of the present disclosure includes:
compressing echo signals at each sampling point of the radar along the distance direction;
dividing an imaging area into a plurality of pixel areas, wherein each pixel area is provided with a plurality of pixels;
calculating the phase of each pixel point in the imaging area by adopting a regional stepping phase iteration method;
calculating the size of a sampling point area according to the stepping distance solving error, and dividing the plane synthetic aperture into a plurality of sampling point areas;
and calculating the value of each pixel point in the observation area point by point.
According to the embodiments of the present disclosure, area division is applied to the sampling points of the planar aperture radar, so that the distance history from any point in the observation area to a sampling point can be solved iteratively through step-wise compensation factors, thereby reducing square-root operations.
As shown in fig. 2, the radar starts from x = x_0, z = z_0, i.e., point P_Radar-Start(x_0, 0, z_0), and samples along the azimuth and elevation directions, with sampling intervals Δx and Δz respectively; the sampling point is the point reached after the radar steps n_x and n_z times along the azimuth and elevation directions. The radar echo at the sampling point is expressed as:
f = {f_min, …, f_max} (23)
where f is the stepped frequency of the radar echo, f_min is the lowest stepped frequency of the radar, f_max is the highest stepped frequency of the radar, C is the speed of light, and the remaining term is the distance from the radar sampling point to the pixel points of the imaging region.
Specifically, the imaging method of the present embodiment includes:
step S1: distance compression, in which the echo signals at each sampling point of the radar are compressed along the distance direction, an inverse Fourier transform (IFFT) method can be adopted, and the distance compression signals are
where f_c is the center frequency of the radar echo, which may be taken as (f_min + f_max)/2;
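Step S1 can be sketched for a single point target: generate the stepped-frequency echo at one sampling point and compress it along the distance direction with an inverse DFT. The echo model and the reduced number of frequency points are assumptions made for the sketch:

```python
import cmath

C = 3e8                               # speed of light (m/s)
f_min, f_max, N = 77e9, 77.25e9, 251  # fewer frequency points than the example's 5001, for speed
d_f = (f_max - f_min) / (N - 1)       # frequency step
R = 30.0                              # assumed point-target range (m), inside C/(2*d_f)

# Stepped-frequency echo at one sampling point: S(f) = exp(-j*4*pi*f*R/C)
echo = [cmath.exp(-4j * cmath.pi * (f_min + n * d_f) * R / C) for n in range(N)]

# Distance compression by inverse DFT over the frequency samples
def idft(x):
    n_pts = len(x)
    return [sum(x[n] * cmath.exp(2j * cmath.pi * n * k / n_pts) for n in range(n_pts)) / n_pts
            for k in range(n_pts)]

sr = idft(echo)
peak_bin = max(range(N), key=lambda k: abs(sr[k]))
print(peak_bin)   # target appears near bin N*d_f*(2R/C)
```

The peak lands at the bin closest to N·Δf·(2R/C), i.e., the target's two-way delay mapped onto the distance-compressed axis.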
Step S2: dividing the pixel points of the observation region into an imaging region Mx×My×MzEach pixel is divided into KPixel regions, each pixel region including M pixelsSub×MSub×MSub(ii) a In the kth pixel region, the pixel coordinates are expressed asWherein m isx=1,2,…,MSub、my=1,2,…,MSub、mz=1,2,…,MSub;Mx、My、MzRespectively represents the number of pixels in the azimuth direction, the distance direction and the elevation direction of an observation area,is a pixel pointThe abscissa of the (c) axis of the (c),is a pixel pointThe ordinate of (a) is,is a pixel pointVertical coordinates of (a);
step S3: calculating the phase of each pixel point, using the partitioned stepping phase iteration method to compute the phase PHI_3D of each pixel point in the imaging area;
Step S4: dividing a plane sampling point area, calculating the size of the sampling point area according to a stepping distance solving error Er, and dividing the plane synthetic aperture into I sampling point areas;
step S5: three-dimensional reconstruction of the observation area, calculating the value of each pixel point of the observation area point by point to complete the three-dimensional reconstruction of the observation area.
The parameter settings in the embodiments of the present disclosure are, for example: the lowest stepped frequency of the radar f_min is 77 GHz, the highest frequency f_max is 77.25 GHz, the radar bandwidth B is 250 MHz, and the number of frequency points is 5001; the coordinates of the radar start sampling point are P_Radar-Start(−0.8, 0, −0.8), the radar sampling intervals Δx and Δz along the azimuth and elevation directions are both 0.008 m, the numbers of sampling points N_x, N_z along the azimuth and elevation directions are both 201, and the synthetic aperture lengths L_SynA, L_SynE in the radar azimuth and elevation directions are both 1.6 m; for the observation region pixel points, the starting abscissa x_1 is −50 m and the ending abscissa is 50 m, the starting ordinate y_1 is 950 m and the ending ordinate is 1050 m, and the starting vertical coordinate z_1 is −50 m and the ending vertical coordinate is 50 m.
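These example parameters are internally consistent, which can be checked in a few lines (the range-resolution formula C/(2B) is standard radar theory, not stated in the excerpt):

```python
C = 3e8                        # speed of light (m/s)
f_min, f_max = 77e9, 77.25e9   # stepped-frequency band from the example
B = f_max - f_min              # radar bandwidth: 250 MHz
range_res = C / (2 * B)        # standard range resolution: 0.6 m

N_x, dx = 201, 0.008           # azimuth sampling points and interval
L_syn = (N_x - 1) * dx         # synthetic aperture length: matches the stated 1.6 m
print(B, range_res, L_syn)
```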
Regarding observation region pixel point division: dividing the imaging area into a number of pixel regions, each pixel region having a number of pixels, includes:
setting parameters, including the size of each pixel and the coordinate of the selected initial pixel point;
calculating a distance compensation factor coefficient;
and calculating the size of the pixel area and the number of the pixel areas.
Specifically, in step S2, the M_x × M_y × M_z pixels in the imaging region are divided into K pixel regions, each pixel region containing M_Sub × M_Sub × M_Sub pixels; the method flowchart is shown in fig. 10, the imaging area pixel division is shown in fig. 6, and the steps are as follows:
step S21: setting parameters, where each pixel has size Δm_x × Δm_y × Δm_z, and the coordinates of the initial pixel point are selected as m_111(x_1, y_1, z_1);
Step S22: calculating a distance compensation factor coefficient, wherein the azimuth distance compensation factor coefficient is as follows:
the distance-direction compensation factor coefficient is:
the elevation distance compensation factor coefficient is as follows:
step S23: calculating the pixel point region size M_Sub and the number of regions K; the progressive phase error is:
Letting E_PHI ≤ π/4 yields M_Sub, and the number K of pixel point regions is then:
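Since the progressive phase error expression itself is carried by an equation image not reproduced here, the following sketch uses an assumed quadratic model of E_PHI only to show how the constraint E_PHI ≤ π/4 yields M_Sub and K; the model form and every numeric parameter (dm, b_max, M_total) are hypothetical:

```python
import math

C = 3e8
f_c = (77e9 + 77.25e9) / 2   # center frequency from the example
dm = 0.5                     # assumed pixel size (m)
b_max = 1e-6                 # assumed worst-case quadratic distance coefficient (1/m)

# Hypothetical quadratic model of the progressive phase error accumulated over
# one pixel region: E_PHI ~ (4*pi*f_c/C) * b_max * (M_Sub*dm)**2 — an assumed
# stand-in for the patent's equation behind "E_PHI <= pi/4".
def m_sub_max(limit=math.pi / 4):
    u = math.sqrt(limit * C / (4 * math.pi * f_c * b_max))   # max region extent (m)
    return max(1, int(u / dm))

M_total = 200                            # assumed pixels per axis
M_Sub = m_sub_max()
K = math.ceil(M_total / M_Sub) ** 3      # number of cubic pixel regions
print(M_Sub, K)
```

Whatever the exact error expression, the structure is the same: a tighter phase budget shrinks the admissible region size M_Sub and increases the region count K.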
regarding the calculation of the phase of each pixel: the method for calculating the phase of each pixel point in the imaging area by adopting the partitioned stepping phase iteration method comprises the following steps:
calculating an initial phase and a phase compensation factor based on the number of pixel point regions, the number of pixels contained in each pixel point region, the radar center frequency, a distance compensation factor coefficient and an initial value;
iterating the phase of each pixel point, including repeating: distance direction iteration, elevation direction iteration and pixel point area iteration.
Specifically, in step S3, the phase of each pixel point is calculated using the partitioned stepping phase iteration method; the flowchart is shown in fig. 11. The specific steps are as follows:
step S31: setting parameters: the number of pixel regions K, each pixel region containing M_Sub × M_Sub × M_Sub pixels, the radar center frequency f_c, the distance compensation factor coefficients a_x, b_x, a_y, b_y, a_z, b_z, and the initial values m_x = 1, m_y = 1, m_z = 1, k = 1;
Step S32: calculating the initial phase and the phase compensation factors; the initial pixel point coordinates of the k-th pixel point region are m_k-111(x_k-1, y_k-1, z_k-1), and the phase PHI_k-111 of the initial pixel point of the k-th pixel point region is
The pixel point azimuth phase compensation factor is as follows:
the pixel point distance phase compensation factor is:
the pixel point elevation phase compensation factor is as follows:
step S33: iteratively calculating the pixel point phase along the azimuth direction, letting m_x = m_x + 1; if m_x ≤ M_Sub, the phase of the azimuth pixel point is iteratively calculated as:
otherwise, go to step S34;
step S34: distance direction iteration, letting m_x = 1, m_y = m_y + 1; if m_y ≤ M_Sub, calculate:
and repeating the step S33, otherwise, performing the step S35;
step S35: elevation direction iteration, letting m_y = 1, m_z = m_z + 1; if m_z ≤ M_Sub, calculate:
and repeating steps S33 and S34, otherwise, performing step S36;
step S36: pixel region iteration, letting m_z = 1 and k = k + 1; if k ≤ K, repeat steps S33, S34 and S35; otherwise, end the process.
At this point, the phase PHI_3D of each pixel point in the three-dimensional observation area has been obtained, as shown in fig. 7.
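The gain from the stepping phase iteration comes from the fact that a quadratic phase has a constant second difference, so each pixel's phase follows from its neighbour's by a single addition of a running compensation term. A minimal one-dimensional sketch (the quadratic per-axis form and the coefficient values are assumptions standing in for the patent's compensation factors):

```python
# Assumed per-axis phase model: PHI(m) = phi0 + a*m + b*m**2.
phi0, a, b = 0.7, 0.013, 2.5e-4   # assumed initial phase and factor coefficients
M = 64                            # pixels along one axis

# Direct computation: one multiply-heavy evaluation per pixel.
direct = [phi0 + a * m + b * m * m for m in range(M)]

# Stepped iteration: PHI(m+1) = PHI(m) + comp, where comp is the running
# first difference; the second difference of a quadratic is the constant 2*b.
iterative = [phi0]
comp = a + b                      # PHI(1) - PHI(0)
for _ in range(M - 1):
    iterative.append(iterative[-1] + comp)
    comp += 2 * b

max_dev = max(abs(d - i) for d, i in zip(direct, iterative))
print(max_dev)
```

The two sequences agree to floating-point precision, while the iterative form needs no per-pixel square or square root.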
Regarding the division of planar sampling point regions: calculating the sampling point region size from the step-wise distance solution error and dividing the planar synthetic aperture into a plurality of sampling point regions includes:
selecting a pixel point of an observation area and a radar initial sampling point;
calculating the distance process from the pixel point of the observation area to the radar initial sampling point;
calculating an azimuth distance compensation factor coefficient and an elevation distance compensation factor coefficient;
solving a Meglanlin approximation of the distance history;
solving an actual distance process;
and calculating the size of the sampling point area and the number of the divided areas, and calculating the sampling point area according to the phase error limiting condition.
From the foregoing step-wise distance error analysis, when the observation area point P_n lies on the extension line of the radar start sampling point P_Radar-Start and the sampling point, the error Er of the step-wise distance solution is the largest; therefore, the largest sampling point region into which the planar synthetic aperture can be divided is calculated from the phase limiting condition at this maximum error.
For convenience of calculation, when dividing the sampling point regions, starting from the radar start sampling point P_Radar-Start(x_0, 0, z_0), sampling point regions with equal numbers of azimuth sampling points N_SubAzi and elevation sampling points N_SubEle are selected; a pixel point in the observation region is selected, and the largest divisible sampling point region is calculated; the flowchart is shown in fig. 12. The specific steps are as follows:
step S41: selecting a pixel point in the observation area, the radar start sampling point P_Radar-Start(x_0, 0, z_0), and the coordinates of the radar sampling point;
Step S42: calculating the distance from the observation region pixel point to the radar start sampling point P_Radar-Start(x_0, 0, z_0), i.e., replacing P_n(x_n, y_n, z_n) in equation (3) with the selected pixel point, expressed as:
where x_0 is the abscissa of the radar start sampling point P_Radar-Start and z_0 is its vertical coordinate; and y_1 is the ordinate of the observation region pixel point, whose abscissa and vertical coordinate are defined likewise.
Step S43: calculating the azimuth distance compensation factor coefficients A_x, B_x; replacing P_n(x_n, y_n, z_n) in equations (8) and (9) with the selected pixel point, they can be written as:
step S44: calculating the elevation distance compensation factor coefficients A_z, B_z; replacing P_n(x_n, y_n, z_n) in equations (16) and (17) with the selected pixel point, they can be written as:
where A_x, B_x, A_z, B_z are the distance compensation factor coefficients; x_0 is the abscissa of the radar start sampling point P_Radar-Start and z_0 is its vertical coordinate; and y_1 is the ordinate of the observation region pixel point, whose abscissa and vertical coordinate are defined likewise.
Step S45: solving the Maclaurin approximation of the distance history; replacing P_n(x_n, y_n, z_n) in equation (19) with the selected pixel point, the distance history from the region pixel point to the radar sampling point can be solved as:
where n_x, n_z are the numbers of steps of the radar sampling point relative to the start sampling point P_Radar-Start along the azimuth and elevation directions, and Δx, Δz are the radar sampling intervals in the azimuth and elevation directions, respectively; A_x, B_x are the azimuth distance compensation factor coefficients, and A_z, B_z are the elevation distance compensation factor coefficients.
Step S46: solving the actual distance history; the actual distance from the pixel point to the radar sampling point is:
step S47: calculating the size of the sampling point region and the number of divided regions, and calculating the sampling point region according to the phase error limit condition, i.e.
where f_c is the center frequency of the radar echo, the first distance is the step-wise solved distance history from the pixel point to the radar sampling point, and the second is the actual distance from the pixel point to the radar sampling point; substituting equation (45) or (46) into equation (47), when n_x = n_z, the maximum value N_Sub of n_x is obtained; thus, the radar sampling points are divided into I_Azi = N_x/N_Sub regions along the azimuth direction and I_Ele = N_z/N_Sub regions along the elevation direction, and the number of regions is:
I = I_Azi · I_Ele (48)
The coordinates of the starting point of the i-th sampling point region are P_i-Radar-Start(x_i-0, 0, z_i-0), i = 1, 2, …, I.
Regarding the three-dimensional reconstruction of the observation region: calculating the value of each pixel point in the observation area point by point includes:
and performing point-by-point reconstruction on each pixel point of the observation area based on an algorithm to realize three-dimensional resolution imaging of the observation area.
Specifically, in step S5, as shown in fig. 8, each pixel point in the observation region is reconstructed point by point by the algorithm to realize three-dimensionally resolved imaging of the observation region; the flowchart is shown in fig. 13. The specific steps are as follows:
step S501: setting parameters: the radar echo distance-compressed signal Sr, the number of observation region pixel points M_x × M_y × M_z, the number of radar sampling point regions I, the size of each sampling point region N_Sub × N_Sub, the radar sampling points of the i-th sampling point region, the radar azimuth sampling interval Δx, the radar elevation sampling interval Δz, and the pixel point phases PHI_3D; the number of pixels in the observation area is Q, with initial values q = 1, i = 1, n_x = 0, n_z = 0;
Step S502: calculating the initial distance and distance compensation factors of the i-th sampling point region for the q-th pixel point; the starting point coordinates of the i-th sampling point region are P_i-Radar-00(x_i-0, 0, z_i-0); according to equation (3), the initial distance from pixel point m_q to the sampling point P_i-Radar-00 is calculated; then the compensation factor coefficients A_i-x, B_i-x, A_i-z, B_i-z are calculated according to equations (8), (9), (16) and (17); further, the distance compensation factors ΔAzi_i(n_x), ΔEle_i(n_z) are calculated from equations (10) and (18);
Step S503: iteratively calculating the distance history from the q-th pixel point m_q to the i-th sampling point region;
step S5031: iteratively calculating the distance history of the azimuth sampling points, letting n_x = n_x + 1; if n_x < N_Sub, the distance history from the q-th pixel point m_q to the sampling point is calculated as:
otherwise, go to step S5032;
step S5032: iterating along the elevation direction, letting n_x = 0, n_z = n_z + 1; if n_z < N_Sub, calculate
and repeat step S5031; otherwise let n_z = 0 and execute step S504;
step S504: iterating over the sampling point regions, letting the sampling point region index i = i + 1; if i ≤ I, repeat steps S502 to S503; otherwise proceed to step S505, the distance histories from the q-th pixel point m_q to all radar sampling points having been obtained;
where N_x, N_z respectively denote the numbers of radar sampling points in the azimuth and elevation directions;
step S505: calculating, for the pixel point, the peak positions of the sampling points in the distance-compressed signal,
where B is the radar signal bandwidth, in this example (f_max − f_min); f_min is the lowest stepped frequency, f_max is the highest stepped frequency, and τ_q is an N_z × N_x matrix;
where f_c is the radar center frequency, in this example (f_min + f_max)/2, and φ_q is an N_z × N_x matrix;
step S507: according to the peak positions of the sampling points in the compressed signal calculated by equation (52), the corresponding peak value of the compressed signal at each sampling point is extracted and expressed as,
where r_q denotes the distance histories from the q-th pixel point m_q to all radar sampling points, and Sr_q is an N_z × N_x matrix;
m_q = Σ Sr_q .* exp{jφ_q} (55)
where ".*" denotes element-wise multiplication at corresponding matrix positions, and "Σ" denotes summation over the matrix elements;
step S509: iterating over the observation region pixel points, letting the pixel point index q = q + 1; if q ≤ M_x × M_y × M_z, let i = 1 and repeat steps S502 to S508; otherwise execute step S510; the values of all region pixel points are then obtained as:
m = {m_q} (56)
where m_q denotes the value of the q-th pixel point of the observation area, and m is an M_x × M_y × M_z three-dimensional matrix;
step S510: region reconstruction, multiplying the observation area pixel point values m element-wise with the observation area pixel point phases PHI_3D to complete the reconstruction of the observation area pixels,
m_complex = m .* exp{−j · PHI_3D} (57)
where m is the pixel point value of the observation region, PHI_3D is the pixel point phase of the observation region, and ".*" denotes element-wise multiplication of corresponding matrix elements.
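The coherent-summation core of equations (55)-(57) can be sketched with a single-frequency, single-target toy example; the stepped-frequency distance compression is replaced by one frequency, so this shows only the phase-matched summation over the planar aperture, under an assumed geometry:

```python
import cmath, math

C, f_c = 3e8, 77.125e9
N = 21                                       # small planar aperture: N x N sampling points
dx = 0.01
xs = [(n - N // 2) * dx for n in range(N)]   # azimuth/elevation sampling positions
target = (0.0, 5.0, 0.0)                     # assumed point target (x, y, z)

def dist(p, sx, sz):
    return math.sqrt((p[0] - sx) ** 2 + p[1] ** 2 + (p[2] - sz) ** 2)

# Single-frequency echo at each planar sampling point — a simplified stand-in
# for the stepped-frequency, distance-compressed data of the patent.
echo = {(sx, sz): cmath.exp(-4j * math.pi * f_c * dist(target, sx, sz) / C)
        for sx in xs for sz in xs}

# Point-by-point reconstruction: back-propagate each candidate pixel's phase
# and sum coherently over the aperture (the ".*" then "Σ" of equation (55)).
candidates = [(x, 5.0, 0.0) for x in (-0.2, -0.1, 0.0, 0.1, 0.2)]
values = []
for p in candidates:
    acc = sum(echo[(sx, sz)] * cmath.exp(4j * math.pi * f_c * dist(p, sx, sz) / C)
              for sx in xs for sz in xs)
    values.append(abs(acc))
best = candidates[values.index(max(values))]
print(best)
```

Only at the true target position do all aperture terms add in phase, so the reconstructed magnitude peaks there; mismatched candidates decorrelate across the aperture and largely cancel.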
As one of the solutions of the present disclosure, the present disclosure further provides a device for planar aperture regional nonlinear progressive phase iterative imaging, wherein a synthetic aperture or real aperture radar configuration with sampling points located on the same plane is formed by moving a planar aperture three-dimensional imaging radar along the elevation direction; the device comprises:
an error analysis module configured for performing a distance error analysis based on the stepped distance compensation factor calculation;
and the imaging module, configured for area division of the sampling points of the planar aperture radar, so that the distance history from any point of the observation area to the sampling points is obtained through iterative calculation of step-wise distance compensation factors, thereby completing the three-dimensional reconstruction of the observation area.
In combination with the foregoing example, in some implementations, the error analysis module of the present disclosure may be further configured to:
obtaining an azimuth distance compensation factor;
obtaining an elevation direction distance compensation factor;
and acquiring smaller azimuth stepping distance and elevation stepping distance according to the azimuth distance compensation factor and the elevation distance compensation factor so as to reduce distance errors.
In combination with the foregoing example, in some implementations, the error analysis module of the present disclosure may be further configured to:
setting the numbers of steps of the radar sampling point relative to the radar start sampling point along the azimuth direction and along the elevation direction to obtain a Maclaurin expression of the radar sampling point;
obtaining a Maclaurin approximate distance based on processing of the Maclaurin expression;
obtaining an azimuth distance compensation factor based on the Maclaurin approximate distance;
setting the numbers of steps of the radar sampling point relative to the radar start sampling point along the azimuth direction and along the elevation direction to obtain a Maclaurin expression of the radar sampling point;
obtaining a Maclaurin approximate distance based on processing of the Maclaurin expression;
and obtaining an elevation direction distance compensation factor based on the Maclaurin approximate distance.
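The Maclaurin coefficients named above can be checked numerically: the analytic first- and second-order azimuth coefficients (assumed forms A_x = (x_0 − x_n)/R_0 and B_x = (y_n² + (z_n − z_0)²)/(2·R_0³), standing in for the patent's equations (8)-(9)) should match central finite differences of the exact distance:

```python
import math

x0, z0 = -0.8, -0.8           # radar start sampling point from the example
px, py, pz = 10.0, 1000.0, 20.0   # assumed observation point

def R(u):   # exact distance as the sampling point steps u metres along azimuth
    return math.sqrt((px - x0 - u) ** 2 + py ** 2 + (pz - z0) ** 2)

r0 = R(0.0)
# assumed analytic Maclaurin coefficients: first derivative and half the second
Ax = (x0 - px) / r0
Bx = (py ** 2 + (pz - z0) ** 2) / (2 * r0 ** 3)

h = 1e-3
Ax_num = (R(h) - R(-h)) / (2 * h)                 # central first difference
Bx_num = (R(h) - 2 * r0 + R(-h)) / (2 * h ** 2)   # half of the second difference
print(abs(Ax - Ax_num), abs(Bx - Bx_num))
```

Both deviations sit at finite-difference noise level, confirming that the compensation factor coefficients are simply the first two derivatives of the distance at the start sampling point.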
In combination with the foregoing example, in some implementations, the error analysis module of the present disclosure may be further configured to:
combining the azimuth distance compensation factor and the elevation distance compensation factor, and based on the distance from any point in the observation area to the radar initial sampling point, step-by-step solving the distance process from any point in the observation area to the radar sampling point;
and step-by-step solving a distance error generated by a distance process from any point in the observation area to the radar sampling point.
In combination with the foregoing examples, in some implementations, the imaging module of the present disclosure may be further configured to:
compressing echo signals at each sampling point of the radar along the distance direction;
dividing an imaging area into a plurality of pixel areas, each pixel area having a plurality of pixels, comprising: setting parameters, including the size of each pixel and the coordinate of the selected initial pixel point; calculating a distance compensation factor coefficient; calculating the size and the number of pixel areas;
calculating the phase of each pixel point in the imaging area by adopting the partitioned stepping phase iteration method, comprising: calculating an initial phase and phase compensation factors based on the number of pixel point regions, the number of pixels contained in each pixel point region, the radar center frequency, the distance compensation factor coefficients and initial values; and iteratively calculating the phase of the pixel points along the azimuth direction, including repeatedly performing: distance direction iteration, elevation direction iteration and pixel point region iteration;
calculating the sampling point region size according to the step-wise distance solution error, and dividing the planar synthetic aperture into a plurality of sampling point regions, comprising: selecting a pixel point of the observation area and the radar start sampling point; calculating the distance history from the observation area pixel point to the radar start sampling point; calculating the azimuth distance compensation factor coefficients and the elevation distance compensation factor coefficients; solving the Maclaurin approximation of the distance history; solving the actual distance history; and calculating the sampling point region size and the number of divided regions, the sampling point regions being calculated according to the phase error limiting condition;
calculating the value of each pixel point of the observation area point by point, comprising the following steps: and performing point-by-point reconstruction on each pixel point of the observation area based on an algorithm to realize three-dimensional resolution imaging of the observation area.
Specifically, one of the inventive concepts of the present disclosure is to form a synthetic aperture radar or a real aperture radar with sampling points located on the same plane by moving at least a planar aperture three-dimensional imaging radar along the elevation direction; to perform distance error analysis based on the calculation of the step-wise distance compensation factor; and, based on area division of the sampling points of the planar aperture radar, to obtain the distance history from any point of the observation area to the sampling points through iterative calculation of the step-wise distance compensation factor, so as to complete the three-dimensional reconstruction of the observation area. In this way the sampling points of the planar aperture radar can be divided into regions, and within each region the distance from the observation area pixel points to the sampling points is calculated by step-wise iteration, which reduces the square-root operations during distance calculation and improves the algorithm efficiency. The embodiments of the disclosure derive the specific method steps for calculating the distance compensation factors, analyze the distance error caused by the step-wise iterative solution of the distance history, and calculate the maximum range of sampling point region division according to the phase error condition. The method also realizes area division of the observation area and solves the phase of the observation area pixel points with the stepping phase iteration method, using phase compensation factors instead of point-by-point computation, which reduces the square-root operations during phase solution and further improves the algorithm efficiency.
As one of the solutions of the present disclosure, the present disclosure further provides a computer-readable storage medium on which computer-executable instructions are stored; when the computer-executable instructions are executed by a processor, the method for planar aperture regional nonlinear progressive phase iterative imaging is performed, which mainly includes:
moving the planar aperture three-dimensional imaging radar along the elevation direction to form a synthetic aperture or real aperture radar with sampling points located on the same plane;
performing distance error analysis based on the calculation of the stepping distance compensation factor;
based on the area division of the sampling points of the planar aperture radar, the distance process from any point of the observation area to the sampling points is obtained through iterative calculation of the stepping distance compensation factors, so that the three-dimensional reconstruction of the observation area is completed.
In some embodiments, a processor executing computer-executable instructions may be a processing device including more than one general-purpose processing device, such as a microprocessor, Central Processing Unit (CPU), Graphics Processing Unit (GPU), or the like. More specifically, the processor may be a Complex Instruction Set Computing (CISC) microprocessor, Reduced Instruction Set Computing (RISC) microprocessor, Very Long Instruction Word (VLIW) microprocessor, processor running other instruction sets, or processors running a combination of instruction sets. The processor may also be one or more special-purpose processing devices such as an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), a system on a chip (SoC), or the like.
In some embodiments, the computer-readable storage medium may be a memory, such as a read-only memory (ROM), a random-access memory (RAM), a phase-change random-access memory (PRAM), a static random-access memory (SRAM), a dynamic random-access memory (DRAM), an electrically erasable programmable read-only memory (EEPROM), other types of random-access memory (RAM), a flash disk or other form of flash memory, a cache, a register, a static memory, a compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD) or other optical storage, a tape cartridge or other magnetic storage device, or any other potentially non-transitory medium that may be used to store information or instructions that may be accessed by a computer device, and so forth.
In some embodiments, the computer-executable instructions may be implemented as a plurality of program modules that collectively implement the method of planar aperture split-zone non-linear progressive phase iterative imaging according to any one of the present disclosure.
The present disclosure describes various operations or functions that may be implemented as or defined as software code or instructions. Such operations may be implemented as software code or instruction modules stored on a memory which, when executed by a processor, implement the corresponding steps and methods.
Such content may be source code or differential code ("delta" or "patch" code) that may be executed directly ("object" or "executable" form). A software implementation of the embodiments described herein may be provided through an article of manufacture having code or instructions stored thereon, or through a method of operating a communication interface to transmit data through the communication interface. A machine or computer-readable storage medium may cause a machine to perform the functions or operations described, and includes any mechanism for storing information in a form accessible by a machine (e.g., a computing device, an electronic system, etc.), such as recordable/non-recordable media (e.g., read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). The communication interface includes any mechanism for interfacing with any of a hardwired, wireless, optical, or other medium to communicate with other devices, such as a memory bus interface, a processor bus interface, an internet connection, a disk controller, etc. The communication interface may be configured by providing configuration parameters and/or transmitting signals to prepare it to provide data signals describing the software content, and may be accessed by sending one or more commands or signals to it.
The computer-executable instructions of embodiments of the present disclosure may be organized into one or more computer-executable components or modules. Aspects of the disclosure may be implemented with any number and combination of such components or modules. For example, aspects of the disclosure are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other embodiments may include different computer-executable instructions or components having more or less functionality than illustrated and described herein.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more versions thereof) may be used in combination with each other. For example, other embodiments may be used by those of ordinary skill in the art upon reading the above description. In addition, in the foregoing detailed description, various features may be grouped together to streamline the disclosure. This should not be interpreted as an intention that a disclosed feature not claimed is essential to any claim. Rather, the subject matter of the present disclosure may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with each other in various combinations or permutations. The scope of the disclosure should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The above embodiments are merely exemplary embodiments of the present disclosure, which is not intended to limit the present disclosure, and the scope of the present disclosure is defined by the claims. Various modifications and equivalents of the disclosure may occur to those skilled in the art within the spirit and scope of the disclosure, and such modifications and equivalents are considered to be within the scope of the disclosure.
Claims (10)
1. A method for planar aperture regional nonlinear progressive phase iterative imaging, comprising the following steps:
moving the planar aperture three-dimensional imaging radar along the elevation direction to form a synthetic aperture or real aperture radar with sampling points located on the same plane;
performing distance error analysis based on the calculation of the stepping distance compensation factor;
based on the area division of the sampling points of the planar aperture radar, the distance process from any point of the observation area to the sampling points is obtained through iterative calculation of the stepping distance compensation factors, so that the three-dimensional reconstruction of the observation area is completed.
2. The method of claim 1, wherein performing the distance error analysis based on the stepping distance compensation factor calculation comprises:
obtaining an azimuth distance compensation factor;
obtaining an elevation distance compensation factor;
and acquiring a smaller azimuth stepping distance and a smaller elevation stepping distance according to the azimuth distance compensation factor and the elevation distance compensation factor, so as to reduce the distance error.
3. The method of claim 2, wherein
the obtaining of the azimuth distance compensation factor comprises:
setting the number of steps of a radar sampling point relative to the radar initial sampling point along the azimuth direction and the number of steps along the elevation direction, to obtain a Maclaurin series expansion for the radar sampling point;
obtaining a Maclaurin approximate distance by processing the Maclaurin series expansion;
and obtaining the azimuth distance compensation factor based on the Maclaurin approximate distance;
and the obtaining of the elevation distance compensation factor comprises:
setting the number of steps of the radar sampling point relative to the radar initial sampling point along the azimuth direction and the number of steps along the elevation direction, to obtain a Maclaurin series expansion for the radar sampling point;
obtaining a Maclaurin approximate distance by processing the Maclaurin series expansion;
and obtaining the elevation distance compensation factor based on the Maclaurin approximate distance.
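The Maclaurin step of claim 3 can be sketched numerically. In the sketch below, the geometry, variable names, and values are all illustrative assumptions (not the patent's notation): the exact distance from a scene point to the m-th azimuth sampling point is expanded to second order in the step count m, yielding the coefficients from which an azimuth distance compensation factor can be built.

```python
import math

# Hypothetical geometry (all names and values here are illustrative assumptions,
# not taken from the patent): a scene point at (x0, y0, z0) relative to the
# radar's initial sampling point, and an azimuth step pitch dx.
x0, y0, z0 = 0.8, 5.0, 0.3    # scene point coordinates (m)
dx = 0.002                    # azimuth step pitch (m)
R0 = math.sqrt(x0**2 + y0**2 + z0**2)   # distance to the initial sampling point

# Second-order Maclaurin expansion of R(m) = sqrt(R0^2 - 2*x0*dx*m + dx^2*m^2)
# about m = 0 gives R(m) ~ R0 + a1*m + a2*m^2, with coefficients:
a1 = -x0 * dx / R0                           # first-order coefficient
a2 = dx**2 * (1 - (x0 / R0)**2) / (2 * R0)   # second-order coefficient

def exact_distance(m):
    """Exact distance after m azimuth steps."""
    return math.sqrt((x0 - m * dx)**2 + y0**2 + z0**2)

def maclaurin_distance(m):
    """Second-order Maclaurin approximate distance after m azimuth steps."""
    return R0 + a1 * m + a2 * m**2

# Distance error of the approximation over a 200-step aperture
max_err = max(abs(exact_distance(m) - maclaurin_distance(m)) for m in range(200))
```

The elevation-direction coefficients follow identically, with (z0, dz, n) in place of (x0, dx, m).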
4. The method of claim 3, wherein performing the distance error analysis comprises:
combining the azimuth distance compensation factor and the elevation distance compensation factor and, starting from the distance from any point in the observation area to the radar initial sampling point, solving step by step the distance history from that point to each radar sampling point;
and solving step by step the distance error produced by the distance history from any point in the observation area to the radar sampling points.
5. The method of claim 4, wherein obtaining, based on area division of the sampling points of the planar aperture radar, the distance history from any point in the observation area to the sampling points through iterative calculation of the stepping distance compensation factors comprises:
compressing the echo signal at each radar sampling point along the distance direction;
dividing the imaging area into a plurality of pixel areas, each pixel area containing a plurality of pixels;
calculating the phase of each pixel point in the imaging area by a zoned stepping phase iteration method;
calculating the size of a sampling point area according to the stepping distance solving error, and dividing the planar synthetic aperture into a plurality of sampling point areas;
and calculating the value of each pixel point in the observation area point by point.
6. The method of claim 5, wherein dividing the imaging area into a plurality of pixel areas, each pixel area containing a plurality of pixels, comprises:
setting parameters, including the size of each pixel and the coordinates of a selected initial pixel point;
calculating the distance compensation factor coefficients;
and calculating the size and the number of the pixel areas.
7. The method of claim 6, wherein calculating the phase of each pixel point in the imaging area by the zoned stepping phase iteration method comprises:
calculating an initial phase and a phase compensation factor based on the number of pixel point areas, the number of pixels contained in each pixel point area, the radar center frequency, the distance compensation factor coefficients, and initial values;
and iteratively calculating the phase of each pixel point along the azimuth direction, including repeatedly performing distance direction iteration, elevation direction iteration, and pixel point area iteration.
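Because the Maclaurin approximate distance is quadratic in the step count, the per-pixel phase of claim 7 can be advanced with constant second differences. A minimal one-dimensional sketch along the azimuth (the frequency, coefficient values, and names below are illustrative assumptions, not the patent's parameters):

```python
import math

# Nonlinear progressive phase iteration along the azimuth (illustrative sketch;
# fc, R0, a1, a2 are assumed values, not the patent's parameters).
fc, c = 77e9, 3.0e8              # radar center frequency (Hz), speed of light (m/s)
R0, a1, a2 = 5.0, -8e-4, 4e-10   # Maclaurin coefficients: R(m) = R0 + a1*m + a2*m^2
k = 4 * math.pi * fc / c         # two-way phase constant (rad/m)

steps = 100
phi = k * R0                     # initial phase at the initial sampling point
d_phi = k * (a1 + a2)            # first-order phase compensation factor: phi(1) - phi(0)
dd_phi = 2 * k * a2              # constant second difference of the phase

iterated = []
for m in range(steps):
    iterated.append(phi)
    phi += d_phi                 # step the phase ...
    d_phi += dd_phi              # ... and step the compensation factor (nonlinear progression)

# Cross-check against direct evaluation of the quadratic phase
direct = [k * (R0 + a1 * m + a2 * m * m) for m in range(steps)]
phase_err = max(abs(p - q) for p, q in zip(iterated, direct))
```

Each step costs two additions instead of a square root and a multiplication, which illustrates the source of the efficiency gain over a global point-by-point superposition.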
8. The method of claim 7, wherein the planar aperture zoned nonlinear progressive phase iterative imaging comprises:
selecting a pixel point of the observation area and the radar initial sampling point;
calculating the distance history from the pixel point of the observation area to the radar initial sampling point;
calculating an azimuth distance compensation factor coefficient and an elevation distance compensation factor coefficient;
solving a Maclaurin approximation of the distance history;
solving the actual distance history;
and calculating the size of the sampling point area and the number of divided areas, the sampling point area being determined according to a phase error limiting condition.
9. The method of claim 8, wherein calculating the value of each pixel point of the observation area point by point comprises:
performing point-by-point reconstruction of each pixel point of the observation area based on an algorithm, to realize three-dimensional resolution imaging of the observation area.
10. A device for planar aperture zoned nonlinear progressive phase iterative imaging, wherein a planar aperture three-dimensional imaging radar is moved along the elevation direction to form a synthetic aperture or real aperture radar configuration whose sampling points are located in the same plane; the device comprises:
an error analysis module configured to perform distance error analysis based on the stepping distance compensation factor calculation;
and an imaging module configured to obtain, based on area division of the sampling points of the planar aperture radar, the distance history from any point in the observation area to the sampling points through iterative calculation of the stepping distance compensation factors, thereby completing the three-dimensional reconstruction of the observation area.
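As a rough end-to-end illustration of the point-by-point reconstruction recited in claims 9 and 10, the sketch below simulates a range-compressed echo from a single point scatterer and reconstructs pixel values by coherent summation with phase compensation over a small planar aperture. All geometry, the signal model, and every name here are illustrative assumptions, not the patent's implementation (the patent accelerates the phase computation with the zoned stepping iteration of the claims rather than per-point square roots):

```python
import cmath, math

# Point-by-point reconstruction sketch (all values and names are illustrative
# assumptions): pixel value = coherent sum of the range-compressed echo over
# the planar-aperture sampling points, each term compensated by the two-way
# propagation phase.
fc, c = 77e9, 3.0e8
k = 4 * math.pi * fc / c                 # two-way phase constant (rad/m)
dx = dz = 0.004                          # azimuth / elevation step pitch (m)
M = N = 32                               # sampling points per axis
target = (0.0, 4.0, 0.0)                 # simulated point scatterer (x, y, z), metres

def dist(m, n, p):
    """Distance from sampling point (m*dx, 0, n*dz) to scene point p."""
    return math.sqrt((p[0] - m * dx) ** 2 + p[1] ** 2 + (p[2] - n * dz) ** 2)

# Simulated range-compressed echo of a unit scatterer at `target`
echo = {(m, n): cmath.exp(-1j * k * dist(m, n, target))
        for m in range(M) for n in range(N)}

def pixel_value(p):
    """Phase-compensated coherent sum for pixel p, normalised by aperture size."""
    return sum(echo[m, n] * cmath.exp(1j * k * dist(m, n, p))
               for m in range(M) for n in range(N)) / (M * N)

on_target = abs(pixel_value(target))            # focuses to ~1 at the scatterer
off_target = abs(pixel_value((0.2, 4.0, 0.2)))  # strongly suppressed elsewhere
```

The contrast between `on_target` and `off_target` is what yields three-dimensional resolution: only pixels at the true scatterer position sum coherently.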
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011609241.8A CN112835039B (en) | 2020-12-30 | 2020-12-30 | Planar aperture zoned nonlinear progressive phase iterative imaging method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011609241.8A CN112835039B (en) | 2020-12-30 | 2020-12-30 | Planar aperture zoned nonlinear progressive phase iterative imaging method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112835039A true CN112835039A (en) | 2021-05-25 |
CN112835039B CN112835039B (en) | 2023-09-08 |
Family
ID=75925412
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011609241.8A Active CN112835039B (en) | 2020-12-30 | 2020-12-30 | Planar aperture zoned nonlinear progressive phase iterative imaging method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112835039B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114384516A (en) * | 2022-01-12 | 2022-04-22 | 电子科技大学 | Real-aperture radar real-time angle super-resolution method based on detection before reconstruction |
Patent Citations (25)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3950747A (en) * | 1975-02-28 | 1976-04-13 | International Telephone & Telegraph Corporation | Optical processing system for synthetic aperture radar |
US4924229A (en) * | 1989-09-14 | 1990-05-08 | The United States Of America As Represented By The United States Department Of Energy | Phase correction system for automatic focusing of synthetic aperture radar |
JP2001074832A (en) * | 1999-09-02 | 2001-03-23 | Mitsubishi Electric Corp | Radar device |
US20040090359A1 (en) * | 2001-03-16 | 2004-05-13 | Mcmakin Douglas L. | Detecting concealed objects at a checkpoint |
JP2004198275A (en) * | 2002-12-19 | 2004-07-15 | Mitsubishi Electric Corp | Synthetic aperture radar system, and image reproducing method |
EP1949132A2 (en) * | 2005-11-09 | 2008-07-30 | Qinetiq Limited | Passive detection apparatus |
CN101983034A (en) * | 2008-03-31 | 2011-03-02 | 皇家飞利浦电子股份有限公司 | Fast tomosynthesis scanner apparatus and ct-based method based on rotational step-and-shoot image acquisition without focal spot motion during continuous tube movement for use in cone-beam volume ct mammography imaging |
CN101900812A (en) * | 2009-05-25 | 2010-12-01 | 中国科学院电子学研究所 | Three-dimensional imaging method in widefield polar format for circular synthetic aperture radar |
JP2012189445A (en) * | 2011-03-10 | 2012-10-04 | Panasonic Corp | Object detection device and object detection method |
CN104237885A (en) * | 2014-09-15 | 2014-12-24 | 西安电子科技大学 | Synthetic aperture radar image orientation secondary focusing method |
EP3144702A1 (en) * | 2015-09-17 | 2017-03-22 | Institute of Electronics, Chinese Academy of Sciences | Method and device for synthethic aperture radar imaging based on non-linear frequency modulation signal |
CN105487065A (en) * | 2016-01-08 | 2016-04-13 | 香港理工大学深圳研究院 | Time sequence satellite borne radar data processing method and device |
CN106054183A (en) * | 2016-04-29 | 2016-10-26 | 深圳市太赫兹科技创新研究院有限公司 | Three-dimensional image reconstruction method and device based on synthetic aperture radar imaging |
WO2017198162A1 (en) * | 2016-04-29 | 2017-11-23 | 深圳市太赫兹科技创新研究院有限公司 | Three-dimensional image rebuilding method and device based on synthetic aperture radar imaging |
US20190243022A1 (en) * | 2016-10-31 | 2019-08-08 | China Communication Technology Co., Ltd. | Close Range Microwave Imaging Method and System |
CN106772369A (en) * | 2016-12-16 | 2017-05-31 | 北京华航无线电测量研究所 | A kind of three-D imaging method based on various visual angles imaging |
CN107238866A (en) * | 2017-05-26 | 2017-10-10 | 西安电子科技大学 | Millimeter wave video imaging system and method based on synthetic aperture technique |
US20200049776A1 (en) * | 2018-08-07 | 2020-02-13 | Alexander Wood | Quantum spin magnetometer |
CN109597076A (en) * | 2018-12-29 | 2019-04-09 | 内蒙古工业大学 | Data processing method and device for ground synthetic aperture radar |
US20200319331A1 (en) * | 2019-04-04 | 2020-10-08 | Battelle Memorial Institute | Imaging Systems and Related Methods Including Radar Imaging with Moving Arrays or Moving Targets |
CN110542900A (en) * | 2019-10-12 | 2019-12-06 | 南京航空航天大学 | SAR imaging method and system |
CN110793543A (en) * | 2019-10-21 | 2020-02-14 | 国网电力科学研究院有限公司 | Positioning and navigation precision measuring device and method of power inspection robot based on laser scanning |
CN110837128A (en) * | 2019-11-26 | 2020-02-25 | 内蒙古工业大学 | Imaging method of cylindrical array radar |
CN110837127A (en) * | 2019-11-26 | 2020-02-25 | 内蒙古工业大学 | Sparse antenna layout method based on cylindrical radar imaging device |
CN112034460A (en) * | 2020-08-17 | 2020-12-04 | 宋千 | Circular-arc aperture radar imaging method based on antenna phase directional diagram compensation and radar |
Non-Patent Citations (11)
Also Published As
Publication number | Publication date |
---|---|
CN112835039B (en) | 2023-09-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11544893B2 (en) | Systems and methods for data deletion | |
CN109300190B (en) | Three-dimensional data processing method, device, equipment and storage medium | |
US9990753B1 (en) | Image stitching | |
US10140324B2 (en) | Processing spatiotemporal data records | |
CN111210465B (en) | Image registration method, image registration device, computer equipment and readable storage medium | |
CN107610054A (en) | A kind of preprocess method of remote sensing image data | |
US20170083823A1 (en) | Spectral Optimal Gridding: An Improved Multivariate Regression Analyses and Sampling Error Estimation | |
CN103018741A (en) | Interferometric synthetic aperture radar (InSAR) imaging and flat ground removing integral method based on back projection | |
CN103839238A (en) | SAR image super-resolution method based on marginal information and deconvolution | |
CN112835039A (en) | Method and device for planar aperture regional nonlinear progressive phase iterative imaging | |
CN110927065B (en) | Remote sensing assisted lake and reservoir chl-a concentration spatial interpolation method optimization method and device | |
CN103076608B (en) | Contour-enhanced beaming-type synthetic aperture radar imaging method | |
CN102542547B (en) | Hyperspectral image fusion method based on spectrum restrain | |
CN112799064A (en) | Method and device for cylindrical surface aperture nonlinear progressive phase iterative imaging | |
KR101058773B1 (en) | Image Processing Method and System of Synthetic Open Surface Radar | |
CN112329920A (en) | Unsupervised training method and unsupervised training device for magnetic resonance parameter imaging model | |
CN115222776B (en) | Matching auxiliary visual target tracking method and device, electronic equipment and storage medium | |
CN111505738A (en) | Method and equipment for predicting meteorological factors in numerical weather forecast | |
CN112835040B (en) | Spherical aperture zoned progressive phase iterative imaging method and device | |
Teßmann et al. | GPU accelerated normalized mutual information and B-Spline transformation. | |
NO20200616A1 (en) | Gridding global data into a minimally distorted global raster | |
CN115880249B (en) | Image-based object segmentation method, device, equipment and medium | |
CN116128727B (en) | Super-resolution method, system, equipment and medium for polarized radar image | |
CN111080594B (en) | Slice marker determination method, computer device, and readable storage medium | |
Mugnai et al. | High-resolution monitoring of landslides with UAS photogrammetry and digital image correlation |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||