CN112835040B - Spherical aperture zoned progressive phase iterative imaging method and device - Google Patents
- Publication number: CN112835040B (granted from application CN202011609245.6A)
- Authority: CN (China)
- Prior art keywords: distance, point, radar, pixel, sampling point
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/89—Radar or analogous systems specially adapted for specific applications for mapping or imaging
- G01S13/90—Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
Abstract
The disclosure relates to a method and a device for spherical aperture zoned progressive phase iterative imaging, wherein the method comprises the following steps: forming, through azimuth-pitch rotation, a synthetic aperture or real aperture radar whose apertures lie on the same spherical surface; calculating based on a stepping distance compensation factor and analyzing the distance error; and, based on region division of the sampling points of the spherical aperture radar, obtaining the distance history from any point of the observation region to the sampling points through iterative calculation with the stepping distance compensation factor, so as to complete the three-dimensional reconstruction of the observation region. The device comprises an error analysis module and an imaging module. According to the embodiments of the invention, the three-dimensional imaging efficiency can be greatly improved by constructing a regional nonlinear progressive Doppler compensation factor to replace the global point-by-point superposition imaging mode.
Description
Technical Field
The disclosure relates to the field of radar three-dimensional imaging, in particular to a method and a device for spherical aperture zoned progressive phase iterative imaging.
Background
Compared with other traditional micro-deformation monitoring means such as GPS, the micro-deformation monitoring radar has the advantages of all-weather operation, large coverage, high precision and the like, and is widely applied to micro-deformation detection of high and steep slopes, bridges, buildings and the like. Three-dimensional imaging with a micro-deformation monitoring radar can realize three-dimensional resolution imaging and three-dimensional deformation information extraction for an observation area, can effectively suppress layover, top-bottom inversion and similar phenomena caused by the observation geometry, and has wide application prospects in monitoring slope landslides and man-made structures.
Because the observation area is large, existing imaging algorithms struggle to reconstruct a large scene area in three dimensions. The three-dimensional back-projection algorithm can reconstruct the region of interest point by point and has certain advantages, but because the distance history is calculated point by point, the computational load is high, the imaging efficiency suffers, and the real-time monitoring requirement is difficult to meet. In summary, fast and accurate reconstruction of a large field-of-view region has not been possible.
Disclosure of Invention
The disclosure aims to provide a method and a device for spherical aperture regional progressive phase iterative imaging, which are used for greatly improving three-dimensional imaging efficiency by constructing a regional nonlinear progressive Doppler compensation factor to replace a global point-by-point superposition imaging mode.
According to one aspect of the present disclosure, there is provided a method of spherical aperture zoned progressive phase iterative imaging, comprising:
forming a synthetic aperture or a real aperture radar with apertures positioned on the same spherical surface through azimuth-pitching rotation;
calculating based on a stepping distance compensation factor, and analyzing a distance error;
based on region division of the sampling points of the spherical aperture radar, obtaining the distance history from any point of the observation region to the sampling points through iterative calculation with the stepping distance compensation factor, so as to complete the three-dimensional reconstruction of the observation region.
In some embodiments, the calculating based on a stepping distance compensation factor and analyzing a distance error comprises:
obtaining an azimuth distance compensation factor;
obtaining a pitching distance compensation factor;
and selecting smaller azimuth and pitch step angles according to the azimuth and pitch distance compensation factors so as to reduce the distance error.
In some embodiments,
the obtaining an azimuth distance compensation factor comprises the following steps:
setting the number of steps of a radar sampling point along the azimuth direction and along the pitch direction relative to the radar start sampling point, and obtaining a Maclaurin expression of the radar sampling point;
obtaining a Maclaurin approximate distance based on processing of the Maclaurin expression;
obtaining an azimuth distance compensation factor based on the Maclaurin approximate distance;
the obtaining a pitch distance compensation factor comprises the following steps:
setting the number of steps of a radar sampling point along the azimuth direction and along the pitch direction relative to the radar start sampling point, and obtaining a Maclaurin expression of the radar sampling point;
obtaining a Maclaurin approximate distance based on processing of the Maclaurin expression;
and obtaining a pitch distance compensation factor based on the Maclaurin approximate distance.
In some embodiments, performing the distance error analysis comprises:
combining the azimuth distance compensation factor and the pitch distance compensation factor, and solving step-wise the distance history from any point of the observation area to the radar sampling point based on the distance from that point to the radar start sampling point;
and solving the distance error introduced by the step-wise solution of the distance history from any point of the observation area to the radar sampling point.
In some embodiments, the spherical aperture zoned nonlinear progressive phase iterative imaging method comprises:
compressing echo signals at all sampling points of the radar along the distance direction;
dividing an imaging area into a plurality of pixel areas, wherein each pixel area is provided with a plurality of pixels;
calculating the phase of each pixel point in an imaging area by adopting a zonal stepping phase iteration method;
calculating the sampling point region size according to the error of the step-wise distance solution, and dividing the spherical synthetic aperture into a plurality of sampling point regions;
the value of each pixel point of the observation area is calculated point by point.
In some embodiments, the dividing the imaging region into a plurality of pixel regions, each pixel region having a plurality of pixels, includes:
According to the obtained radar resolutions, the observation area is divided into a plurality of pixels, so that the size of each pixel along the distance (range), azimuth and pitch directions is respectively smaller than the radar distance, azimuth and pitch resolution.
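As an illustration of this division rule, a minimal sketch (the helper name and all extent and resolution values are hypothetical) that chooses per-axis pixel counts so that each pixel stays below the corresponding radar resolution cell:

```python
import math

def pixel_counts(extent, resolution, oversample=1.2):
    """Number of pixels per axis so that each pixel < resolution / oversample.

    extent, resolution: (distance, azimuth, pitch) sizes in metres.
    oversample > 1 guarantees strict inequality against the resolution.
    """
    return tuple(math.ceil(e * oversample / r) for e, r in zip(extent, resolution))

# Illustrative 100 m x 80 m x 60 m region with 0.5 / 1.0 / 1.0 m resolutions.
counts = pixel_counts((100.0, 80.0, 60.0), (0.5, 1.0, 1.0))
print(counts)
sizes = [e / n for e, n in zip((100.0, 80.0, 60.0), counts)]
# Every resulting pixel is smaller than the radar resolution along its axis.
assert all(s < r for s, r in zip(sizes, (0.5, 1.0, 1.0)))
```

The oversampling margin is a design choice; the patent only requires the pixel to be smaller than the resolution cell.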
In some embodiments, the calculating the phase of each pixel point of the imaging area by using the area stepping phase iterative method includes:
calculating initial pixel point phase and phase compensation factors based on the near distance of an observation area, the distance direction size of a pixel, the radar center frequency, the stepping initial value and the number of distance direction pixels;
iteratively calculating the phase of the pixel points along the distance direction until the stepping initial value is larger than the number of the pixels along the distance direction;
outputting a pixel point phase matrix along the distance direction;
based on the expansion of the pixel point phase matrix along the distance direction, the phase of the pixel points located at the same radial distance is calculated iteratively.
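The iteration above exploits the fact that, for pixels spaced uniformly by δr along the distance direction, the two-way phase 4π·f_c·r/c grows by a constant increment per pixel. A minimal sketch of that idea (not the patented iteration itself; all parameter values and names are illustrative), replacing one complex exponential per pixel with one complex multiply:

```python
import cmath
import math

C = 3.0e8        # propagation speed (m/s)
FC = 1.7e10      # radar centre frequency (Hz), illustrative
R_NEAR = 500.0   # near range of the observation area (m), illustrative
DR = 0.25        # distance-direction size of one pixel (m), illustrative
N_R = 8          # number of distance-direction pixels (small for display)

# Constant per-pixel phase increment: the phase compensation factor.
d_phase = 4.0 * math.pi * FC * DR / C
step = cmath.exp(1j * d_phase)

# Iterative computation of exp(j*4*pi*fc*r_k/c) for r_k = R_NEAR + k*DR.
ph = cmath.exp(1j * 4.0 * math.pi * FC * R_NEAR / C)
iterated = []
for k in range(N_R):
    iterated.append(ph)
    ph *= step                    # one multiply replaces one exp per pixel

# Check against direct evaluation at every pixel.
direct = [cmath.exp(1j * 4.0 * math.pi * FC * (R_NEAR + k * DR) / C)
          for k in range(N_R)]
err = max(abs(a - b) for a, b in zip(iterated, direct))
print(f"max deviation from direct evaluation: {err:.1e}")
```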
In some embodiments, the calculating the sampling point area size according to the step-by-step distance solving error and dividing the spherical synthetic aperture into a plurality of sampling point areas includes:
selecting an observation area pixel point and a radar initial sampling point;
calculating the distance course from the pixel point of the observation area to the radar initial sampling point;
Calculating an azimuth distance compensation factor coefficient and a pitching distance compensation factor coefficient;
solving the Maclaurin approximation of the distance history;
solving the actual distance history;
and calculating the sampling point region size and the number of divided regions according to the phase error limiting condition.
In some embodiments, wherein calculating the value of each pixel of the observation region point by point comprises:
and reconstructing each pixel point of the observation area point by point, so as to realize three-dimensional resolution imaging of the observation area.
According to another aspect of the present disclosure, there is provided a device for spherical aperture zoned progressive phase iterative imaging, used with a synthetic aperture or real aperture radar whose apertures lie on the same sphere, formed through azimuth-pitch rotation; the device comprises:
an error analysis module configured to perform a distance error analysis based on the step-wise distance compensation factor calculation;
the imaging module is configured to, based on region division of the sampling points of the spherical aperture radar, iteratively calculate the distance history from any point of the observation area to the sampling points through a stepping distance compensation factor, so as to complete the three-dimensional reconstruction of the observation area.
According to one aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement:
the method of spherical aperture zoned progressive phase iterative imaging as described above.
The method and the device for spherical aperture zoned progressive phase iterative imaging of the various embodiments of the present disclosure form, at least through azimuth-pitch rotation, a synthetic aperture or real aperture radar whose apertures lie on the same sphere; calculate based on a stepping distance compensation factor and analyze the distance error; and, based on region division of the sampling points of the spherical aperture radar, obtain the distance history from any point of the observation region to the sampling points through iterative calculation with the stepping distance compensation factor, so as to complete the three-dimensional reconstruction of the observation region. Region division of the sampling points of the spherical synthetic aperture radar is thereby realized, and within each region the distance from an observation-region pixel point to a sampling point is calculated by stepping iteration, which reduces the square-root and exponential operations in the distance calculation and improves the algorithm efficiency. The specific steps for calculating the distance compensation factors are derived; meanwhile, the distance error caused by solving the distance history through stepping iteration is analyzed, and the maximum extent of the sampling point region division is calculated according to the phase error condition. In view of the specific properties of the imaging area, a method for iteratively solving the phases of the observation-region pixel points is provided, which reduces the square-root and exponential operations in the pixel phase solution and further improves the algorithm efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure, as claimed.
Drawings
In the drawings, which are not necessarily to scale, like reference numerals in different views may designate like components. Like reference numerals with letter suffixes or like reference numerals with different letter suffixes may represent different instances of similar components. The accompanying drawings generally illustrate various embodiments by way of example, and not by way of limitation, and are used in conjunction with the description and claims to explain the disclosed embodiments.
FIG. 1 is a schematic diagram of a spherical aperture micro-variation monitoring radar according to an embodiment of the disclosure;
FIG. 2 is a schematic diagram of a step-wise distance solution according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of solving an azimuth distance compensation factor according to an embodiment of the disclosure;
FIG. 4 is a schematic diagram of a pitch distance compensation factor solution according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of an error analysis of an embodiment of the present disclosure;
FIG. 6 is a schematic view of a pixel division of an observation area according to an embodiment of the disclosure;
FIG. 7 is a diagram of the phase matrix PHI_3D of the pixel points in the observation area according to an embodiment of the present disclosure;
Fig. 8 is a schematic view of three-dimensional reconstruction of a pixel point of an observation area according to an embodiment of the disclosure;
FIG. 9 is a step-wise distance calculation flow chart of an embodiment of the present disclosure;
FIG. 10 is a block diagram illustrating a pixel division of an observation area according to an embodiment of the present disclosure;
FIG. 11 is a sample point region division flow chart according to an embodiment of the present disclosure;
FIG. 12 is a flow chart of three-dimensional reconstruction of an observation region according to an embodiment of the present disclosure;
fig. 13 is a flow chart of an imaging method according to an embodiment of the disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more apparent, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings of the embodiments of the present disclosure. It will be apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments, which can be made by one of ordinary skill in the art without the need for inventive faculty, are within the scope of the present disclosure, based on the described embodiments of the present disclosure.
Unless defined otherwise, technical or scientific terms used in this disclosure should be given the ordinary meaning as understood by one of ordinary skill in the art to which this disclosure belongs. The word "comprising" or "comprises", and the like, means that elements or items preceding the word are included in the element or item listed after the word and equivalents thereof, but does not exclude other elements or items.
In order to keep the following description of the embodiments of the present disclosure clear and concise, the present disclosure omits detailed description of known functions and known components.
As shown in fig. 1, the spherical aperture three-dimensional imaging micro-deformation monitoring radar is a synthetic aperture or real aperture radar whose apertures lie on the same sphere, formed by, but not limited to, azimuth-elevation rotation of a turntable. The radar can realize three-dimensional resolution imaging of the observation area. In fig. 1, the radar sampling points P_Radar(ρ, θ, φ) are distributed in the hemisphere on the right side of the plane XOZ: ρ represents the radial distance from the sampling point P_Radar to the origin O; θ represents the angle between the projection onto the plane XOY of the line connecting P_Radar and O and the positive X-axis direction; φ represents the angle between the line connecting P_Radar and O and the positive Z-axis direction; θ_SynA is the azimuth synthetic aperture angle and Φ_SynE is the pitch synthetic aperture angle; Δθ represents the radar azimuth sampling interval and Δφ the radar pitch sampling interval; N_θ represents the number of azimuth sampling points and N_φ the number of pitch sampling points. It should be noted that only the sampling point locations at θ = π/2 and φ = π/2 are not shown in the figure.
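For reference, the spherical coordinates (ρ, θ, φ) of fig. 1 relate to Cartesian coordinates as sketched below (a standard conversion with φ measured from the positive Z axis; the helper name is hypothetical):

```python
import math

def sph_to_cart(rho, theta, phi):
    """Convert the spherical coordinates used in fig. 1 to Cartesian.

    rho   : radial distance to the origin O
    theta : angle between the XOY projection and the positive X axis
    phi   : angle between the radius vector and the positive Z axis
    """
    x = rho * math.sin(phi) * math.cos(theta)
    y = rho * math.sin(phi) * math.sin(theta)
    z = rho * math.cos(phi)
    return (x, y, z)

# A sampling point with theta = 0, phi = pi/2 lies on the positive X axis.
print(sph_to_cart(1.0, 0.0, math.pi / 2))  # approximately (1, 0, 0)
```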
As one aspect, embodiments of the present disclosure provide a method of spherical aperture zoned progressive phase iterative imaging, comprising:
Forming a synthetic aperture or a real aperture radar with apertures positioned on the same spherical surface through azimuth-pitching rotation;
calculating based on a stepping distance compensation factor, and analyzing a distance error;
based on the regional division of sampling points of the spherical aperture radar, the distance process from any point of an observation region to the sampling points is obtained through iterative calculation of a stepping distance compensation factor, so that the three-dimensional reconstruction of the observation region is completed.
One of the inventive concepts of the present disclosure is to realize region division of the sampling points of the spherical synthetic aperture radar; within each region, the distance from an observation-region pixel point to a sampling point is calculated by stepping iteration, which reduces the square-root and exponential operations in the distance calculation and improves the algorithm efficiency.
The present disclosure is directed to two main aspects, a step-wise distance compensation factor calculation and a distance error analysis, and a spherical aperture region nonlinear progressive phase iterative imaging method performed on the basis of the above. The subject of execution of the present disclosure is not limited as long as a device, apparatus, or specific imaging system that can obtain spherical aperture zoned progressive phase iterative imaging is satisfied.
In some embodiments, the calculation based on a stepping distance compensation factor and the distance error analysis of the present disclosure include:
Obtaining an azimuth distance compensation factor;
obtaining a pitching distance compensation factor;
and selecting smaller azimuth and pitch step angles according to the azimuth and pitch distance compensation factors so as to reduce the distance error.
Specifically, the distance compensation factors ΔTh and ΔPh of the azimuth and pitch directions are first calculated; then the error E_r of the step-wise distance calculation is analyzed, and a calculation method for the maximum error of a sampling point region is given.
As shown in fig. 2, P_n(R_n, θ_n, φ_n) is any point of the observation area and P_Radar(ρ, θ, φ) is a radar sampling point. R_n is the radial distance from the point P_n to the coordinate origin O; θ_n is the angle between the projection onto the plane XOY of the line connecting P_n and O and the positive X-axis direction; φ_n is the angle between the line connecting P_n and O and the positive Z-axis direction. ρ is the radial distance from the radar sampling point P_Radar to the origin O; θ is the angle between the projection onto the plane XOY of the line connecting P_Radar and O and the positive X-axis direction; φ is the angle between the line connecting P_Radar and O and the positive Z-axis direction.
The radar starts from θ = θ_0, φ = φ_0, i.e. from the start sampling point P_Radar-Start(ρ, θ_0, φ_0), and samples at fixed intervals along the azimuth and pitch directions, the azimuth and pitch sampling intervals being Δθ and Δφ respectively. In the radar sampling point coordinates P_Radar(ρ, θ, φ), θ and φ can be expressed as:
θ = θ_0 + n_θ·Δθ (1)
φ = φ_0 + n_φ·Δφ (2)
wherein n_θ represents the number of steps of the sampling point P_Radar relative to the radar start sampling point P_Radar-Start along the azimuth direction, n_θ = 0, 1, …, (N_θ − 1); n_φ represents the number of steps of P_Radar relative to P_Radar-Start along the pitch direction, n_φ = 0, 1, …, (N_φ − 1); N_θ represents the number of radar azimuth sampling points and N_φ the number of radar pitch sampling points. Any radar sampling point is thus denoted P_Radar(ρ, θ_0 + n_θ·Δθ, φ_0 + n_φ·Δφ).
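The indexing of equations (1) and (2) can be sketched as follows (a hypothetical helper; parameter values are illustrative), enumerating the angle pair of every sampling point in the aperture:

```python
def sampling_angles(theta0, phi0, d_theta, d_phi, n_az, n_el):
    """Angles (theta, phi) of every sampling point P_Radar, eqs. (1)-(2)."""
    return [(theta0 + i * d_theta, phi0 + j * d_phi)
            for j in range(n_el)     # n_phi = 0 .. N_phi - 1 (pitch)
            for i in range(n_az)]    # n_theta = 0 .. N_theta - 1 (azimuth)

grid = sampling_angles(theta0=0.0, phi0=1.0, d_theta=0.01, d_phi=0.02,
                       n_az=3, n_el=2)
print(len(grid))           # 3 azimuth x 2 pitch = 6 sampling points
print(grid[0], grid[-1])   # start point and fully stepped point
```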
The distance history from any point of the observation area to a radar sampling point is solved in a stepping manner: the distance compensation factors ΔTh and ΔPh of the azimuth and pitch directions are calculated respectively, and the distance history is then solved step by step through these distance compensation factors.
In some embodiments, the obtaining an azimuth distance compensation factor of the present disclosure comprises:
setting the number of steps of a radar sampling point along the azimuth direction and along the pitch direction relative to the radar start sampling point, and obtaining a Maclaurin expression of the radar sampling point;
obtaining a Maclaurin approximate distance based on processing of the Maclaurin expression;
obtaining an azimuth distance compensation factor based on the Maclaurin approximate distance;
the obtaining a pitch distance compensation factor comprises the following steps:
setting the number of steps of a radar sampling point along the azimuth direction and along the pitch direction relative to the radar start sampling point, and obtaining a Maclaurin expression of the radar sampling point;
obtaining a Maclaurin approximate distance based on processing of the Maclaurin expression;
and obtaining a pitch distance compensation factor based on the Maclaurin approximate distance.
Regarding the azimuth distance compensation factor ΔTh:
As shown in fig. 3, the distance from any point P_n(R_n, θ_n, φ_n) of the observation area to the radar start sampling point P_Radar-Start(ρ, θ_0, φ_0) is:
R(P_n, P_Radar-Start) = sqrt(R_n^2 + ρ^2 − 2·R_n·ρ·(sin φ_n·sin φ_0·cos(θ_n − θ_0) + cos φ_n·cos φ_0)) (3)
When calculating the azimuth distance compensation factor, the number of steps of the radar sampling point relative to the start sampling point P_Radar-Start is n_θ along the azimuth direction and 0 along the pitch direction; the sampling point coordinates are expressed as P_Radar(ρ, θ_0 + n_θ·Δθ, φ_0), where n_θ = 0, 1, …, (N_θ − 1). The distance from any point P_n of the observation area to the point P_Radar(ρ, θ_0 + n_θ·Δθ, φ_0) is:
R(P_n, n_θ) = sqrt(R_n^2 + ρ^2 − 2·R_n·ρ·(sin φ_n·sin φ_0·cos(θ_n − θ_0 − n_θ·Δθ) + cos φ_n·cos φ_0)) (4)
wherein R_n is the radial distance from the point P_n to the coordinate origin O; θ_n is the angle between the projection onto the plane XOY of the line connecting P_n and O and the positive X-axis direction; φ_n is the angle between the line connecting P_n and O and the positive Z-axis direction; ρ is the radial distance from the radar start sampling point P_Radar-Start to the origin O; θ_0 is the angle between the projection onto the plane XOY of the line connecting P_Radar-Start and O and the positive X-axis direction; φ_0 is the angle between the line connecting P_Radar-Start and O and the positive Z-axis direction; n_θ is the number of steps of the radar sampling point relative to the start sampling point P_Radar-Start along the azimuth direction; Δθ is the radar sampling interval along the azimuth direction.
The Maclaurin formulas for cos(n_θ·Δθ) and sin(n_θ·Δθ) in formula (4) are expressed as:
cos(n_θ·Δθ) = 1 − (n_θ·Δθ)^2/2 + o((n_θ·Δθ)^2) (5)
sin(n_θ·Δθ) = n_θ·Δθ + o((n_θ·Δθ)^2) (6)
wherein o is called the Peano remainder, n_θ·Δθ ∈ [0, 1].
The Peano remainder o in formulas (5) and (6) is ignored, and the distance calculation error caused by this omission is analyzed below.
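The size of the omitted remainder can be checked numerically: truncating cos x after the quadratic term and sin x after the linear term leaves errors of roughly x^4/24 and x^3/6, so they shrink much faster than the step angle itself (a quick sketch; the step-angle values are illustrative):

```python
import math

def cos_err(x):
    """Error of the Maclaurin truncation cos(x) ~ 1 - x**2/2, formula (5)."""
    return abs(math.cos(x) - (1.0 - x * x / 2.0))

def sin_err(x):
    """Error of the Maclaurin truncation sin(x) ~ x, formula (6)."""
    return abs(math.sin(x) - x)

for x in (0.2, 0.1, 0.05):
    print(f"x={x}: cos err {cos_err(x):.2e}, sin err {sin_err(x):.2e}")

# Halving the step angle shrinks the truncation errors superquadratically.
assert cos_err(0.05) < cos_err(0.1) / 4 < cos_err(0.2) / 16
```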
Ignoring the Peano remainder o in formulas (5) and (6), and substituting formulas (3), (5) and (6) into formula (4), the Maclaurin approximate distance from any point P_n of the observation area to the radar sampling point P_Radar(ρ, θ_0 + n_θ·Δθ, φ_0) is obtained as:
R_Ma(P_n, n_θ) = R(P_n, P_Radar-Start) + A_θ·(n_θ·Δθ)^2 + B_θ·(n_θ·Δθ) (7)
wherein the coefficients A_θ and B_θ are given by formulas (8) and (9) and are determined by R_n, ρ, θ_n, θ_0, φ_n and φ_0.
Taking the derivative of formula (7) with respect to n_θ, the azimuth distance compensation factor is obtained as:
ΔTh(n_θ) = 2·A_θ·Δθ^2·n_θ + B_θ·Δθ (10)
wherein n_θ is the number of steps of the radar sampling point relative to the start sampling point P_Radar-Start along the azimuth direction; Δθ is the radar sampling interval along the azimuth direction; R_Ma(P_n, n_θ) is the Maclaurin approximate distance from any point P_n of the observation area to the radar sampling point; R(P_n, P_Radar-Start) is the distance from P_n to the radar start sampling point; A_θ and B_θ are the azimuth distance compensation factor coefficients.
Regarding the pitch distance compensation factor ΔPh:
As shown in fig. 4, the distance from any point P_n(R_n, θ_n, φ_n) of the observation area to the radar start sampling point P_Radar-Start(ρ, θ_0, φ_0) is:
R(P_n, P_Radar-Start) = sqrt(R_n^2 + ρ^2 − 2·R_n·ρ·(sin φ_n·sin φ_0·cos(θ_n − θ_0) + cos φ_n·cos φ_0)) (11)
When calculating the pitch distance compensation factor, the number of steps of the radar sampling point relative to the start sampling point P_Radar-Start is 0 along the azimuth direction and n_φ along the pitch direction; the sampling point coordinates are expressed as P_Radar(ρ, θ_0, φ_0 + n_φ·Δφ), where n_φ = 0, 1, …, (N_φ − 1). The distance from any point P_n of the observation area to the point P_Radar(ρ, θ_0, φ_0 + n_φ·Δφ) is:
R(P_n, n_φ) = sqrt(R_n^2 + ρ^2 − 2·R_n·ρ·(sin φ_n·sin(φ_0 + n_φ·Δφ)·cos(θ_n − θ_0) + cos φ_n·cos(φ_0 + n_φ·Δφ))) (12)
wherein R_n, θ_n, φ_n, ρ, θ_0 and φ_0 are defined as above; n_φ is the number of steps of the radar sampling point relative to the start sampling point P_Radar-Start along the pitch direction; Δφ is the radar sampling interval along the pitch direction.
The Maclaurin formulas for cos(n_φ·Δφ) and sin(n_φ·Δφ) in formula (12) are expressed as:
cos(n_φ·Δφ) = 1 − (n_φ·Δφ)^2/2 + o((n_φ·Δφ)^2) (13)
sin(n_φ·Δφ) = n_φ·Δφ + o((n_φ·Δφ)^2) (14)
wherein o is called the Peano remainder, n_φ·Δφ ∈ [0, 1].
The Peano remainder o in formulas (13) and (14) is ignored, and the distance calculation error caused by this omission is analyzed below.
Ignoring the Peano remainder o in formulas (13) and (14), and substituting formulas (11), (13) and (14) into formula (12), the Maclaurin approximate distance from any point P_n of the observation area to the point P_Radar(ρ, θ_0, φ_0 + n_φ·Δφ) is obtained as:
R_Ma(P_n, n_φ) = R(P_n, P_Radar-Start) + A_φ·(n_φ·Δφ)^2 + B_φ·(n_φ·Δφ) (15)
wherein the coefficients A_φ and B_φ are given by formula (16) and are determined by R_n, ρ, θ_n, θ_0, φ_n and φ_0.
Taking the derivative of formula (15) with respect to n_φ, the pitch distance compensation factor is obtained as:
ΔPh(n_φ) = 2·A_φ·Δφ^2·n_φ + B_φ·Δφ (17)
wherein n_φ is the number of steps of the radar sampling point relative to the start sampling point P_Radar-Start along the pitch direction; Δφ is the radar sampling interval along the pitch direction; R_Ma(P_n, n_φ) is the Maclaurin approximate distance from any point P_n of the observation area to the point P_Radar(ρ, θ_0, φ_0 + n_φ·Δφ); R(P_n, P_Radar-Start) is the distance from P_n to the radar start sampling point; A_φ and B_φ are the pitch distance compensation factor coefficients.
A flow chart of the algorithm for step-wise calculation of the distance history using the azimuth and pitch distance compensation factors is shown in fig. 9. The inputs are: any point P_n(R_n, θ_n, φ_n) of the observation area, the radar start sampling point P_Radar-Start(ρ, θ_0, φ_0), the radar azimuth and pitch sampling intervals Δθ and Δφ, and the numbers of azimuth and pitch sampling points within the region, N_SubAzi and N_SubEle.
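The stepping loop of fig. 9 can be sketched as below. This is a simplified illustration, not the patented flow: the coefficients A_θ, B_θ, A_φ, B_φ are taken as given inputs and held fixed across the region, and all variable names and numeric values are hypothetical. Each distance is obtained by accumulating the increments ΔTh of formula (10) and ΔPh of formula (17) instead of evaluating a square root per sampling point:

```python
def stepwise_distances(r_start, a_th, b_th, a_ph, b_ph, d_th, d_ph,
                       n_sub_azi, n_sub_ele):
    """Distance history for every sampling point of one region.

    r_start    : distance from P_n to the region's start sampling point
    a_*, b_*   : compensation-factor coefficients (given, held fixed here)
    d_th, d_ph : azimuth / pitch sampling intervals
    Returns rows indexed by n_phi, columns by n_theta.
    """
    dist = []
    r_row0 = r_start
    for n_ph in range(n_sub_ele):
        row, r = [], r_row0
        for n_th in range(n_sub_azi):
            row.append(r)
            r += 2 * a_th * d_th**2 * n_th + b_th * d_th   # formula (10)
        dist.append(row)
        r_row0 += 2 * a_ph * d_ph**2 * n_ph + b_ph * d_ph  # formula (17)
    return dist

d = stepwise_distances(500.0, 0.1, -2.0, 0.2, 1.5, 0.001, 0.001, 4, 3)
print(d[0][0], len(d), len(d[0]))  # start-point distance, 3 x 4 region
```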
In some implementations, the performing distance error analysis of the present disclosure includes:
combining the azimuth distance compensation factor and the pitch distance compensation factor, and solving step-wise the distance history from any point of the observation area to the radar sampling point based on the distance from that point to the radar start sampling point;
and solving the distance error introduced by the step-wise solution of the distance history from any point of the observation area to the radar sampling point.
Specifically, the distance compensation factors ΔTh(n_θ) and ΔPh(n_φ) are derived from formulas (7) and (15), respectively, by ignoring the Peano remainder o(x), i.e., by retaining only a finite number of terms; therefore, the smaller the azimuth step angle n_θ·Δθ and the pitch step angle n_φ·Δφ, the smaller the calculated distance error.
The following analyzes the distance error of the step-wise distance calculation after the Peano remainder o(x) shown in formulas (5), (6), (13) and (14) is ignored.
As shown in figure 5 of the drawings,for any point of the observation area +. >For the radar start sampling point, < >>For the radar to step n along azimuth and pitch respectively θ 、/>Sub-sampling points.
Any point of the observation areaTo the radar start sampling point->Is expressed as the distance of (2)Observation area arbitrary point +.>To radar sampling point->The actual distance history of (2) is expressed as +.>While step-wise solving the observation area for any point +.>To radar sampling point->The distance history of (2) is:
Here n_θ and n_φ are the numbers of steps of the radar sampling point P_(n_θ,n_φ) relative to the start sampling point P_Radar-Start in the azimuth and pitch directions, and Δθ and Δφ are the radar azimuth and pitch sampling intervals, respectively.
The distance error Er introduced by step-wise solving the distance history from an arbitrary observation-area point P_n to the radar sampling point P_(n_θ,n_φ) is the difference between the step-wise solved distance history and the actual distance history.
Here the quantities involved are: the distance history from the observation-area point P_n to the radar start sampling point P_Radar-Start; the actual distance history from P_n to the sampling point P_(n_θ,n_φ); the step numbers n_θ and n_φ of the sampling point P_(n_θ,n_φ) relative to P_Radar-Start in azimuth and pitch; and the radar azimuth and pitch sampling intervals Δθ and Δφ.
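As a concrete illustration of this error, the following sketch (our own geometry and first-order step model, not the patent's exact equations) compares a step-wise first-order distance with the exact distance to sampling points on a circular arc; the error Er grows with the accumulated step angle:

```python
import numpy as np

# Hedged numerical sketch: approximate the distance from a scene point P_n to
# radar sampling points on a circular arc by a first-order step in the
# sampling angle, and compare with the exact distance to illustrate Er.
rho = 0.6                      # radar arc radius (m), as in the example settings
theta0 = np.deg2rad(60.0)      # start sampling angle
dtheta = np.deg2rad(0.4)       # azimuth sampling interval
P_n = np.array([800.0, 0.0])   # scene point roughly at the example near range

def exact_dist(n):
    # Exact distance from P_n to the n-th sampling point on the arc.
    th = theta0 + n * dtheta
    return np.linalg.norm(P_n - rho * np.array([np.cos(th), np.sin(th)]))

# First-order step: reuse the per-step increment measured at the start point.
dR = exact_dist(1) - exact_dist(0)

def stepped_dist(n):
    return exact_dist(0) + n * dR

errors = [abs(stepped_dist(n) - exact_dist(n)) for n in range(151)]
# Er grows with the total step angle n*dtheta, which motivates both small step
# angles and restarting the iteration in each sampling-point region.
```

The quadratic growth of the error with the step count is what makes a bounded region size N_Sub necessary.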
The geometric relationship bounds this error: the distance histories from the observation-area point P_n to the radar start sampling point P_Radar-Start and to the sampling point P_(n_θ,n_φ), together with the constant arc length from P_Radar-Start to P_(n_θ,n_φ) on the spherical aperture, form a triangle, so the difference of the two distance histories can never exceed that constant arc length.
In summary, when the observation-area point P_n lies on the extension line through the radar start sampling point P_Radar-Start and the sampling point P_(n_θ,n_φ), the error Er of the step-wise solved distance history is maximal.
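The worst case follows from the triangle inequality on this geometry (a sketch in our own symbols: R_0 and R_n for the two distance histories, L for the constant arc length between the two radar positions, which upper-bounds the chord):

```latex
\lvert R_0 - R_{n} \rvert \;\le\; \operatorname{chord}\!\bigl(P_{\mathrm{Radar\text{-}Start}},\, P_{n_\theta, n_\varphi}\bigr) \;\le\; L
```

Equality in the first inequality holds exactly when P_n lies on the extension line through the two radar positions, so the step-wise accumulation deviates most in that configuration.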
In some embodiments, referring to fig. 13, the spherical aperture zoned nonlinear progressive phase iterative imaging method of the present disclosure includes:
compressing echo signals at all sampling points of the radar along the distance direction;
dividing an imaging area into a plurality of pixel areas, wherein each pixel area is provided with a plurality of pixels;
calculating the phase of each pixel point in an imaging area by adopting a zonal stepping phase iteration method;
calculating the size of a sampling point region according to the step-by-step distance solving error, and dividing the spherical synthetic aperture into a plurality of sampling point regions;
the value of each pixel point of the observation area is calculated point by point.
According to the embodiments of the disclosure, the sampling points of the spherical aperture radar are divided into regions, so that the distance history from any observation-area point to each sampling point can be solved iteratively through the stepping compensation factors, reducing the number of square-root operations.
As shown in FIG. 2, the radar starts sampling from the start sampling point P_Radar-Start and samples along the azimuth and pitch directions, with azimuth sampling interval Δθ and pitch sampling interval Δφ. The sampling point P_(n_θ,n_φ) is the radar position after stepping n_θ and n_φ times in the azimuth and pitch directions, respectively. The echo at sampling point P_(n_θ,n_φ) is expressed over the stepped frequencies
f = {f_min, …, f_max} (23)
where f is the stepped frequency of the radar echo, f_min is the lowest stepped frequency, f_max is the highest stepped frequency, C is the speed of light, and the echo phase depends on the distance from the radar sampling point P_(n_θ,n_φ) to the imaging-region pixel point.
Specifically, the imaging method of the present embodiment includes:
Step S1: range compression. The echo signal at each radar sampling point is compressed along the range direction, for example by an inverse fast Fourier transform (IFFT); the range-compressed signal is denoted Sr.
where f_c is the center frequency of the radar echo, f_c = (f_min + f_max)/2;
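A minimal sketch of step S1, range compression by IFFT of a stepped-frequency echo (the scatterer range R is an assumed illustration value, and the ideal point-target phase model is ours):

```python
import numpy as np

# Hedged sketch of step S1: a stepped-frequency echo of one point scatterer is
# compressed along the range direction; the peak bin maps back to its range.
C = 3e8
f_min, f_max, gamma = 77e9, 77.25e9, 5001        # example parameter settings
f = np.linspace(f_min, f_max, gamma)             # stepped frequencies, eq. (23)
R = 900.0                                        # assumed point-scatterer range (m)
echo = np.exp(-1j * 4 * np.pi * f * R / C)       # ideal point-target echo phase
sr = np.fft.ifft(echo)                           # range-compressed signal
B = f_max - f_min
r_axis = np.arange(gamma) * C / (2 * B)          # range bins, C/(2B) = 0.6 m apart
r_peak = r_axis[np.argmax(np.abs(sr))]           # lands within one bin of R
```

Note the peak is unambiguous only for ranges below C/(2·Δf), which the example parameters comfortably satisfy here.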
Step S2: dividing pixel points of an observation area, and dividing an imaging area into M R ×M Θ ×M Φ Pixels, pixel coordinates expressed asWherein M is R 、M Θ 、M Φ Respectively representing the distance direction, azimuth direction and pitching direction pixel number, m of the imaging region Rn Is pixel point m n Radial distance from origin of coordinates O, m θn Is pixel point m n An angle between the projection line of the plane XOY connected with the origin O and the positive direction of the X axis,/>Is pixel point m n An included angle between the connecting line with the origin O and the positive direction of the Z axis;
Step S3: calculating the phase of each pixel point; the phase PHI_3D of each pixel in the imaging area is calculated by the stepped phase iteration method;
Step S4: dividing the sphere sampling point region, calculating the size of the sampling point region according to the stepping distance solving error Er, and dividing the sphere synthetic aperture into I sampling point regions;
Step S5: three-dimensional reconstruction of the observation area; the value of each pixel point of the observation area is calculated point by point, completing the three-dimensional reconstruction of the observation area.
Example parameter settings in embodiments of the disclosure: radar stepped lowest frequency f_min = 77 GHz, highest frequency f_max = 77.25 GHz, radar bandwidth B = 250 MHz, number of frequency points γ = 5001; radar start sampling point P_Radar-Start = (0.6, 60°, 60°); radar azimuth and pitch sampling intervals Δθ = Δφ = 0.4°; numbers of azimuth and pitch sampling points both 151; radar azimuth and pitch synthetic-aperture angles Θ_SynA = Φ_SynE = 60°; observation-area near range m_R-near = 800 m, range extent m_R-ob = 500 m; observation-area start azimuth m_θ-Start = 60°, end azimuth m_θ-Stop = 120°, start pitch 60°, end pitch 120°.
Regarding observation area pixel division: the division of an imaging region into a plurality of pixel regions, each pixel region having a plurality of pixels, includes:
According to the obtained radar resolution, the observation area is divided into a plurality of pixels, so that the size of each pixel along the distance direction, the azimuth direction and the pitching direction is respectively smaller than the radar distance direction, the azimuth direction and the pitching direction resolution.
Specifically, in step S2 the imaging region is divided into M_R × M_Θ × M_Φ pixels. According to the parameter settings, the radar range resolution is C/(2·B), and the radar azimuth and pitch angular resolutions follow from the synthetic-aperture geometry.
According to the resolutions calculated above, the observation area is divided into M_R × M_Θ × M_Φ pixels whose sizes Δm_R, Δm_θ and Δm_φ along the range, azimuth and pitch directions are respectively smaller than the radar range, azimuth and pitch resolutions. Note that a smaller pixel size yields clearer detail in the reconstructed image, but increases the number of reconstructed pixels. The imaging-region pixel division is shown in FIG. 6.
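A sketch of this sizing rule under stated assumptions: only the range resolution C/(2B) is given in the text, so the angular pixel sizes below are placeholders assumed to lie under the (unreproduced) azimuth/pitch resolution formulas:

```python
import math

# Hedged sketch of step S2: choose pixel sizes strictly smaller than the radar
# resolutions, then count pixels over the example observation area.
C = 3e8
B = 250e6
range_res = C / (2 * B)                  # 0.6 m range resolution
dm_R = 0.5                               # chosen pixel size < range_res
dm_theta = dm_phi = 0.35                 # degrees, assumed < angular resolutions

m_R_ob = 500.0                           # observation-area range extent (m)
az_span = 120.0 - 60.0                   # m_theta-Start to m_theta-Stop (deg)
el_span = 120.0 - 60.0                   # start to end pitch (deg)

M_R = math.ceil(m_R_ob / dm_R)           # number of range pixels
M_Theta = math.ceil(az_span / dm_theta)  # number of azimuth pixels
M_Phi = math.ceil(el_span / dm_phi)      # number of pitch pixels
total = M_R * M_Theta * M_Phi            # pixels reconstructed point by point
```

Shrinking any of the three pixel sizes sharpens detail at a directly proportional cost in `total`.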
Regarding the calculation of each pixel phase: the disclosed method for calculating the phase of each pixel point in an imaging area by adopting a stepped phase iteration method in a divided area comprises the following steps:
calculating initial pixel point phase and phase compensation factors based on the near distance of an observation area, the distance direction size of a pixel, the radar center frequency, the stepping initial value and the number of distance direction pixels;
Iteratively calculating the phase of the pixel points along the distance direction until the stepping initial value is larger than the number of the pixels along the distance direction;
outputting a pixel point phase matrix along the distance direction;
based on the expansion of the pixel point phase matrix along the distance direction, the phase of the pixel points located at the same radial distance is calculated iteratively.
Specifically, in step S3, a step-by-step phase iterative method is used to calculate the phase of each pixel, and the flowchart is shown in fig. 10. The method comprises the following specific steps:
Step S31: parameter setting. The observation-area near range is m_R-near, the pixel size along the range direction is Δm_R, the radar center frequency is f_c, the step initial value is i_mR = 1, and the number of range pixels is M_R;
Step S32: calculating the phase and phase compensation factor of the initial pixel point, wherein the radial distance between the initial pixel point of the observation area and the origin of coordinates O is m R-near The phase is expressed as:
the distance phase compensation factor Δphi is expressed as:
wherein f c The center frequency of radar echo, C is the speed of light, m R-near For a short distance of the observation area Δm R The size of the pixel of the observation area along the distance direction;
Step S33: iteratively calculating the range-pixel phase. Set i_mR = i_mR + 1; if i_mR ≤ M_R, the phase of the next range pixel is obtained by adding the phase compensation factor ΔΦ; otherwise, step S34 is performed;
Step S34: output the pixel phase matrix PHI along the range direction;
Step S35: expanding PHI to PHI_3D. By the geometric relationship, pixels located at the same radial distance have the same phase, so expanding the phase matrix PHI to the M_R × M_Θ × M_Φ three-dimensional matrix PHI_3D yields the phase of all pixels of the observation area, as shown in FIG. 7.
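A sketch of steps S31-S35 under stated assumptions: the round-trip phase form 4π·f_c·r/C stands in for the patent's unreproduced equations, and the point is that one addition per pixel replaces a fresh multiply-and-divide, with the 3D cube obtained by broadcasting rather than recomputation:

```python
import numpy as np

# Hedged sketch: iterate pixel phases along range with a constant compensation
# factor instead of recomputing each phase from scratch.
C = 3e8
f_c = (77e9 + 77.25e9) / 2
m_R_near = 800.0
dm_R = 0.5
M_R = 1000

phi0 = 4 * np.pi * f_c * m_R_near / C          # initial pixel phase (assumed form)
dphi = 4 * np.pi * f_c * dm_R / C              # range phase compensation factor
PHI = phi0 + dphi * np.arange(M_R)             # iterative accumulation, vectorised

# Direct computation for comparison: same result, more arithmetic per pixel.
ranges = m_R_near + dm_R * np.arange(M_R)
PHI_direct = 4 * np.pi * f_c * ranges / C
assert np.allclose(PHI, PHI_direct)

# Step S35: pixels at equal radial distance share a phase, so PHI expands by
# broadcasting to the full M_R x M_Theta x M_Phi cube without recomputation.
M_T = M_P = 4                                   # tiny angular grid for the demo
PHI_3D = np.broadcast_to(PHI[:, None, None], (M_R, M_T, M_P))
```

The broadcast view costs no extra memory, mirroring the "same radial distance, same phase" observation in step S35.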
Region division of spherical sampling points: the disclosed method for calculating the size of sampling point areas according to step-by-step distance solving errors and dividing the spherical synthetic aperture into a plurality of sampling point areas comprises the following steps:
selecting an observation area pixel point and a radar initial sampling point;
calculating the distance course from the pixel point of the observation area to the radar initial sampling point;
calculating an azimuth distance compensation factor coefficient and a pitching distance compensation factor coefficient;
solving the Maclaurin approximation of the distance history;
solving an actual distance course;
and calculating the size of the sampling point region and the number of the dividing regions, and calculating the sampling point region according to the phase error limiting condition.
From the step-wise distance error analysis above, the error Er of the step-wise solved distance history is largest when the observation-area point P_n lies on the extension line through the radar start sampling point P_Radar-Start and the sampling point P_(n_θ,n_φ); therefore, the maximum sampling-point region into which the spherical synthetic aperture can be divided is calculated from the phase constraint condition at this worst case.
For convenience of calculation, when dividing the sampling-point regions, regions with equal numbers of azimuth and pitch sampling points (N_SubAzi = N_SubEle) are selected starting from the radar start sampling point P_Radar-Start; an observation-area pixel on the worst-case extension line is selected, and the maximum partitionable sampling-point region is calculated. The flowchart is shown in FIG. 11. The specific steps are as follows:
Step S41: selecting an observation-area pixel point, the radar start sampling point P_Radar-Start, and the radar sampling-point coordinates P_(n_θ,n_φ);
Step S42: calculating the distance history from the selected observation-area pixel to the radar start sampling point P_Radar-Start, i.e., replacing the arbitrary point in equation (3) with the selected pixel,
where ρ is the radial distance of the radar start sampling point P_Radar-Start from the origin O, θ_0 is the angle between the positive X axis and the projection onto the plane XOY of the line connecting P_Radar-Start and O, φ_0 is the angle between that line and the positive Z axis; m_R-near is the observation-area near range, m_θ-Stop is the observation-area end azimuth, and m_φ-Stop is the observation-area end pitch angle.
Step S43: calculating an azimuth distance compensation factor coefficient A θ 、B θ In the formula (8) and the formula (9)Replaced by->Can be written separately as:
Step S44: calculating the pitch distance compensation factor coefficients by replacing the arbitrary point in equations (16) and (17) with the selected pixel,
where A_θ, B_θ and their pitch counterparts are the distance compensation factor coefficients; ρ is the radial distance of the radar start sampling point P_Radar-Start from the origin O; θ_0 is the angle between the positive X axis and the projection onto the plane XOY of the line connecting P_Radar-Start and O; φ_0 is the angle between that line and the positive Z axis; m_R-near is the observation-area near range; m_θ-Stop is the observation-area end azimuth; and m_φ-Stop is the observation-area end pitch angle.
Step S45: solving the Maclaurin approximation of the distance history. Replacing the arbitrary point in equation (19) with the selected pixel yields the approximate distance history from the selected region pixel to the radar sampling point P_(n_θ,n_φ),
where n_θ and n_φ are the numbers of steps of the radar sampling point P_(n_θ,n_φ) relative to the start sampling point P_Radar-Start in the azimuth and pitch directions; Δθ and Δφ are the radar azimuth and pitch sampling intervals; A_θ and B_θ are the azimuth distance compensation factor coefficients; and their pitch counterparts are the pitch distance compensation factor coefficients.
Step S46: solving the actual distance history; the actual distance from the selected pixel to the radar sampling point P_(n_θ,n_φ) is computed directly;
Step S47: calculating the sampling-point region size and the number of divided regions according to the phase error limiting condition, where f_c is the center frequency of the radar echo, and the condition compares the step-wise solved distance history from the selected pixel to the radar sampling point P_(n_θ,n_φ) with the actual distance, as given by equations (34) and (35). Substituting equations (34) and (35) into equation (36), the maximum values of n_θ and n_φ that still satisfy the phase constraint give the region size N_Sub; the radar sampling points are therefore divided into I_Azi = N_θ/N_Sub regions along the azimuth direction and I_Ele regions along the pitch direction, and the total number of regions is:
I = I_Azi · I_Ele (37)
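The region-size search of step S47 can be sketched numerically under stated assumptions: a first-order stepped distance, a worst-case pixel on the x-axis, and a π/4 phase-error bound all stand in for the patent's equations (34)-(36):

```python
import numpy as np

# Hedged sketch of step S47: find the largest region size N_sub such that the
# phase error of the step-wise distance stays below an assumed bound.
C = 3e8
f_c = (77e9 + 77.25e9) / 2
rho, dtheta = 0.6, np.deg2rad(0.4)
theta0 = np.deg2rad(60.0)
P_n = np.array([800.0, 0.0])                     # worst-case pixel (assumed)

def exact(n):
    th = theta0 + n * dtheta
    return np.linalg.norm(P_n - rho * np.array([np.cos(th), np.sin(th)]))

dR = exact(1) - exact(0)                          # per-step increment at start
N_theta = 151                                     # azimuth sampling points
N_sub = N_theta
for n in range(2, N_theta):
    Er = abs(exact(0) + n * dR - exact(n))        # step-wise distance error
    if 4 * np.pi * f_c * Er / C > np.pi / 4:      # assumed phase constraint
        N_sub = n - 1
        break

I_azi = -(-N_theta // N_sub)                      # ceil: regions along azimuth
```

Restarting the compensation-factor iteration at the boundary of each region resets the accumulated error, so the phase bound holds everywhere on the aperture.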
Three-dimensional reconstruction of the observation region: the point-by-point calculation of the value of each pixel point of the observation area of the present disclosure includes:
and reconstructing each pixel point of the observation area point by point based on an algorithm so as to realize three-dimensional resolution imaging of the observation area.
Specifically, in step S5, the algorithm shown in FIG. 8 reconstructs each pixel of the observation area point by point, realizing three-dimensional resolution imaging of the observation area; the flowchart is shown in FIG. 12. The steps are as follows:
Step S501: parameter setting. The inputs are the range-compressed radar echo signal Sr, the number of observation-area pixel points M_R × M_Θ × M_Φ, the number I of radar sampling-point regions, the size N_Sub × N_Sub of each sampling-point region, the radar start sampling point P_Radar-Start-i of the ith sampling-point region, the radar azimuth sampling interval Δθ, the radar pitch sampling interval Δφ, and the pixel phase PHI_3D; the observation-area pixel index k and the sampling-point region index i are both initialized to 1;
Step S502: calculating the initial distance and distance compensation factors for the ith region. Given the kth pixel coordinates m_k and the start-point coordinates P_Radar-Start-i of the ith sampling-point region, calculate the distance from pixel m_k to sampling point P_Radar-Start-i according to equation (3); then calculate the compensation factor coefficients A_θ-i and B_θ-i and their pitch counterparts according to equations (8), (9), (16) and (17); and further calculate the distance compensation factor ΔTh_i(n_θ) and its pitch counterpart from equations (10) and (18);
Step S503: iteratively calculating the distance history from the kth pixel m_k to the pitch sampling points of the ith sampling-point region;
Step S5031: calculating the initial distance. Set the azimuth step number n_θ = 0 and the pitch step number to 0, so that the radar sampling point is the region start sampling point P_Radar-00-i; the distance history from the kth pixel m_k to P_Radar-00-i is the distance calculated in step S502;
Step S5032: iteratively calculating the distance history of the pitch sampling points. Increment the pitch step number; if it is less than N_Sub, the distance history from the kth pixel m_k to the sampling point is obtained by adding the pitch distance compensation factor of the current step of the ith region; otherwise, step S504 is entered;
Step S504: iteratively calculating the distance history from the kth pixel m_k to the sampling points of the ith sampling-point region;
Step S5042: iteratively calculating the distance history of the azimuth sampling points. Set n_θ = n_θ + 1; if n_θ < N_Sub, the distance history from the kth pixel m_k to the sampling point is obtained by adding the azimuth distance compensation factor ΔTh_i(n_θ); otherwise, step S5043 is entered;
Step S5043: iterating in the pitch direction. Increment the pitch step number; if it is less than N_Sub, repeat step S5042; otherwise, step S505 is performed;
Step S505: iterating over the sampling-point regions. Set the sampling-point region index i = i + 1; if i ≤ I, repeat steps S502 to S504; otherwise, enter step S506, at which point the distance history from the kth pixel m_k to all radar sampling points has been obtained;
Step S506: calculating the peak position of pixel m_k in the range-compressed signal at each sampling point,
where B is the radar signal bandwidth, in this example (f_max − f_min); f_min and f_max are the lowest and highest stepped frequencies; and τ_k is a matrix over all sampling points;
where f_c is the radar center frequency, in this example (f_max + f_min)/2, and φ_k is a matrix over all sampling points;
Step S508: according to the peak frequency-point positions of the compressed signal at each sampling point calculated in equation (42), take out the corresponding peak values of the compressed signal at each sampling point,
wherein r is k Represents the kth pixel point m k Range history to all radar sampling points, sr k Is thatIs a matrix of (a);
m_k = Σ Sr_k .* exp{jφ_k} (45)
where ".*" denotes element-wise multiplication of corresponding matrix positions and "Σ" denotes summation of the matrix elements;
Step S510: iterating over the observation-area pixels. Set the pixel index k = k + 1; if k ≤ M_R × M_Θ × M_Φ, repeat steps S502 to S509; otherwise, perform step S511, at which point the values of all area pixel points are obtained as:
m = {m_k} (46)
wherein m is k Representing the value of the kth pixel point of the observation area, M is M R ×M Θ ×M Φ Is a three-dimensional matrix of (2);
Step S511: region reconstruction. Multiply the observation-area pixel values m element-wise by the observation-area pixel phase PHI_3D to complete the reconstruction of the observation-area pixels,
m_complex = m .* exp{-j·PHI_3D} (47)
where m is the observation-area pixel values, PHI_3D is the observation-area pixel phase, and ".*" denotes element-wise multiplication of corresponding matrix elements.
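A miniature, hedged sketch of the point-by-point reconstruction (steps S501-S511): back-project range-compressed stepped-frequency echoes from a short arc of sampling points onto candidate range pixels. The geometry, the scatterer position, and the peak-phase model are illustrative assumptions, not the patent's exact equations (38)-(47):

```python
import numpy as np

C = 3e8
f = np.linspace(77e9, 77.25e9, 501)            # stepped frequencies, eq. (23)
f_c = (f[0] + f[-1]) / 2
df = f[1] - f[0]
theta = np.deg2rad(np.arange(60.0, 66.0, 0.4)) # small azimuth-only aperture
radar = 0.6 * np.stack([np.cos(theta), np.sin(theta)], axis=1)
target = np.array([90.0, 0.0])                 # assumed point scatterer

d_true = np.linalg.norm(target - radar, axis=1)
echoes = np.exp(-1j * 4 * np.pi * np.outer(d_true, f) / C)
sr = np.fft.ifft(echoes, axis=1)               # step S1: range compression

pixels = np.arange(80.0, 100.0, 0.1)           # candidate range pixels on the axis
image = np.zeros(len(pixels))
for k, r in enumerate(pixels):
    d = np.linalg.norm(np.array([r, 0.0]) - radar, axis=1)
    bins = np.round(len(f) * df * 2 * d / C).astype(int)  # peak bin per point
    vals = sr[np.arange(len(radar)), bins]                # extract peaks
    # Compensate the center-frequency phase and sum coherently over the arc.
    image[k] = np.abs(np.sum(vals * np.exp(1j * 4 * np.pi * f_c * d / C)))

r_hat = pixels[np.argmax(image)]               # lands near the true range
```

In the full method, the per-pixel distances `d` would come from the stepped compensation-factor iteration rather than a direct norm, which is precisely where the square-root savings arise.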
As one aspect of the present disclosure, there is also provided a device for spherical aperture zoned progressive phase iterative imaging, in which a synthetic aperture or real aperture radar whose apertures lie on the same spherical surface is formed by azimuth-pitch rotation; the device comprises:
an error analysis module configured to perform a distance error analysis based on the step-wise distance compensation factor calculation;
an imaging module configured to iteratively calculate, based on the region division of the spherical aperture radar sampling points, the distance history from any point of the observation area to the sampling points through the stepping distance compensation factors, so as to complete the three-dimensional reconstruction of the observation area.
In combination with the foregoing examples, in some implementations, the error analysis module of the present disclosure may be further configured to:
obtaining an azimuth distance compensation factor;
obtaining a pitching distance compensation factor;
and acquiring smaller azimuth stepping angles and pitch stepping angles according to the azimuth distance compensation factors and the pitch distance compensation factors so as to reduce distance errors.
In combination with the foregoing examples, in some implementations, the error analysis module of the present disclosure may be further configured to:
setting the number of steps of the radar sampling point relative to the radar start sampling point along the azimuth direction, and obtaining the Maclaurin expression of the radar sampling point;
obtaining the Maclaurin approximate distance based on processing of the Maclaurin expression;
obtaining the azimuth distance compensation factor based on the Maclaurin approximate distance;
setting the number of steps of the radar sampling point relative to the radar start sampling point along the pitch direction, and obtaining the Maclaurin expression of the radar sampling point;
obtaining the Maclaurin approximate distance based on processing of the Maclaurin expression;
and obtaining the pitch distance compensation factor based on the Maclaurin approximate distance.
In combination with the foregoing examples, in some implementations, the error analysis module of the present disclosure may be further configured to:
Combining the azimuth distance compensation factor and the pitching distance compensation factor, and solving the distance course from any point of the observation area to the radar sampling point in a stepping way based on the distance from any point of the observation area to the radar initial sampling point;
and solving the distance error generated by the distance process from any point of the observation area to the radar sampling point in a stepping way.
In combination with the foregoing examples, in some implementations, the imaging module of the present disclosure may be further configured to:
compressing echo signals at all sampling points of the radar along the distance direction;
dividing the imaging region into a plurality of pixel regions, each pixel region having a plurality of pixels, comprising: dividing an observation area into a plurality of pixels according to the obtained radar resolution, so that the size of each pixel along the distance direction, the azimuth direction and the pitching direction is respectively smaller than the radar distance direction, the azimuth direction and the pitching direction resolution;
the method for calculating the phase of each pixel point in an imaging area by adopting a zonal stepping phase iterative method comprises the following steps: calculating initial pixel point phase and phase compensation factors based on the near distance of an observation area, the distance direction size of a pixel, the radar center frequency, the stepping initial value and the number of distance direction pixels; iteratively calculating the phase of the pixel points along the distance direction until the stepping initial value is larger than the number of the pixels along the distance direction; outputting a pixel point phase matrix along the distance direction; iteratively calculating phases of pixel points located at the same radial distance based on the expansion of the pixel point phase matrix along the distance direction;
calculating the size of a sampling point area according to the step-by-step distance solving error, and dividing the spherical synthetic aperture into a plurality of sampling point areas, including: selecting an observation area pixel point and a radar initial sampling point; calculating the distance history from the observation area pixel point to the radar initial sampling point; calculating an azimuth distance compensation factor coefficient and a pitch distance compensation factor coefficient; solving the Maclaurin approximation of the distance history; solving the actual distance history; and calculating the size of the sampling point region and the number of divided regions according to the phase error limiting condition;
calculating the value of each pixel point of the observation area point by point comprises the following steps: and reconstructing each pixel point of the observation area point by point based on an algorithm so as to realize three-dimensional resolution imaging of the observation area.
Specifically, one inventive concept of the present disclosure is to form, at least by azimuth-pitch rotation, a synthetic aperture or real aperture radar whose apertures lie on the same spherical surface; to perform distance error analysis based on the stepping distance compensation factor calculation; and to divide the sampling points of the spherical aperture radar into regions so that the distance history from any observation-area point to the sampling points is obtained by iterative calculation of the stepping distance compensation factors, completing the three-dimensional reconstruction of the observation area. Within each region, the distance from an observation-area pixel to a sampling point is calculated by stepping iteration, which reduces the square-root operations in the distance calculation and improves algorithm efficiency. The specific steps for calculating the distance compensation factors are derived; the distance error caused by the step-wise iterative distance solution is analyzed, and the maximum extent of the sampling-point region division is calculated from the phase error condition. In view of the particular structure of the imaging area, a method for iteratively solving the observation-area pixel phases is provided, which reduces the square-root operations in the phase solution and further improves algorithm efficiency.
As one aspect of the disclosure, the disclosure further provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, implement the spherical aperture zoned progressive phase iterative imaging method described above, comprising at least:
forming a synthetic aperture or a real aperture radar with apertures positioned on the same spherical surface through azimuth-pitching rotation;
calculating based on a stepping distance compensation factor, and analyzing a distance error;
based on the regional division of sampling points of the spherical aperture radar, the distance process from any point of an observation region to the sampling points is obtained through iterative calculation of a stepping distance compensation factor, so that the three-dimensional reconstruction of the observation region is completed.
In some embodiments, the processor executing the computer-executable instructions may be a processing device including one or more general-purpose processing devices, such as a microprocessor, central processing unit (CPU), or graphics processing unit (GPU). More specifically, the processor may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor running other instruction sets, or a processor running a combination of instruction sets. The processor may also be one or more special-purpose processing devices, such as an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), or a system on a chip (SoC).
In some embodiments, the computer readable storage medium may be memory, such as read-only memory (ROM), random-access memory (RAM), phase-change random-access memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), electrically erasable programmable read-only memory (EEPROM), other types of random-access memory (RAM), flash memory disk or other forms of flash memory, cache, registers, static memory, compact disk read-only memory (CD-ROM), digital Versatile Disk (DVD) or other optical storage, magnetic cassettes or other magnetic storage devices, or any other possible non-transitory medium which can be used to store information or instructions that can be accessed by a computer device, and the like.
In some embodiments, the computer-executable instructions may be implemented as a plurality of program modules which collectively implement the spherical aperture zoned progressive phase iterative imaging method according to any of the present disclosure.
The present disclosure describes various operations or functions that may be implemented or defined as software code or instructions. These may be implemented as software code or instruction modules stored in a memory which, when executed by a processor, implement the corresponding steps and methods.
Such content may be source code or differential code ("delta" or "patch" code) that may be directly executable ("object" or "executable" form). Software implementations of the embodiments described herein may be provided by an article of manufacture having the code or instructions stored thereon, or by a method of operating a communication interface to transmit data over the communication interface. A machine- or computer-readable storage medium may cause a machine to perform the described functions or operations and includes any mechanism for storing information in a form accessible by a machine (e.g., a computing device or electronic system), such as recordable/non-recordable media (e.g., read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, etc.). A communication interface includes any mechanism that interfaces with a hard-wired, wireless, optical, or other medium to communicate with another device, such as a memory bus interface, a processor bus interface, an Internet connection, or a disk controller. The communication interface may be configured by providing configuration parameters and/or sending signals to prepare it to provide a data signal describing the software content, and may be accessed by sending one or more commands or signals to it.
The computer-executable instructions of embodiments of the present disclosure may be organized into one or more computer-executable components or modules. Aspects of the disclosure may be implemented with any number and combination of such components or modules. For example, aspects of the present disclosure are not limited to the specific computer-executable instructions or the specific components or modules illustrated in the figures and described herein. Other embodiments may include different computer-executable instructions or components having more or less functionality than illustrated and described herein.
The above description is intended to be illustrative and not restrictive. For example, the above-described examples (or one or more aspects thereof) may be used in combination with each other. Other embodiments may be devised by those of ordinary skill in the art upon reading the above description. In addition, in the above detailed description, various features may be grouped together to streamline the disclosure. This should not be interpreted as intending that an unclaimed disclosed feature is essential to any claim. Rather, the disclosed subject matter may lie in less than all features of a particular disclosed embodiment. Thus, the following claims are hereby incorporated into the detailed description as examples or embodiments, with each claim standing on its own as a separate embodiment, and it is contemplated that these embodiments may be combined with one another in various combinations or permutations. The scope of the disclosure should be determined with reference to the appended claims, along with the full scope of equivalents to which such claims are entitled.
The above embodiments are merely exemplary embodiments of the present disclosure, which are not intended to limit the present disclosure, the scope of which is defined by the claims. Various modifications and equivalent arrangements of parts may be made by those skilled in the art, which modifications and equivalents are intended to be within the spirit and scope of the present disclosure.
Claims (7)
1. A spherical aperture zoned progressive phase iterative imaging method, comprising the following steps:
forming a synthetic aperture or a real aperture radar with apertures positioned on the same spherical surface through azimuth-pitching rotation;
calculating based on a stepping distance compensation factor, and analyzing a distance error;
based on region division of the sampling points of the spherical aperture radar, obtaining the distance history from any point of the observation area to the sampling points through iterative calculation of the stepping distance compensation factor, so as to complete the three-dimensional reconstruction of the observation area;
the step-based distance compensation factor calculation, performing distance error analysis, includes:
obtaining an azimuth distance compensation factor;
obtaining a pitching distance compensation factor;
selecting smaller azimuth and pitch stepping angles according to the azimuth distance compensation factor and the pitching distance compensation factor, so as to reduce the distance error;
wherein,
the obtaining the azimuth distance compensation factor comprises the following steps:
setting the number of steps of a radar sampling point along the azimuth direction and the number of steps of a radar initial sampling point along the pitching direction, and obtaining a Maclaurin expansion for the radar sampling point;
obtaining a Maclaurin approximate distance based on processing of the Maclaurin expansion;
obtaining the azimuth distance compensation factor based on the Maclaurin approximate distance;
the obtaining the pitching distance compensation factor comprises the following steps:
setting the number of steps of a radar sampling point along the azimuth direction and the number of steps of a radar initial sampling point along the pitching direction, and obtaining a Maclaurin expansion for the radar sampling point;
obtaining a Maclaurin approximate distance based on processing of the Maclaurin expansion;
obtaining the pitching direction distance compensation factor based on the Maclaurin approximate distance;
wherein, carry out the distance error analysis, include:
combining the azimuth distance compensation factor and the pitching distance compensation factor, and solving, step by step, the distance history from any point of the observation area to the radar sampling point based on the distance from that point to the radar initial sampling point;
solving, step by step, the distance error produced in the distance history from any point of the observation area to the radar sampling point;
when the observation area point is located on the extension line through the radar initial sampling point and the sampling point, the error of the step-by-step solved distance history is maximum.
2. The method of claim 1, wherein the spherical aperture zoned nonlinear progressive phase iterative imaging method comprises:
compressing echo signals at all sampling points of the radar along the distance direction;
dividing an imaging area into a plurality of pixel areas, wherein each pixel area is provided with a plurality of pixels;
calculating the phase of each pixel point in the imaging area by adopting a zoned stepping phase iteration method;
calculating the size of a sampling point region according to the step-by-step distance solving error, and dividing the spherical synthetic aperture into a plurality of sampling point regions;
calculating the value of each pixel point of the observation area point by point.
3. The method of claim 2, wherein the dividing the imaging region into a number of pixel regions, each pixel region having a number of pixels, comprises:
according to the obtained radar resolution, dividing the observation area into a plurality of pixels, such that the size of each pixel along the distance, azimuth and pitching directions is respectively smaller than the radar resolution in the corresponding direction.
4. The method according to claim 3, wherein the calculating the phase of each pixel point in the imaging region by using the zoned stepping phase iteration method comprises:
calculating an initial pixel point phase and phase compensation factors based on the nearest range of the observation area, the distance-direction pixel size, the radar center frequency, the stepping initial value and the number of distance-direction pixels;
iteratively calculating the phase of the pixel points along the distance direction until the stepping initial value is larger than the number of the pixels along the distance direction;
outputting a pixel point phase matrix along the distance direction;
based on the expansion of the pixel point phase matrix along the distance direction, the phase of the pixel points located at the same radial distance is calculated iteratively.
5. The method of claim 4, wherein calculating the sample point region size from the step-wise range solution error and dividing the spherical synthetic aperture into a number of sample point regions comprises:
selecting an observation area pixel point and the radar initial sampling point;
calculating the distance history from the observation area pixel point to the radar initial sampling point;
calculating the azimuth distance compensation factor coefficient and the pitching distance compensation factor coefficient;
solving the Maclaurin approximation of the distance history;
solving the actual distance history;
and calculating the sampling point region size and the number of divided regions, and determining the sampling point regions according to the phase error limiting condition.
6. The method of claim 5, wherein calculating the value of each pixel of the observation region point by point comprises:
reconstructing each pixel point of the observation area point by point based on the algorithm, so as to realize three-dimensional resolution imaging of the observation area.
7. A spherical aperture zoned progressive phase iterative imaging device, configured to form a synthetic aperture or a real aperture radar with apertures located on the same spherical surface through azimuth-pitching rotation, the device comprising:
an error analysis module configured to perform a distance error analysis based on a step-wise distance compensation factor calculation as described in the method of claim 1;
an imaging module configured to perform region division on the sampling points of the spherical aperture radar, obtain the distance history from any point of the observation area to the sampling points through iterative calculation of the stepping distance compensation factor, and complete the three-dimensional reconstruction of the observation area.
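The stepping distance compensation of claim 1 can be illustrated with a small numerical sketch. This is not the patented implementation: the sphere radius, step angle, elevation, and scene point below are illustrative assumptions, and where the patent derives its compensation factors from a Maclaurin expansion, the sketch uses the equivalent first-order update of the range history in the azimuth step angle, then compares it against the exact point-by-point Euclidean distance.

```python
import numpy as np

R0 = 1.0                       # aperture sphere radius (m), assumed
d_theta = np.deg2rad(0.1)      # azimuth step angle, assumed small
n_steps = 200
p = np.array([0.1, 0.05, 0.5])  # arbitrary observation-area point

def sample_point(i):
    """Radar sampling point i on the sphere (fixed elevation, azimuth stepping)."""
    az = i * d_theta
    el = np.deg2rad(30.0)
    return R0 * np.array([np.cos(el) * np.cos(az),
                          np.cos(el) * np.sin(az),
                          np.sin(el)])

# Exact range history: point-by-point Euclidean distance (the costly global way).
exact = np.array([np.linalg.norm(p - sample_point(i)) for i in range(n_steps)])

# Progressive range history: start from the exact initial range and apply a
# first-order step compensation dr/dtheta * d_theta at every azimuth step.
prog = np.empty(n_steps)
prog[0] = exact[0]
for i in range(1, n_steps):
    s = sample_point(i - 1)
    ds = (sample_point(i) - sample_point(i - 1)) / d_theta  # tangent ds/dtheta
    dr = -np.dot(p - s, ds) / prog[i - 1]  # derivative of r = |p - s(theta)|
    prog[i] = prog[i - 1] + dr * d_theta

max_err = np.max(np.abs(prog - exact))
print(f"max range error over {n_steps} steps: {max_err:.2e} m")
```

For small step angles the accumulated error stays far below a wavelength at typical radar frequencies, which is why the step-wise update can replace the global point-by-point distance computation; claim 5's region division bounds this accumulation by restarting the iteration per sampling point region.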
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011609245.6A CN112835040B (en) | 2020-12-30 | 2020-12-30 | Spherical aperture zoned progressive phase iterative imaging method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011609245.6A CN112835040B (en) | 2020-12-30 | 2020-12-30 | Spherical aperture zoned progressive phase iterative imaging method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112835040A CN112835040A (en) | 2021-05-25 |
CN112835040B true CN112835040B (en) | 2023-05-23 |
Family
ID=75925391
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011609245.6A Active CN112835040B (en) | 2020-12-30 | 2020-12-30 | Spherical aperture zoned progressive phase iterative imaging method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112835040B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3950747A (en) * | 1975-02-28 | 1976-04-13 | International Telephone & Telegraph Corporation | Optical processing system for synthetic aperture radar |
US4924229A (en) * | 1989-09-14 | 1990-05-08 | The United States Of America As Represented By The United States Department Of Energy | Phase correction system for automatic focusing of synthetic aperture radar |
WO1998009134A1 (en) * | 1996-08-29 | 1998-03-05 | Washington University | Method and apparatus for generating a three-dimensional topographical image of a microscopic specimen |
JP2001074832A (en) * | 1999-09-02 | 2001-03-23 | Mitsubishi Electric Corp | Radar device |
JP2004198275A (en) * | 2002-12-19 | 2004-07-15 | Mitsubishi Electric Corp | Synthetic aperture radar system, and image reproducing method |
JP2007256058A (en) * | 2006-03-23 | 2007-10-04 | Mitsubishi Electric Corp | Radar image processing apparatus |
JP2009519436A (en) * | 2005-11-09 | 2009-05-14 | キネティック リミテッド | Passive detection device |
JP2012189445A (en) * | 2011-03-10 | 2012-10-04 | Panasonic Corp | Object detection device and object detection method |
EP2759847A1 (en) * | 2014-01-08 | 2014-07-30 | Institute of Electronics, Chinese Academy of Sciences | Method and apparatus for determining equivalent velocity |
EP3144702A1 (en) * | 2015-09-17 | 2017-03-22 | Institute of Electronics, Chinese Academy of Sciences | Method and device for synthethic aperture radar imaging based on non-linear frequency modulation signal |
CN109597076A (en) * | 2018-12-29 | 2019-04-09 | 内蒙古工业大学 | Data processing method and device for ground synthetic aperture radar |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5337002A (en) * | 1991-03-01 | 1994-08-09 | Mercer John E | Locator device for continuously locating a dipole magnetic field transmitter and its method of operation |
US6008651A (en) * | 1991-03-01 | 1999-12-28 | Digital Control, Inc. | Orientation sensor arrangement and method for use in a system for monitoring the orientation of an underground boring tool |
DE4427657C2 (en) * | 1994-08-05 | 1996-10-24 | Deutsche Forsch Luft Raumfahrt | Process for image generation by means of two-dimensional data processing on a radar with a synthetic aperture |
CN101983034B (en) * | 2008-03-31 | 2013-02-13 | 皇家飞利浦电子股份有限公司 | Fast tomosynthesis scanner apparatus and ct-based method based on rotational step-and-shoot image acquisition without focal spot motion during continuous tube movement for use in cone-beam volume ct mammography imaging |
US8193967B2 (en) * | 2008-12-10 | 2012-06-05 | The United States Of America As Represented By The Secretary Of The Army | Method and system for forming very low noise imagery using pixel classification |
WO2010119447A1 (en) * | 2009-04-16 | 2010-10-21 | Doron Shlomo | Imaging system and method |
CN101900812B (en) * | 2009-05-25 | 2012-11-14 | 中国科学院电子学研究所 | Three-dimensional imaging method in widefield polar format for circular synthetic aperture radar |
US9041585B2 (en) * | 2012-01-10 | 2015-05-26 | Raytheon Company | SAR autofocus for ground penetration radar |
CN106054183A (en) * | 2016-04-29 | 2016-10-26 | 深圳市太赫兹科技创新研究院有限公司 | Three-dimensional image reconstruction method and device based on synthetic aperture radar imaging |
CN106556874B (en) * | 2016-10-31 | 2018-10-23 | 华讯方舟科技有限公司 | A kind of short distance microwave imaging method and system |
CN106772369B (en) * | 2016-12-16 | 2019-04-30 | 北京华航无线电测量研究所 | A kind of three-D imaging method based on multi-angle of view imaging |
CN107238866B (en) * | 2017-05-26 | 2019-05-21 | 西安电子科技大学 | Millimeter wave video imaging system and method based on synthetic aperture technique |
CN108387896B (en) * | 2018-01-03 | 2020-07-07 | 厦门大学 | Automatic convergence imaging method based on ground penetrating radar echo data |
CN108732555B (en) * | 2018-06-04 | 2022-05-27 | 内蒙古工业大学 | Automatic driving array microwave imaging motion compensation method |
AU2018214017A1 (en) * | 2018-08-07 | 2020-02-27 | The University Of Melbourne | Quantum Spin Magnetometer |
CN110793543B (en) * | 2019-10-21 | 2023-06-13 | 国网电力科学研究院有限公司 | Positioning navigation precision measuring device and method of electric power inspection robot based on laser scanning |
CN110837127B (en) * | 2019-11-26 | 2021-09-10 | 内蒙古工业大学 | Sparse antenna layout method based on cylindrical radar imaging device |
CN110837128B (en) * | 2019-11-26 | 2021-09-10 | 内蒙古工业大学 | Imaging method of cylindrical array radar |
CN111142164B (en) * | 2019-11-26 | 2022-07-05 | 内蒙古工业大学 | Cylindrical radar imaging system |
Non-Patent Citations (1)
Title |
---|
CSAR wavenumber-domain three-dimensional imaging algorithm based on an impulse regime; Li Xiangyang; Zhang Zishan; Zhang Hanhua; Li Yanghuan; Computer Simulation (Issue 10); 228-232 *
Also Published As
Publication number | Publication date |
---|---|
CN112835040A (en) | 2021-05-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109493407B (en) | Method and device for realizing laser point cloud densification and computer equipment | |
EP3506161A1 (en) | Method and apparatus for recovering point cloud data | |
WO2018127007A1 (en) | Depth image acquisition method and system | |
CN112288853B (en) | Three-dimensional reconstruction method, three-dimensional reconstruction device, and storage medium | |
US20190096050A1 (en) | Method and device for three-dimensional reconstruction | |
CN103438826B (en) | The three-dimension measuring system of the steel plate that laser combines with vision and method | |
CN112444798B (en) | Method and device for calibrating space-time external parameters of multi-sensor equipment and computer equipment | |
CN114494388B (en) | Three-dimensional image reconstruction method, device, equipment and medium in large-view-field environment | |
CN105513078A (en) | Standing tree information acquisition method and device based on images | |
CN103839238A (en) | SAR image super-resolution method based on marginal information and deconvolution | |
CN115439528B (en) | Method and equipment for acquiring image position information of target object | |
CN112734910A (en) | Real-time human face three-dimensional image reconstruction method and device based on RGB single image and electronic equipment | |
CN103076608B (en) | Contour-enhanced beaming-type synthetic aperture radar imaging method | |
CN114078145A (en) | Blind area data processing method and device, computer equipment and storage medium | |
CN113610975B (en) | Quasi-three-dimensional map generation and coordinate conversion method | |
CN112835039B (en) | Planar aperture zoned nonlinear progressive phase iterative imaging method and device | |
CN112799064B (en) | Cylindrical aperture nonlinear progressive phase iterative imaging method and device | |
CN112835040B (en) | Spherical aperture zoned progressive phase iterative imaging method and device | |
CN115542320A (en) | Rapid real-time sub-aperture imaging method and device for ground-based synthetic aperture radar | |
CN104574428A (en) | SAR image incident angle estimation method | |
CN108253931B (en) | Binocular stereo vision ranging method and ranging device thereof | |
US11403815B2 (en) | Gridding global data into a minimally distorted global raster | |
CN108959680B (en) | Ice and snow forward scattering correction method based on nuclear-driven bidirectional reflection distribution function model | |
CN103630121A (en) | Linear array image differential rectification method based on optimal scanning line rapid positioning | |
CN114414065B (en) | Object temperature detection method, device, computer equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||