CN113835222A - Curved surface array rapid imaging method - Google Patents

Curved surface array rapid imaging method

Info

Publication number: CN113835222A
Application number: CN202111264008.5A
Authority: CN (China)
Prior art keywords: array, imaging, phase, image, weighting
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 张继龙, 张继康
Current Assignee: Suzhou Weimo Electronic Information Technology Co., Ltd.
Application filed by: Suzhou Weimo Electronic Information Technology Co., Ltd.
Priority date: 2021-10-28
Filing date: 2021-10-28
Publication date: 2021-12-24

Classifications

    • G - PHYSICS
    • G02 - OPTICS
    • G02B - OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/0012 - Optical design, e.g. procedures, algorithms, optimisation routines

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to the technical fields of optical imaging, microwave imaging, radar detection, sonar, ultrasonic imaging, target detection based on media such as sound, light and electricity, imaging identification and wireless communication, and in particular to a curved surface array rapid imaging method and its application in these fields. The method achieves rapid imaging with a curved array, is applicable to various non-planar and conformal arrays, and supports both rapid imaging and target detection. In addition, the method offers good compatibility, a low computational load, good imaging quality and a wide range of applications.

Description

Curved surface array rapid imaging method
Technical Field
The invention relates to the technical fields of optical imaging, microwave imaging, radar detection, sonar, ultrasonic imaging, target detection based on media such as sound, light and electricity, imaging identification and wireless communication, and in particular to a curved surface array rapid imaging method and its application in these fields.
Background
In the fields of target detection and wireless communication, and especially in radar detection, curved surface arrays are widely used: many ground-based radars employ spherical or cylindrical arrays, while shipboard and airborne radars more often employ conformal arrays. These non-planar arrays may be referred to collectively as curved surface arrays. For a curved array, adopting imaging-based detection can greatly improve detection efficiency.
The rapid imaging method suitable for passive imaging and active imaging and the cylindrical scanning microwave imaging method previously filed by the inventor are applicable only to planar arrays and cylindrical arrays and cannot be applied to a general curved array. A fast imaging technique suitable for general curved surface arrays therefore needs to be developed to solve the problem of rapid imaging with such arrays.
Disclosure of Invention
The invention provides a solution for applying the virtual lens imaging technique to curved-array imaging.
As shown in FIG. 1, a coordinate system of the curved array imaging system is established, where P is the target, Q is the image of the target, and the center of the curved array lies in the plane z = 0.
Without loss of generality, the distance error from a curved-array element to the ideal array plane is denoted Δ_z, where the ideal array plane is the plane that passes through the center of the curved array and is perpendicular to the array normal. With the element coordinates written as (x, y, z), we have Δ_z = z: Δ_z is positive for z > 0 and negative for z < 0.
The propagation phase shifts introduced over the one-way propagation paths R_1 and R_2 are:

φ_1 = -k·R_1,  φ_2 = -k·R_2,

where φ_1 is the propagation phase shift from the scattering source P to the array element, φ_2 is the propagation phase shift from the array element to the image point Q, k = 2π/λ is the wave number, U is the object distance, V is the image distance, (ζ, ξ) are the scattering source coordinates, (x, y, z) are the array element coordinates, and (δ, σ) are the image point coordinates.
Neglecting the constant components of φ_1 and φ_2 that do not contribute to imaging and keeping only the coordinate-dependent variation components useful for imaging focus:

φ_1 ≈ -k[ ((x - ζ)² + (y - ξ)²) / (2U) + Δ_z ],
φ_2 ≈ -k[ ((x - δ)² + (y - σ)²) / (2V) - Δ_z ].
Equating the curved array to a lens of focal length F, the effective phase shift of a lens element is:

φ_L = k(x² + y²) / (2F),

where φ_L is the lens phase shift at the array element and F is the focal length.
In passive imaging, the antenna element transmits no probe signal and only receives the scattered signal of the target. After reception, the element re-radiates the signal as a spherical wave; after propagating over the paths R_1 and R_2 and passing through the lens-element phase shift, the field strength reaching the image plane is:

E(δ, σ) = ∬ A(x, y) · e^{j(φ_1 + φ_L + φ_2)} dx dy.
in holographic imaging, a signal is transmitted from an antenna unit, is reflected to a target and then is received by the antenna unit, and the distance traveled by the signal is R1Corresponding to a phase delay of 2 phi1. In the imaging process, it is necessary to shift the phase of the lens unit, R2The propagation phase shift is processed in two passes: the receiving and transmitting antenna units sequentially transmit detection signals, and the signals reflected by the target P are subjected to secondary scattering in the form of spherical waves after reaching the receiving and transmitting antenna units and then pass through different transmission paths R1、R2The field strength at the image plane after the two-way phase shift is:
Figure BDA0003325982640000026
Comparing the imaging equations of the two cases, they can be unified by introducing an auxiliary object selectivity parameter η:

E(δ, σ) = ∬ A(x, y) · e^{jη(φ_1 + φ_L + φ_2)} dx dy,

where η = 1 applies to passive imaging and η = 2 applies to active holographic imaging.
In actual imaging, only the following processing needs to be performed:

I(δ, σ) = ∬ E(x, y) · A(x, y) · e^{jη(φ_L + φ_2)} dx dy,

where E is the target signal received by the antenna element and A is an amplitude weighting coefficient. Substituting the expressions for φ_L and φ_2 and simplifying gives:

I(δ, σ) = ∬ E(x, y) · A(x, y) · e^{jφ_F} · e^{jφ_C} · e^{jηk(xδ + yσ)/V} · e^{-jηk(δ² + σ²)/(2V)} dx dy,

where

φ_F = (ηk/2)(x² + y²)(1/F - 1/V),
φ_C = ηkΔ_z.
When the imaging condition

1/U + 1/V = 1/F

is satisfied, the focusing phase φ_F cancels the quadratic phase carried by the received signal; dropping the unit-magnitude phase factor that depends only on (δ, σ), the above equation simplifies to:

I(δ, σ) = ∬ E(x, y) · A(x, y) · e^{jφ_F} · e^{jφ_C} · e^{jηk(xδ + yσ)/V} dx dy.
For a practical discrete system, let (x_m, y_n, z_mn) denote the coordinates of array element (m, n); its distance error from the ideal array plane is

Δ_z^mn = z_mn,

where m and n are the element indices in the x and y directions respectively. Discretizing the above formula gives:

I(δ, σ) = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} E_mn · A_mn · exp(jφ_F^mn) · exp(jφ_C^mn) · exp(jηk(x_m·δ + y_n·σ)/V),
where exp denotes the exponential function with Euler's number e as its base, E_mn is the signal received by array element (m, n), A_mn is the element amplitude weighting coefficient, φ_F^mn is the element focusing phase weighting coefficient and φ_C^mn is the element phase compensation coefficient:

φ_F^mn = (ηk/2)(x_m² + y_n²)(1/F - 1/V),
φ_C^mn = ηkΔ_z^mn.
Let x_m = x_0 + mΔ_x and y_n = y_0 + nΔ_y. The formula can then be rearranged as:

I(δ, σ) = C(δ, σ) · Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} E_mn · A_mn · e^{jφ_F^mn} · e^{jφ_C^mn} · e^{jηk(mΔ_x·δ + nΔ_y·σ)/V},

where the coefficient on the right-hand side,

C(δ, σ) = e^{jηk(x_0·δ + y_0·σ)/V},

satisfies |C(δ, σ)| = 1; it only reflects a spatial phase fluctuation of the image field, has essentially no influence on the imaging result and can be ignored. Ignoring this coefficient, the summation can be evaluated with a fast algorithm typified by the inverse fast Fourier transform (IFFT), and the image field is computed as:
I(δ, σ) = ℱ{ E · A · e^{jφ_F} · e^{jφ_C} · e^{jφ_S} },

where I(δ, σ) is the image field, the symbol ℱ denotes an efficient fast-algorithm function that may be implemented with an inverse fast Fourier transform (IFFT) or a fast Fourier transform (FFT), E is the signal received by the array, A is the array amplitude weighting coefficient, φ_F is the array focusing phase weighting coefficient, φ_C is the array phase compensation coefficient, and φ_S is the array scanning phase weighting coefficient (introduced below to steer the central viewing direction).
The spatial spectrum coordinates ω_δ, ω_σ corresponding to the above result take values in ω_δ ∈ [0, 2π], ω_σ ∈ [0, 2π]. After an fftshift operation the ranges become ω_δ ∈ [-π, π], ω_σ ∈ [-π, π]; the image then conforms to the actual distribution and has a good linear mapping relation with the source field:

I′(δ, σ) = fftshift{ I(δ, σ) },

where I′(δ, σ) is the image conforming to the actual distribution.
Combining array antenna theory, the spatial spectrum ω_δ, ω_σ is corrected as ω_δ = ηkΔ_x·sinθ_δ and ω_σ = ηkΔ_y·sinθ_σ.
Since the repetition period of the discrete FFT is 2π, the condition for no image aliasing is:

|ω| ≤ π.

Let the element pitch be Δ, so that:

ηkΔ·|sinθ| ≤ π.

Typically the effective range of θ is [-π/2, π/2], and the condition that guarantees the above inequality always holds is:

ηkΔ ≤ π.

The half-space no-aliasing condition is therefore:

Δ ≤ λ / (2η),

i.e. Δ ≤ λ/2 for passive imaging (η = 1) and Δ ≤ λ/4 for active holographic imaging (η = 2).
The scan-angle coordinates of the image point are corrected using array antenna theory:

θ_δ = arcsin( ω_δ / (ηkΔ_x) ),  θ_σ = arcsin( ω_σ / (ηkΔ_y) ).
the target slant distance R is used for replacing an object distance parameter U, so that the imaging performance under the condition of a large angle can be improved:
Figure BDA0003325982640000052
On the basis of the above, the invention provides a curved surface array rapid imaging method. Based on the lens imaging principle and combined with electromagnetic field theory, the method takes the target signals received by the antenna array, applies amplitude and phase weighting to the element signals, and obtains the image field distribution corresponding to the target with an efficient parallel algorithm. The specific algorithm is:

I(δ, σ) = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} E_mn · A_mn · e^{jφ_F^mn} · e^{jφ_C^mn} · e^{jφ_S^mn} · e^{jηk(x_m·δ + y_n·σ)/V},

where j is the imaginary unit, e is Euler's number, I(δ, σ) is the image field distribution, E_mn is the target signal received by array element (m, n), A_mn is the element amplitude weighting coefficient, φ_F^mn is the element focusing phase weighting coefficient, φ_C^mn is the element phase compensation coefficient, φ_S^mn is the element scanning phase weighting coefficient, M is the number of array elements in the x direction, N is the number of array elements in the y direction, (x_m, y_n) are the element coordinates, (δ, σ) are the image point coordinates, V is the image distance, i.e. the distance from the image plane to the array plane, η is the object selectivity parameter whose value is chosen according to the characteristics of the imaging system, m and n are the element indices in the x and y directions, k = 2π/λ is the wave number, λ is the wavelength, and the symbol Σ denotes the summation operation.
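As a concrete illustration of this formula, the following NumPy sketch applies the amplitude, focusing, compensation and scanning weights to a matrix of element signals and evaluates the double sum with a two-dimensional IFFT. The function name, the uniform amplitude taper and the choice of focusing at the object distance U are illustrative assumptions rather than prescriptions from the text:

```python
import numpy as np

def curved_array_image(E, x, y, z, wavelength, U, V, eta=1,
                       theta_zeta=0.0, theta_xi=0.0, A=None):
    """Sketch of the curved-array fast imaging sum: weight the element
    signals, then evaluate the Fourier-type kernel with a 2-D IFFT."""
    k = 2.0 * np.pi / wavelength                 # wave number k = 2*pi/lambda
    M, N = E.shape
    if A is None:
        A = np.ones((M, N))                      # uniform amplitude weighting
    dx, dy = x[1] - x[0], y[1] - y[0]            # element pitch in x and y
    xm, yn = np.meshgrid(x, y, indexing="ij")    # element coordinates (M x N)

    phi_F = eta * k * (xm**2 + yn**2) / (2.0 * U)   # focusing phase (autofocus form, R = U)
    phi_C = eta * k * z                             # curvature compensation, delta_z = z
    phi_x = -eta * k * dx * np.sin(theta_zeta)      # scan phase step between adjacent elements in x
    phi_y = -eta * k * dy * np.sin(theta_xi)        # scan phase step between adjacent elements in y
    m_idx = np.arange(M)[:, None]
    n_idx = np.arange(N)[None, :]
    phi_S = m_idx * phi_x + n_idx * phi_y           # beam-scanning phase weighting

    weighted = E * A * np.exp(1j * (phi_F + phi_C + phi_S))
    # The kernel exp(j*eta*k*(x_m*delta + y_n*sigma)/V) is a uniform 2-D DFT
    # kernel, so the double sum over m, n is evaluated by an inverse FFT;
    # fftshift re-orders the spatial spectrum from [0, 2*pi) to [-pi, pi).
    return np.fft.fftshift(np.fft.ifft2(weighted))
```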
Further, the method of the present invention is applicable to different imaging systems by selecting different values of the parameter η, specifically:
when η = 1, the method is suitable for passive imaging systems, semi-active imaging systems and conventional phased array systems;
when η = 2, the method is suitable for active holographic imaging systems and synthetic aperture imaging systems.
Further, the method of the invention comprises the following steps:
step one: carrying out amplitude weighting on the array unit signals to reduce side lobe levels;
step two: carrying out focusing phase weighting on the array unit signals to realize imaging focusing;
step three: performing curved surface array phase compensation on the array unit signals to improve imaging performance;
step four: carrying out beam scanning phase weighting on the array unit signals to adjust the central visual angle direction of the imaging system;
step five: performing rapid imaging processing on the array unit signals by adopting an efficient parallel algorithm;
step six: resolving the image field coordinates and performing coordinate inversion on the image field to obtain the position of the real target.
Further, the amplitude weighting methods in step one of the method include, but are not limited to, uniform distribution, cosine weighting, Hamming window, Taylor distribution, Chebyshev distribution and hybrid weighting methods.
Further, in step two of the method, focusing phase weighting is applied to the array element signals to realize imaging focusing, wherein:
the focusing phase for autofocus phase weighting is calculated as

φ_F^mn = ηk(x_m² + y_n²) / (2R),

where R is the target slant distance, i.e. the distance from the target to the array center;
the focusing phase for zoom or fixed-focus phase weighting is calculated as

φ_F^mn = (ηk/2)(x_m² + y_n²)(1/F - 1/V),

where F is the focal length, V is the image distance, i.e. the distance from the image plane to the plane of the receiving array, and F < U, F < V.
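For reference, a small helper covering the two focusing variants of step two; the closed forms mirror the definitions above (R the target slant distance, F and V the focal length and image distance) and should be read as reconstructed, assumed expressions rather than the patent's exact equations:

```python
import numpy as np

def focus_phase(xm, yn, wavelength, eta=1, R=None, F=None, V=None):
    """Focusing phase weighting of step two (per-element coordinates xm, yn)."""
    k = 2.0 * np.pi / wavelength
    if R is not None:
        # autofocus: focus exactly on a target at slant distance R
        return eta * k * (xm**2 + yn**2) / (2.0 * R)
    # zoom / fixed focus: equivalent lens with focal length F, image distance V
    return 0.5 * eta * k * (xm**2 + yn**2) * (1.0 / F - 1.0 / V)
```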
Further, in step three of the method, curved-surface-array phase compensation is applied to the array element signals to improve imaging performance, wherein the element phase compensation coefficient is calculated as

φ_C^mn = ηkΔ_z^mn,

where Δ_z^mn is the distance error of curved-array element (m, n) from the ideal array plane, the ideal array plane being the plane that passes through the center of the curved array and is perpendicular to the normal direction of the curved array.
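A sketch of step three follows; only the relation φ_C = ηkΔ_z comes from the text, while the spherical-cap helper used to generate the element offsets Δ_z is an assumed example geometry:

```python
import numpy as np

def spherical_cap_offsets(x, y, radius):
    """Example curved geometry: offsets delta_z of the elements of a spherical
    cap (given radius) from the ideal tangent array plane."""
    xm, yn = np.meshgrid(x, y, indexing="ij")
    return radius - np.sqrt(radius**2 - xm**2 - yn**2)

def compensation_phase(delta_z, wavelength, eta=1):
    """Curved-array phase compensation coefficient phi_C = eta * k * delta_z."""
    k = 2.0 * np.pi / wavelength
    return eta * k * delta_z
```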
Furthermore, in step four of the method, the scanning phase weighting adjusts the central view direction of the imaging system; the scanning phase is calculated as

φ_S^mn = m·φ_x + n·φ_y,

where φ_x and φ_y are the phase differences between adjacent array elements in the x and y directions respectively:

φ_x = -ηkΔ_x·sinθ_ζ,
φ_y = -ηkΔ_y·sinθ_ξ,

where Δ_x is the array element pitch in the x direction, Δ_y is the array element pitch in the y direction, and θ_ζ, θ_ξ are the x and y scan-angle coordinates when the central view direction points to the source coordinates (ζ, ξ):

θ_ζ = arctan(ζ / U),
θ_ξ = arctan(ξ / U),

where U is the object distance, i.e. the distance from the target plane to the array plane.
Furthermore, in step five of the method, an efficient parallel algorithm is adopted to perform rapid imaging processing on the array element signals. The efficient parallel algorithms include two-dimensional or three-dimensional FFT, IFFT, non-uniform FFT and sparse FFT, and the calculation formula is:

I(δ, σ) = ℱ{ E · A · e^{jφ_F} · e^{jφ_C} · e^{jφ_S} },

where I(δ, σ) is the image field, the symbol ℱ denotes the efficient parallel algorithm function, E is the signal received by the array, A is the array amplitude weighting coefficient, φ_F is the array focusing phase weighting coefficient, φ_C is the array phase compensation coefficient, and φ_S is the array scanning phase weighting coefficient.

The spatial spectrum ω_δ, ω_σ corresponding to the image field result takes values in ω_δ ∈ [0, 2π], ω_σ ∈ [0, 2π]; after an fftshift operation the ranges become ω_δ ∈ [-π, π], ω_σ ∈ [-π, π], and the image then conforms to the actual distribution:

I′(δ, σ) = fftshift{ I(δ, σ) }.
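For example, the spatial-spectrum axes and the fftshift re-ordering of step five can be set up as follows (the grid size and placeholder signals are illustrative):

```python
import numpy as np

M, N = 64, 64                                           # illustrative image-field grid size
# Raw IFFT bins correspond to omega in [0, 2*pi); fftshift re-orders both the
# axes and the image field so that omega runs over [-pi, pi).
omega_delta = np.fft.fftshift(2.0 * np.pi * np.fft.fftfreq(M))
omega_sigma = np.fft.fftshift(2.0 * np.pi * np.fft.fftfreq(N))

weighted = np.ones((M, N), dtype=complex)               # placeholder weighted element signals
I_shifted = np.fft.fftshift(np.fft.ifft2(weighted))     # image conforming to the actual distribution
```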
Further, in step six of the method, the coordinates of the image field obtained by the efficient parallel algorithm are calculated and coordinate inversion is performed on the image field to obtain the position of the real target, wherein:
for IFFT-class efficient parallel algorithms, the image-field scan-angle coordinates are calculated as

θ_δ = arcsin( ω_δ / (ηkΔ_x) ),
θ_σ = arcsin( ω_σ / (ηkΔ_y) );

for FFT-class efficient parallel algorithms, the image-field scan-angle coordinates are calculated as

θ_δ = -arcsin( ω_δ / (ηkΔ_x) ),
θ_σ = -arcsin( ω_σ / (ηkΔ_y) );

the rectangular coordinates of the image are

δ = V·tanθ_δ,
σ = V·tanθ_σ;

and the coordinate inversion formulas for the real target are

ζ = -U·tanθ_δ,
ξ = -U·tanθ_σ.
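A sketch of the step-six coordinate resolution, using the IFFT-class sign convention; the negative signs in the target inversion follow the reconstructed formulas above and are assumptions:

```python
import numpy as np

def invert_coordinates(omega_delta, omega_sigma, dx, dy, wavelength, U, V, eta=1):
    """Map (fftshifted) spatial-spectrum coordinates of an image peak to scan
    angles, image-plane coordinates and the real target position."""
    k = 2.0 * np.pi / wavelength
    theta_d = np.arcsin(omega_delta / (eta * k * dx))   # image-field scan angles (argument within [-1, 1] assumed)
    theta_s = np.arcsin(omega_sigma / (eta * k * dy))
    delta, sigma = V * np.tan(theta_d), V * np.tan(theta_s)   # image rectangular coordinates
    zeta, xi = -U * np.tan(theta_d), -U * np.tan(theta_s)     # inverted real-target coordinates
    return (delta, sigma), (zeta, xi)
```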
Further, in the method of the invention the transmit/receive antenna element spacing is set to satisfy

Δ_x ≤ λ / (2η),  Δ_y ≤ λ / (2η)

to avoid image aliasing.
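A one-line check of this spacing bound, assuming the reconstructed form Δ ≤ λ/(2η):

```python
def spacing_avoids_aliasing(dx, dy, wavelength, eta=1):
    """True if the element spacings satisfy the half-space no-aliasing bound
    lambda/(2*eta), i.e. lambda/2 for eta = 1 and lambda/4 for eta = 2."""
    limit = wavelength / (2.0 * eta)
    return dx <= limit and dy <= limit
```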
In addition, the curved surface array rapid imaging method can also be used for long-range imaging, as follows: when U = ∞, φ_F = 0, and the simplified formula suitable for long-range imaging is:

I(δ, σ) = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} E_mn · A_mn · e^{jφ_C^mn} · e^{jφ_S^mn} · e^{jηk(x_m·δ + y_n·σ)/V}.

The image field is calculated with the efficient parallel algorithm described above, and the target distribution over a wide viewing-angle range is obtained in a single operation.
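The long-range case drops only the focusing weight, so a sketch needs nothing beyond the compensation and scanning phases computed earlier (helper names are illustrative):

```python
import numpy as np

def far_field_image(E, A, phi_C, phi_S):
    """Long-range (U -> infinity) imaging sketch: phi_F = 0, so only the
    compensation and scanning weights are applied before the 2-D IFFT."""
    weighted = E * A * np.exp(1j * (phi_C + phi_S))
    return np.fft.fftshift(np.fft.ifft2(weighted))
```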
Note: parameters written with the subscript mn denote the data of the m-th, n-th element and apply to discrete systems; parameters written without the subscript mn denote the data of the whole array, are not restricted to a particular element, and apply to continuous systems. The two notations differ only as individual versus whole-array quantities.
Meanwhile, the invention also relates to the application of the method in the fields of optical imaging, microwave imaging, radar detection, sonar, ultrasonic imaging, target detection based on sound, light and electricity, imaging identification and wireless communication.
In conclusion, the curved surface array rapid imaging method has the following advantages:
1) A rapid imaging method suitable for curved surface arrays is created.
The invention realizes low-cost, fast curved-array imaging. The computational load of the method is far lower than that of active holographic imaging, digital beamforming and traditional synthetic aperture systems, so hardware resources can be greatly saved and the imaging speed increased.
2) A curved surface array imaging method compatible with passive imaging and active imaging is established.
With this method, passive imaging can be used for ultra-fast scanning of targets; when a suspicious object is found, active imaging can be used to observe its details. The two imaging modes can share one signal processing system, which greatly reduces hardware cost, increases scanning speed and brings great convenience to practical applications.
In addition, the method has good application prospects and can be widely applied in target detection and wireless communication technologies that use sound, light, electricity and the like as media. When the detection medium is electromagnetic waves, the technique is suitable for microwave imaging, radar detection, wireless communication, synthetic aperture radar and inverse synthetic aperture radar; when the detection medium is sound waves or ultrasonic waves, it is suitable for sonar, ultrasonic imaging and synthetic aperture sonar; when the detection medium is light, it is suitable for optical imaging and synthetic aperture optical imaging.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a schematic diagram of an imaging system coordinate system of the curved array imaging method of the present invention.
FIG. 2 is an algorithm block diagram of the curved array imaging method of the present invention.
FIG. 3 is a graph of the results of a curved array imaging simulation performed using the imaging method of the present invention, wherein: (a) is an electromagnetic simulation model, and (b) is a curved surface array imaging result.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention are described in detail and completely below with reference to the embodiments and the accompanying drawings. It should be understood that the described embodiments are merely some, rather than all, embodiments of the present invention; the invention may also be embodied or carried out in various other specific forms, and various modifications and changes may be made to the details of this specification without departing from the spirit of the invention.
Also, it should be understood that the scope of the invention is not limited to the particular embodiments described below; it is also to be understood that the terminology used in the examples is for the purpose of describing particular embodiments only, and is not intended to limit the scope of the present invention.
Example 1: a curved surface array rapid imaging method (see FIGS. 1-2). Based on the lens imaging principle and combined with electromagnetic field theory, the method takes the target signals received by the antenna array, applies amplitude and phase weighting to the element signals, and obtains the image field distribution corresponding to the target with an efficient parallel algorithm. The specific algorithm is:

I(δ, σ) = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} E_mn · A_mn · e^{jφ_F^mn} · e^{jφ_C^mn} · e^{jφ_S^mn} · e^{jηk(x_m·δ + y_n·σ)/V},

where j is the imaginary unit, e is Euler's number, I(δ, σ) is the image field distribution, E_mn is the target signal received by array element (m, n), A_mn is the element amplitude weighting coefficient, φ_F^mn is the element focusing phase weighting coefficient, φ_C^mn is the element phase compensation coefficient, φ_S^mn is the element scanning phase weighting coefficient, M is the number of array elements in the x direction, N is the number of array elements in the y direction, (x_m, y_n) are the element coordinates, (δ, σ) are the image point coordinates, V is the image distance, i.e. the distance from the image plane to the array plane, η is the object selectivity parameter whose value is chosen according to the characteristics of the imaging system (when η = 1, the method is suitable for passive imaging systems, semi-active imaging systems and conventional phased array systems; when η = 2, the method is suitable for active holographic imaging systems and synthetic aperture imaging systems), m and n are the element indices in the x and y directions, k = 2π/λ is the wave number, λ is the wavelength, and the symbol Σ denotes the summation operation.
Specifically, the present imaging method includes the steps of:
step one: carrying out amplitude weighting on the array unit signals to reduce side lobe levels;
methods of amplitude weighting include, but are not limited to, uniform distribution, cosine weighting, hamming window, Taylor distribution, chebyshev distribution, and hybrid weighting methods.
Step two: carrying out focusing phase weighting on the array unit signals to realize imaging focusing;
wherein: the focusing phase for autofocus phase weighting is calculated as

φ_F^mn = ηk(x_m² + y_n²) / (2R),

where R is the target slant distance, i.e. the distance from the target to the array center;
the focusing phase for zoom or fixed-focus phase weighting is calculated as

φ_F^mn = (ηk/2)(x_m² + y_n²)(1/F - 1/V),

where F is the focal length, V is the image distance, i.e. the distance from the image plane to the plane of the receiving array, and F < U, F < V.
Step three: performing curved surface array phase compensation on the array unit signals to improve imaging performance;
The element phase compensation coefficient is calculated as

φ_C^mn = ηkΔ_z^mn,

where Δ_z^mn is the distance error of the curved-array element from the ideal array plane, the ideal array plane being the plane that passes through the center of the curved array and is perpendicular to the normal direction of the curved array.
Step four: carrying out beam scanning phase weighting on the array unit signals to adjust the central visual angle direction of the imaging system;
The scanning phase is calculated as

φ_S^mn = m·φ_x + n·φ_y,

where φ_x and φ_y are the phase differences between adjacent array elements in the x and y directions respectively:

φ_x = -ηkΔ_x·sinθ_ζ,
φ_y = -ηkΔ_y·sinθ_ξ,

where Δ_x is the array element pitch in the x direction, Δ_y is the array element pitch in the y direction, and θ_ζ, θ_ξ are the x and y scan-angle coordinates when the central view direction points to the source coordinates (ζ, ξ):

θ_ζ = arctan(ζ / U),
θ_ξ = arctan(ξ / U),

where U is the object distance, i.e. the distance from the target plane to the array plane.
Step five: performing rapid imaging processing on the array unit signals by adopting an efficient parallel algorithm;
The efficient parallel algorithms include two-dimensional or three-dimensional FFT, IFFT, non-uniform FFT and sparse FFT, and the calculation formula is:

I(δ, σ) = ℱ{ E · A · e^{jφ_F} · e^{jφ_C} · e^{jφ_S} },

where I(δ, σ) is the image field, the symbol ℱ denotes the efficient parallel algorithm function, E is the signal received by the array, A is the array amplitude weighting coefficient, φ_F is the array focusing phase weighting coefficient, φ_C is the array phase compensation coefficient, and φ_S is the array scanning phase weighting coefficient.

The spatial spectrum ω_δ, ω_σ corresponding to the image field result takes values in ω_δ ∈ [0, 2π], ω_σ ∈ [0, 2π]; after an fftshift operation the ranges become ω_δ ∈ [-π, π], ω_σ ∈ [-π, π], and the image then conforms to the actual distribution:

I′(δ, σ) = fftshift{ I(δ, σ) }.
step six: resolving an image field coordinate, and performing coordinate inversion on the image field to obtain the position of a real target;
wherein: for IFFT-class efficient parallel algorithms, the image-field scan-angle coordinates are calculated as

θ_δ = arcsin( ω_δ / (ηkΔ_x) ),
θ_σ = arcsin( ω_σ / (ηkΔ_y) );

for FFT-class efficient parallel algorithms, the image-field scan-angle coordinates are calculated as

θ_δ = -arcsin( ω_δ / (ηkΔ_x) ),
θ_σ = -arcsin( ω_σ / (ηkΔ_y) );

the rectangular coordinates of the image are

δ = V·tanθ_δ,
σ = V·tanθ_σ;

and the coordinate inversion formulas for the real target are

ζ = -U·tanθ_δ,
ξ = -U·tanθ_σ.
In addition, the transmit/receive antenna element spacing is set to satisfy

Δ_x ≤ λ / (2η),  Δ_y ≤ λ / (2η)

to avoid image aliasing.
Example 2: test for verifying imaging Effect of the present imaging method (method of example 1)
The test conditions are as follows: the working frequency is 10GHz, the array is a hemispherical array with the radius of 1m, the unit interval is half wavelength, and the target is a V-shaped metal object, which is shown in the attached figure 3 (a); during the test, the V-shaped metal object is irradiated by plane waves, the field distribution of the array units is calculated, then the method is adopted to carry out the curved surface array imaging, and the imaging result is shown in the attached figure 3 (b).
Example 3: a curved-array rapid imaging method for long-range imaging, comprising: when U = ∞, φ_F = 0, and the simplified formula suitable for long-range imaging is:

I(δ, σ) = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} E_mn · A_mn · e^{jφ_C^mn} · e^{jφ_S^mn} · e^{jηk(x_m·δ + y_n·σ)/V}.

The image field is calculated with the efficient parallel algorithm of the method of Example 1, and the distribution of all targets within a wide viewing-angle range can be obtained in a single operation.
The embodiments of the present invention are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other.
The above description is only an example of the present invention, and is not intended to limit the present invention. Various modifications and alterations to this invention will become apparent to those skilled in the art. Any modification, replacement, or the like that comes within the spirit and principle of the present invention should be included in the scope of the claims of the present invention.

Claims (12)

1. A curved surface array rapid imaging method, characterized in that the method, based on the lens imaging principle and combined with electromagnetic field theory, takes the target signals received by an antenna array, applies amplitude and phase weighting to the element signals, and obtains the image field distribution corresponding to the target with an efficient parallel algorithm, the specific algorithm being:

I(δ, σ) = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} E_mn · A_mn · e^{jφ_F^mn} · e^{jφ_C^mn} · e^{jφ_S^mn} · e^{jηk(x_m·δ + y_n·σ)/V},

wherein j is the imaginary unit, e is Euler's number, I(δ, σ) is the image field distribution, E_mn is the target signal received by the array element, A_mn is the element amplitude weighting coefficient, φ_F^mn is the element focusing phase weighting coefficient, φ_C^mn is the element phase compensation coefficient, φ_S^mn is the element scanning phase weighting coefficient, M is the number of array elements in the x direction, N is the number of array elements in the y direction, (x_m, y_n) are the element coordinates, (δ, σ) are the image point coordinates, V is the image distance, i.e. the distance from the image plane to the array plane, η is the object selectivity parameter whose value is chosen according to the characteristics of the imaging system, m and n are the element indices in the x and y directions, k = 2π/λ is the wave number, λ is the wavelength, and the symbol Σ denotes the summation operation.
2. The method according to claim 1, characterized in that it is applicable to different imaging systems by selecting different values of the parameter η, specifically:
when η = 1, the method is suitable for passive imaging systems, semi-active imaging systems and conventional phased array systems;
when η = 2, the method is suitable for active holographic imaging systems and synthetic aperture radar systems.
3. Method according to claim 2, characterized in that it comprises the following steps:
step one: carrying out amplitude weighting on the array unit signals to reduce side lobe levels;
step two: carrying out focusing phase weighting on the array unit signals to realize imaging focusing;
step three: performing curved surface array phase compensation on the array unit signals to improve imaging performance;
step four: carrying out beam scanning phase weighting on the array unit signals to adjust the central visual angle direction of the imaging system;
step five: performing rapid imaging processing on the array unit signals by adopting an efficient parallel algorithm;
step six: resolving the image field coordinates and performing coordinate inversion on the image field to obtain the position of the real target.
4. The method of claim 3, wherein the amplitude weighting method in step one comprises uniform distribution, cosine weighting, Hamming window, Taylor distribution, Chebyshev distribution and hybrid weighting method.
5. The method of claim 3, wherein the focusing phase weighting is performed on the array element signals in step two to realize imaging focusing, wherein:
the focusing phase for autofocus phase weighting is calculated as

φ_F^mn = ηk(x_m² + y_n²) / (2R),

wherein R is the target slant distance, i.e. the distance from the target to the array center;
the focusing phase for zoom or fixed-focus phase weighting is calculated as

φ_F^mn = (ηk/2)(x_m² + y_n²)(1/F - 1/V),

wherein F is the focal length, V is the image distance, i.e. the distance from the image plane to the plane of the receiving array, and F < U, F < V.
6. The method of claim 3, wherein the step three is a curved-surface array phase compensation for the array unit signals to improve imaging performance, wherein:
the element phase compensation coefficient is calculated as

φ_C^mn = ηkΔ_z^mn,

wherein Δ_z^mn is the distance error of the curved-array element from the ideal array plane, the ideal array plane being the plane that passes through the center of the curved array and is perpendicular to the normal direction of the curved array.
7. The method of claim 3, wherein the scan phase weighting adjusts the central view direction of the imaging system in step four, and the phase calculation formula of the scan phase weighting is:
φ_S^mn = m·φ_x + n·φ_y,

wherein φ_x and φ_y are the phase differences between adjacent array elements in the x and y directions respectively, calculated as:

φ_x = -ηkΔ_x·sinθ_ζ,
φ_y = -ηkΔ_y·sinθ_ξ,

wherein Δ_x is the array element pitch in the x direction, Δ_y is the array element pitch in the y direction, and θ_ζ, θ_ξ are the x and y scan-angle coordinates when the central view direction points to the source coordinates (ζ, ξ), calculated as:

θ_ζ = arctan(ζ / U),
θ_ξ = arctan(ξ / U),

wherein U is the object distance, i.e. the distance from the target plane to the ideal array plane.
8. The method according to claim 3, wherein in step five, the array unit signals are subjected to fast imaging processing by using an efficient parallel algorithm; the efficient parallel algorithm comprises two-dimensional or three-dimensional FFT, IFFT, non-uniform FFT and sparse FFT, and the calculation formula is as follows:
I(δ, σ) = ℱ{ E · A · e^{jφ_F} · e^{jφ_C} · e^{jφ_S} },

wherein I(δ, σ) is the image field, the symbol ℱ denotes the efficient parallel algorithm function, E is the target signal received by the array, A is the array amplitude weighting coefficient, φ_F is the array focusing phase weighting coefficient, φ_C is the array phase compensation coefficient, and φ_S is the array scanning phase weighting coefficient;

the spatial spectrum ω_δ, ω_σ corresponding to the image field result takes values in ω_δ ∈ [0, 2π], ω_σ ∈ [0, 2π]; after an fftshift operation the ranges become ω_δ ∈ [-π, π], ω_σ ∈ [-π, π], and the image at this time is the image conforming to the actual distribution:

I′(δ, σ) = fftshift{ I(δ, σ) }.
9. the method of claim 3, wherein step six comprises: carrying out coordinate calculation on an image field obtained by the efficient parallel algorithm, and carrying out coordinate inversion on the image field to obtain the position of a real target; wherein:
for IFFT-class efficient parallel algorithms, the image-field scan-angle coordinates are calculated as

θ_δ = arcsin( ω_δ / (ηkΔ_x) ),
θ_σ = arcsin( ω_σ / (ηkΔ_y) );

for FFT-class efficient parallel algorithms, the image-field scan-angle coordinates are calculated as

θ_δ = -arcsin( ω_δ / (ηkΔ_x) ),
θ_σ = -arcsin( ω_σ / (ηkΔ_y) );

the rectangular coordinates of the image are calculated as

δ = V·tanθ_δ,
σ = V·tanθ_σ;

and the coordinate inversion formulas for the real target are

ζ = -U·tanθ_δ,
ξ = -U·tanθ_σ.
10. The method of claim 3, wherein the antenna element spacing is set to satisfy

Δ_x ≤ λ / (2η),
Δ_y ≤ λ / (2η)

to avoid image aliasing.
11. A curved-surface array rapid imaging method used for long-range imaging, comprising: when U = ∞, φ_F = 0, and the simplified formula suitable for long-range imaging is:

I(δ, σ) = Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} E_mn · A_mn · e^{jφ_C^mn} · e^{jφ_S^mn} · e^{jηk(x_m·δ + y_n·σ)/V};

the image field is calculated with the efficient parallel algorithm according to claim 8, and the target distribution within a wide viewing-angle range is obtained in a single operation.
12. Use of the method according to any one of claims 1 to 11 in the fields of optical imaging, microwave imaging, radar detection, sonar, ultrasonic imaging, acoustic, optical and electrical target detection, imaging recognition and wireless communication.
CN202111264008.5A 2021-10-28 2021-10-28 Curved surface array rapid imaging method Pending CN113835222A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111264008.5A CN113835222A (en) 2021-10-28 2021-10-28 Curved surface array rapid imaging method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111264008.5A CN113835222A (en) 2021-10-28 2021-10-28 Curved surface array rapid imaging method

Publications (1)

Publication Number Publication Date
CN113835222A true CN113835222A (en) 2021-12-24

Family

ID=78966391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111264008.5A Pending CN113835222A (en) 2021-10-28 2021-10-28 Curved surface array rapid imaging method

Country Status (1)

Country Link
CN (1) CN113835222A (en)

Similar Documents

Publication Publication Date Title
CN113820711B (en) Array rapid imaging method and application thereof
CN113848546B (en) Rapid imaging method suitable for passive imaging and active imaging
CN108828593B (en) Random radiation radar correlation imaging method
CN108828603B (en) Cross-based sparse optimization method for three-dimensional imaging sonar array
CN109581388B (en) Near-field wide-view-angle beam forming method of real-time three-dimensional imaging sonar
CN113933834B (en) Cylindrical scanning microwave imaging method
CN113848547B (en) Digital holographic fast imaging method
CN110412587B (en) Deconvolution-based downward-looking synthetic aperture three-dimensional imaging method and system
CN111796279B (en) Passive electromagnetic vortex SAR (synthetic aperture radar) azimuth super-resolution imaging method and device
CN113917461B (en) MIMO radar imaging method and system
CN109884627B (en) Short-range millimeter wave rapid three-dimensional imaging method of any linear array configuration
Chi et al. High-resolution real-time underwater 3-D acoustical imaging through designing ultralarge ultrasparse ultra-wideband 2-D arrays
CN114002664A (en) Sum and difference beam imaging target detection and accurate angle measurement method
CN110554383B (en) MIMO annular array azimuth imaging method and device for microwave frequency band
CN113835222A (en) Curved surface array rapid imaging method
US11754973B2 (en) Fast imaging method suitable for passive imaging and active imaging
CN115201821A (en) Small target detection method based on strong target imaging cancellation
CN1773307A (en) Small size antenna array aperture expanding and space signal processing method
CN114994668A (en) Semi-holographic curved surface array rapid imaging method
CN113848552A (en) Three-dimensional image imaging method and device, storage medium and electronic equipment
CN114994666A (en) Semi-holographic cylindrical scanning imaging method
CN114994667A (en) Rapid imaging method of semi-holographic array
CN114966675B (en) MIMO array rapid imaging method based on distance compensation
Wang et al. Design of a low-complexity miniature underwater three-dimensional acoustical imaging system
CN117572435B (en) Deconvolution-based multi-beam synthetic aperture sonar high-resolution imaging method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination