CN116047463A - Multi-angle SAR target scattering anisotropy deduction method, device, equipment and medium
- Publication number: CN116047463A (application CN202310339662.0A)
- Authority: CN (China)
- Legal status: Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/02—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
- G01S7/41—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S13/00—Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
- G01S13/88—Radar or analogous systems specially adapted for specific applications
- G01S13/89—Radar or analogous systems specially adapted for specific applications for mapping or imaging
- G01S13/90—Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
- G01S13/9021—SAR image post-processing techniques
Abstract
The invention provides a multi-angle SAR target scattering anisotropy deduction method, which relates to the technical field of radar target characteristic analysis and comprises the following steps: converting SAR multi-angle image data of a target object into a two-dimensional feature matrix with angle and pixel dimensions; extracting pixel dimension prior input information from the pixel dimension data of the two-dimensional feature matrix, and acquiring angle dimension prior input information based on a priori SAR full-angle image data; performing backscattering deduction on the pixel dimension data of the two-dimensional feature matrix based on the pixel dimension prior input information and the angle dimension prior input information to obtain a scattering deduction result; and complementing the full-angle image data of the target object based on the scattering deduction result to reconstruct the target edge information and geometric attributes of the target object. The invention also provides a multi-angle SAR target scattering anisotropy deduction device, an electronic device, and a medium.
Description
Technical Field
The invention relates to the technical field of radar target characteristic analysis, in particular to a multi-angle SAR target scattering anisotropy deduction method, a device, electronic equipment and a medium.
Background
Synthetic Aperture Radar (SAR) is a high-resolution imaging radar that can provide all-day, all-weather observations. Target scattering anisotropy characterizes the degree of response to radar illumination at different angles; in particular, a target may exhibit strong scattering at certain angles. In practice, multi-angle data are mostly acquired by circular flight, which places high demands on platform stability; current multi-angle campaigns therefore approximate the circular trajectory with several straight flight passes, but acquiring full angular coverage of a target remains costly. Modeling and analyzing the scattering characteristics of a target from limited multi-angle data is therefore of high practical value.
When target scattering anisotropy is analyzed over the full angle range, the one-dimensional backscattering curves of target points may differ across the strong-scattering regions of different structural parts; for example, the wings and the fuselage of an aircraft target may scatter strongly at different angles. Existing work focuses mainly on full-angle target scattering analysis; modeling of scattering anisotropy from limited-angle data has not yet been addressed.
Disclosure of Invention
In view of the above problems, the present invention provides a multi-angle SAR target scattering anisotropy deduction method, which uses an algorithm based on collaborative filtering to model the target's limited-angle data along the two dimensions of angle and pixel, and obtains assessed (deduced) values at unknown angles.
The first aspect of the invention provides a multi-angle SAR target scattering anisotropy deduction method, which comprises the following steps: converting SAR multi-angle image data of a target object into a two-dimensional feature matrix with angle and pixel dimensions; extracting pixel dimension prior input information from the pixel dimension data of the two-dimensional feature matrix, and acquiring angle dimension prior input information based on a priori SAR full-angle image data; performing backscattering deduction on the pixel dimension data of the two-dimensional feature matrix based on the pixel dimension prior input information and the angle dimension prior input information to obtain a scattering deduction result; and complementing the SAR full-angle image data of the target object based on the scattering deduction result so as to reconstruct the target edge information and geometric attributes of the target object.
According to an embodiment of the present invention, the extracting pixel dimension prior input information from the pixel dimension data of the two-dimensional feature matrix includes: sequentially sampling pixel dimension data in the two-dimensional feature matrix by utilizing a sliding window to obtain sampling data; and analyzing and processing the data of the target point in the sampling data through a bilateral filter to obtain pixel dimension priori input information.
According to an embodiment of the present invention, analyzing the data of the target point in the sampled data through the bilateral filter to obtain the pixel dimension prior input information includes: assigning, via the bilateral filter, a weight to each sample based on the samples' relative positional relationship; and calculating a filtered value of the target point based on these weights and recording it as the pixel dimension prior input information, wherein the target point is the sampling center point of the sliding window.
According to an embodiment of the present invention, the acquiring the angle dimension priori input information based on the a priori SAR full angle image data includes: generating a scattering curve based on the SAR full angle image data, the scattering curve representing the signal strength returned by the target object from different scattering angles; selecting one-dimensional anisotropic data from the scattering curve, wherein data corresponding to a strong scattering angle in the scattering curve does not belong to the selected range; selecting one-dimensional isotropic data from image data of an environment surrounding the target object; and taking the anisotropic data and the isotropic data as the angle dimension prior input information.
According to an embodiment of the present invention, the performing back-scatter deduction on the pixel dimension data of the two-dimensional feature matrix based on the pixel dimension prior input information and the angle dimension prior input information, to obtain a scatter deduction result includes: initializing an angle dimension feature matrix and a pixel dimension feature matrix; establishing an optimization function related to the angle dimension feature matrix and the pixel dimension feature matrix based on a collaborative filtering algorithm, wherein the input of the optimization function comprises the pixel dimension prior input information, the angle dimension prior information and the two-dimensional feature matrix; alternately minimizing and solving an angle dimension feature matrix and a pixel dimension feature matrix in the optimization function; calculating a loss function based on the angular dimension feature matrix and the pixel dimension feature matrix, and updating the angular dimension feature matrix and the pixel dimension feature matrix based on the calculated value of the loss function; repeating the steps to finish backward scattering deduction of the pixel dimension data of the two-dimensional feature matrix, and obtaining a scattering deduction result.
According to an embodiment of the present invention, the optimization function is:

    min_{W,H}  Σ_{i=1}^{f_a} Σ_{j=1}^{f_p} ( D_ij − (WH)_ij )² + λ ( ‖W − x‖² + ‖H − y‖² )

wherein W represents the angle dimension feature matrix, H represents the pixel dimension feature matrix, f_a represents the number of angle-dimension data, f_p represents the number of pixel-dimension data, k represents the number of selected angles (the common dimension of W and H), (·−·)² denotes the square of the difference of the two bracketed elements, D_ij represents the element of the two-dimensional feature matrix at position (i, j), x represents the angle dimension prior input information, y represents the pixel dimension prior input information, and λ is a regularization constraint factor.
According to an embodiment of the present invention, the complementing the SAR full angle image data of the target object based on the scattering deduction result to reconstruct target edge information and geometrical properties of the target object includes: comparing the scattering deduction result with SAR full-angle image data of the target object to obtain data curve characteristics, and correcting the SAR full-angle image data; reconstructing an image of the target object based on the SAR full angle image data to characterize target edge information of the target object; and comparing the number of pixels occupied by the target object in the reconstructed image and the SAR multi-angle image of the target object to represent the geometric attribute.
The second aspect of the present invention provides a multi-angle SAR target scattering anisotropy deduction device, comprising: the data dimension conversion module is used for converting SAR multi-angle image data of the target object into a two-dimensional feature matrix of angle dimension and pixel dimension; the dimension information extraction module is used for extracting pixel dimension priori input information from the pixel dimension data of the two-dimensional feature matrix and acquiring angle dimension priori input information based on the prior SAR full-angle image data; the target scattering deduction module is used for performing backward scattering deduction on the pixel dimension data of the two-dimensional feature matrix based on the pixel dimension priori input information and the angle dimension priori information to obtain a scattering deduction result; and the target structure reconstruction module is used for complementing SAR full-angle image data of the target object based on the scattering deduction result so as to reconstruct target edge information and geometric attributes of the target object.
A third aspect of the present invention provides an electronic device, comprising: the system comprises a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the processor realizes each step in the multi-angle SAR target scattering anisotropy deduction method according to any one of the first aspect when executing the computer program.
A fourth aspect of the present invention provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the multi-angle SAR target scattering anisotropy deduction methods of the first aspect.
At least one of the technical solutions adopted in the embodiments of the invention can achieve the following beneficial effects:
the embodiment of the invention provides a target scattering anisotropy deduction method based on multi-angle SAR, which converts a multi-angle data image from a traditional distance direction to an azimuth direction into angle-pixels, extracts corresponding two-dimensional data prior information respectively, and finally solves a two-dimensional feature matrix by utilizing a multi-label learning algorithm. According to the method, the limiting condition of analyzing the scattering characteristics of the target at the whole angle is solved by utilizing the target scattering deduction at the limited angle, meanwhile, unified modeling is carried out on target points with different strong scattering, the method is not limited to specific strong scattering angle deduction, and meanwhile, the result can give out the target edge information and the studying and judging of the geometric attribute.
Drawings
For a more complete understanding of the present invention, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1 schematically shows a flowchart of a multi-angle SAR target scattering anisotropy deduction method according to an embodiment of the present invention;
fig. 2 schematically illustrates a schematic diagram of a multi-angle SAR target scattering anisotropy deduction method according to an embodiment of the present invention;
fig. 3 schematically illustrates a block diagram of a multi-angle SAR target scattering anisotropy deduction device according to an embodiment of the present invention;
fig. 4 schematically shows a block diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to the accompanying drawings. It should be understood that the description is only illustrative and is not intended to limit the scope of the invention. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the invention. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the present invention.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It should be noted that the terms used herein should be construed to have meanings consistent with the context of the present specification and should not be construed in an idealized or overly formal manner.
Some of the block diagrams and/or flowchart illustrations are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, when executed by the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart.
Thus, the techniques of the present invention may be implemented in hardware and/or software (including firmware, microcode, etc.). Furthermore, the techniques of the present invention may take the form of a computer program product on a computer-readable medium having instructions stored thereon for use by or in connection with an instruction execution system. In the context of the present invention, a computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a computer-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the computer readable medium include: magnetic storage devices such as magnetic tape or hard disk (HDD); optical storage devices such as compact discs (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or a wired/wireless communication link.
Fig. 1 schematically shows a flowchart of a multi-angle SAR target scattering anisotropy deduction method according to an embodiment of the present invention.
As shown in FIG. 1, the multi-angle SAR target scattering anisotropy deduction method provided by the embodiment of the invention comprises S110-S140.
S110, SAR multi-angle image data of the target object are converted into a two-dimensional feature matrix of angle and pixel dimensions.
S120, extracting pixel dimension priori input information from pixel dimension data of the two-dimensional feature matrix, and acquiring angle dimension priori input information based on SAR full-angle image data of the priori.
S130, performing backward scattering deduction on the pixel dimension data of the two-dimensional feature matrix based on the pixel dimension priori input information and the angle dimension priori input information to obtain a scattering deduction result.
S140, the SAR full-angle image data of the target object is complemented based on the scattering deduction result, so that the target edge information and the geometric properties of the target object are reconstructed.
The embodiment of the invention provides a target scattering anisotropy deduction method based on multi-angle SAR, which aims to analyze the scattering characteristics of a target using limited multi-angle data and to provide the angle range corresponding to the target's strong scattering. The method converts the multi-angle image data from the traditional range-azimuth representation into an angle-pixel representation, extracts the corresponding prior information for each of the two dimensions, and finally solves the two-dimensional feature matrices using a multi-label learning algorithm to obtain deduced values at unknown angles.
The following describes a multi-angle SAR target scattering anisotropy deduction method according to an embodiment of the present invention in detail with reference to fig. 2.
Fig. 2 schematically illustrates a schematic diagram of a multi-angle SAR target scattering anisotropy deduction method according to an embodiment of the present invention.
As shown in fig. 2, in the present embodiment, S110 includes acquiring multi-angle SAR data of the target object using the azimuth multi-angle capability of radar detection, and converting the multi-angle images of the target object from the conventional range-azimuth representation of SAR images into a two-dimensional angle-pixel matrix. The angle dimension represents the different angles observed by the radar, and the pixel dimension represents the amplitude values in the images. During conversion, each angle's image is tiled (flattened) into one-dimensional data, and the rows for successive angles are stacked to form the two-dimensional matrix. The full-angle image set is the complete collection of multi-angle images, of which only a subset of angle images is used here.
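The tiling step of S110 can be sketched as follows; this is a minimal NumPy illustration under assumed array layouts and names, not the patent's implementation:

```python
import numpy as np

def to_angle_pixel_matrix(images):
    """Tile each angle's 2-D amplitude image into a row vector and stack
    the rows, one row per angle, to form the angle-pixel matrix."""
    images = np.asarray(images)
    return images.reshape(images.shape[0], -1)

# Toy stack: 5 observation angles, each a 4x3-pixel amplitude image
stack = np.random.rand(5, 4, 3)
D = to_angle_pixel_matrix(stack)   # shape (5, 12): angle dim x pixel dim
```

Each new angle simply appends another row, matching the description of the matrix growing with newly added angles.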
In S120, extracting pixel dimension prior input information from pixel dimension data of the two-dimensional feature matrix includes: sequentially sampling pixel dimension data in the two-dimensional feature matrix by utilizing a sliding window to obtain sampling data; and analyzing and processing the data of the target point in the sampling data through a bilateral filter to obtain pixel dimension priori input information.
Specifically, analyzing the data of the target point in the sampled data through the bilateral filter to obtain the pixel dimension prior input information includes: assigning, via the bilateral filter, a weight to each sample based on the samples' relative positional relationship; and calculating a filtered value of the target point based on these weights and recording it as the pixel dimension prior input information, wherein the target point is the sampling center point of the sliding window.
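The sliding-window bilateral filtering above can be sketched for one-dimensional pixel-dimension data as follows; the window half-width and the spatial/amplitude bandwidths (sigma_s, sigma_r) are illustrative assumptions:

```python
import numpy as np

def bilateral_prior(signal, half_win=2, sigma_s=1.5, sigma_r=0.1):
    """Slide a window over 1-D pixel-dimension data; at each target point
    (the window center), weight every sample by its spatial distance and
    its amplitude difference, then record the weighted mean as the prior."""
    signal = np.asarray(signal, dtype=float)
    out = np.empty_like(signal)
    for c in range(signal.size):
        lo, hi = max(0, c - half_win), min(signal.size, c + half_win + 1)
        idx = np.arange(lo, hi)
        w_spatial = np.exp(-((idx - c) ** 2) / (2 * sigma_s ** 2))
        w_range = np.exp(-((signal[idx] - signal[c]) ** 2) / (2 * sigma_r ** 2))
        w = w_spatial * w_range
        out[c] = np.sum(w * signal[idx]) / np.sum(w)
    return out

# Smooths within flat regions while preserving the amplitude edge
prior = bilateral_prior([0.10, 0.12, 0.11, 0.90, 0.88])
```

The range weight suppresses samples whose amplitude differs strongly from the target point, which is what preserves edges while denoising flat regions.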
In S120, obtaining the angle dimension prior input information based on a priori SAR full-angle image data includes: generating a scattering curve from the SAR full-angle image data, the curve representing the signal intensity returned by the target object at different scattering angles; selecting one-dimensional anisotropic data from the scattering curve, where data corresponding to strong-scattering angles are excluded from the selection; selecting one-dimensional isotropic data from image data of the target object's surroundings, where the reference isotropic data are taken from a flat area around the target so as to distinguish it from the man-made target; and using the anisotropic data and the isotropic data as the angle dimension prior input information.
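The prior selection in S120 might look like the sketch below; the target/background masks, the mean-amplitude scattering curve, and the quantile rule for excluding strong-scattering angles are all illustrative assumptions, not the patent's procedure:

```python
import numpy as np

def select_angle_priors(full_stack, target_mask, bg_mask, strong_frac=0.2):
    """Build the target's backscatter-vs-angle curve, exclude the
    strongest-scattering angles from the anisotropic prior, and take an
    isotropic prior from a flat background area."""
    curve = np.array([img[target_mask].mean() for img in full_stack])
    thresh = np.quantile(curve, 1.0 - strong_frac)
    aniso = curve[curve < thresh]          # strong-scattering angles excluded
    iso = np.array([img[bg_mask].mean() for img in full_stack])
    return curve, aniso, iso

# Toy full-angle stack: 6 angles, 4x4 images; the target occupies a 2x2 patch
stack = np.zeros((6, 4, 4))
target_mask = np.zeros((4, 4), dtype=bool)
target_mask[1:3, 1:3] = True
bg_mask = ~target_mask
for i, amp in enumerate([1, 2, 3, 4, 5, 10]):  # one strong-scattering angle
    stack[i][target_mask] = amp
    stack[i][bg_mask] = 0.5                    # flat isotropic background
curve, aniso, iso = select_angle_priors(stack, target_mask, bg_mask)
```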
In S130, back scattering deduction is carried out on pixel dimension data of the two-dimensional feature matrix based on the pixel dimension priori input information and the angle dimension priori input information, and scattering deduction results are obtained and comprise S131-S135.
S131, initializing an angle dimension feature matrix and a pixel dimension feature matrix.
And S132, establishing an optimization function on the angle dimension feature matrix and the pixel dimension feature matrix based on a collaborative filtering algorithm, wherein the input of the optimization function comprises pixel dimension priori input information, angle dimension prior information and two-dimensional feature matrix.
The optimization function is:

    min_{W,H}  Σ_{i=1}^{f_a} Σ_{j=1}^{f_p} ( D_ij − (WH)_ij )² + λ ( ‖W − x‖² + ‖H − y‖² )

wherein W represents the angle dimension feature matrix, H represents the pixel dimension feature matrix, f_a represents the number of angle-dimension data, f_p represents the number of pixel-dimension data, k represents the number of selected angles (the common dimension of W and H), (·−·)² denotes the square of the difference of the two bracketed elements, D_ij represents the element of the two-dimensional feature matrix at position (i, j), x represents the angle dimension prior input information, y represents the pixel dimension prior input information, and λ is a regularization constraint factor.
S133, alternatively minimizing and solving an angle dimension feature matrix and a pixel dimension feature matrix in the optimization function.
S134, calculating a loss function based on the angle dimension feature matrix and the pixel dimension feature matrix, and updating the angle dimension feature matrix and the pixel dimension feature matrix based on the calculated value of the loss function.
S135, repeating the steps to finish backward scattering deduction of the pixel dimension data of the two-dimensional feature matrix, and obtaining a scattering deduction result.
The product WH in the formula characterizes the backscattering characteristics of the points: it gives a two-dimensional deduction result for the sliding-window sampled data, in which each sampled point has a corresponding deduction result along the angle dimension, and on this basis the final deduction-result analysis is performed.
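The alternating minimization of S131-S135 can be sketched as a regularized alternating-least-squares loop; this assumes the optimization function takes the regularized factorization form min ||D − WH||² + λ(||W − W0||² + ||H − H0||²), with the prior input information encoded as prior matrices W0 and H0 (both the form and the encoding are assumptions):

```python
import numpy as np

def als_deduce(D, W0, H0, lam=1e-3, iters=200):
    """Alternating least squares for
        min_{W,H} ||D - W H||_F^2 + lam*(||W - W0||_F^2 + ||H - H0||_F^2)
    where W0, H0 carry the angle- and pixel-dimension prior information.
    Each factor has a closed-form ridge-regression update."""
    W, H = W0.copy(), H0.copy()
    k = W.shape[1]
    I = np.eye(k)
    for _ in range(iters):
        # minimize over W with H fixed, then over H with W fixed
        W = (D @ H.T + lam * W0) @ np.linalg.inv(H @ H.T + lam * I)
        H = np.linalg.inv(W.T @ W + lam * I) @ (W.T @ D + lam * H0)
    return W, H

rng = np.random.default_rng(0)
f_a, f_p, k = 8, 20, 3                            # angles x pixels, rank k
D = rng.random((f_a, k)) @ rng.random((k, f_p))   # synthetic angle-pixel matrix
W0, H0 = rng.random((f_a, k)), rng.random((k, f_p))
W, H = als_deduce(D, W0, H0)
err = np.linalg.norm(D - W @ H) / np.linalg.norm(D)
```

With a small λ the data term dominates and WH converges toward D; a larger λ pulls the factors toward the priors, which is how the prior input information constrains the deduction.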
In S140, the reconstructed target structure and the corresponding geometric attributes are obtained from the deduction result. Limited-angle data are input into the model; for example, one third of the angle data are used, and in experiments on a point target the remaining two thirds of that point's data are complemented from the model output and then compared, as data curves, against the original full-angle data. After every point in the image has been solved by the deduction algorithm, a new deduced two-dimensional matrix, i.e. an image, is formed. The scattering deduction result is compared with the SAR full-angle image data of the target object to obtain the data curve characteristics and correct the SAR full-angle image data; an image of the target object is reconstructed from the SAR full-angle image data to characterize the target edge information; and the numbers of pixels occupied by the target object in the reconstructed image and in the SAR multi-angle images are compared to characterize the geometric attributes.
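The pixel-count comparison for the geometric attributes can be sketched as follows; the amplitude threshold used to segment the target is an assumed rule, not one specified by the patent:

```python
import numpy as np

def geometric_attributes(recon_img, multi_angle_img, thresh=0.5):
    """Count the pixels occupied by the target (amplitude above an assumed
    segmentation threshold) in the reconstructed image and in an original
    multi-angle image, for comparing the target's apparent extent."""
    n_recon = int((np.asarray(recon_img) > thresh).sum())
    n_orig = int((np.asarray(multi_angle_img) > thresh).sum())
    return n_recon, n_orig

recon = np.array([[0.9, 0.8, 0.1],
                  [0.7, 0.6, 0.2]])
orig = np.array([[0.9, 0.1, 0.1],
                 [0.8, 0.1, 0.1]])
print(geometric_attributes(recon, orig))  # (4, 2)
```

A reconstructed extent larger than that seen at any single angle is consistent with the complemented full-angle data revealing parts of the target that scatter only at the deduced angles.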
The invention further provides a multi-angle SAR target scattering anisotropy deduction device, which has the same technical characteristics as the multi-angle SAR target scattering anisotropy deduction method shown in figures 1-2.
Fig. 3 schematically shows a block diagram of a multi-angle SAR target scattering anisotropy deduction device according to an embodiment of the present invention.
As shown in fig. 3, the multi-angle SAR target scattering anisotropy deduction device 300 provided in the embodiment of the present invention includes a data dimension conversion module 310, a dimension information extraction module 320, a target scattering deduction module 330 and a target structure reconstruction module 340.
The data dimension conversion module 310 is configured to convert the SAR multi-angle image data of the target object into a two-dimensional feature matrix of angle and pixel dimensions.
The dimension information extraction module 320 is configured to extract pixel dimension prior input information from pixel dimension data of the two-dimensional feature matrix, and obtain angle dimension prior input information based on prior SAR full-angle image data.
The target scatter deduction module 330 is configured to perform backscatter deduction on pixel dimension data of the two-dimensional feature matrix based on the pixel dimension prior input information and the angle dimension prior input information, so as to obtain a scatter deduction result.
The target structure reconstruction module 340 is configured to complement SAR full angle image data of the target object based on the scatter deduction result, so as to reconstruct target edge information and geometric properties of the target object.
It is understood that the data dimension conversion module 310, the dimension information extraction module 320, the target scatter deduction module 330 and the target structure reconstruction module 340 may be combined and implemented in one module, or any one of the modules may be split into a plurality of modules. Alternatively, at least some of the functionality of one or more of the modules may be combined with at least some of the functionality of other modules and implemented in one module. According to embodiments of the invention, at least one of the data dimension conversion module 310, the dimension information extraction module 320, the target scatter deduction module 330 and the target structure reconstruction module 340 may be implemented at least in part as hardware circuitry, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system in a package, an Application Specific Integrated Circuit (ASIC), or any other reasonable way of integrating or packaging a circuit, or by any suitable combination of the three implementation approaches of software, hardware, and firmware. Alternatively, at least one of the data dimension conversion module 310, the dimension information extraction module 320, the target scatter deduction module 330 and the target structure reconstruction module 340 may be at least partially implemented as computer program modules, which may perform the functions of the respective modules when run by a computer.
Fig. 4 schematically shows a block diagram of an electronic device according to an embodiment of the present invention.
As shown in fig. 4, the electronic device 400 described in the present embodiment includes a processor 410 and a computer-readable storage medium 420. The electronic device 400 may perform the method described above with reference to fig. 1 to carry out the specific operations described.
In particular, processor 410 may include, for example, a general purpose microprocessor, an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. Processor 410 may also include on-board memory for caching purposes. Processor 410 may be a single processing unit or a plurality of processing units for performing the different actions of the method flow described with reference to fig. 1 according to an embodiment of the invention.
The computer-readable storage medium 420 may be, for example, any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the readable storage medium include: magnetic storage devices such as magnetic tape or hard disk (HDD); optical storage devices such as compact discs (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or a wired/wireless communication link.
The computer-readable storage medium 420 may include a computer program 421, which computer program 421 may include code/computer-executable instructions that, when executed by the processor 410, cause the processor 410 to perform the method flow as described above in connection with fig. 1 and any variations thereof.
The computer program 421 may be configured with computer program code comprising, for example, computer program modules. For example, in an example embodiment, the code in the computer program 421 may include one or more program modules, including, for example, module 421A, module 421B, … …. It should be noted that the division and number of the modules are not fixed; those skilled in the art may use suitable program modules or combinations of program modules according to the actual situation, which, when executed by the processor 410, enable the processor 410 to perform the method flow described above in connection with figs. 1-2 and any variations thereof.
At least one of the data dimension conversion module 310, the dimension information extraction module 320, the target scatter deduction module 330 and the target structure reconstruction module 340 may be implemented as computer program modules described with reference to fig. 4, which, when executed by the processor 410, may implement the respective operations described above.
The present invention also provides a computer-readable medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer readable medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present invention.
Those skilled in the art will appreciate that the features recited in the various embodiments of the invention can be combined and/or integrated in various ways, even if such combinations or integrations are not explicitly recited in the present invention. In particular, the features recited in the various embodiments of the invention can be combined and/or integrated without departing from the spirit and teachings of the invention. All such combinations and/or integrations fall within the scope of the invention.
While the invention has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims and equivalents thereof. Thus, the scope of the invention should not be limited to the embodiments described above, but should be determined by the appended claims and their equivalents.
Claims (10)
1. A multi-angle SAR target scattering anisotropy deduction method, characterized by comprising the following steps:
converting SAR multi-angle image data of a target object into a two-dimensional feature matrix of angle and pixel dimensions;
extracting pixel dimension priori input information from pixel dimension data of the two-dimensional feature matrix, and acquiring angle dimension priori input information based on a priori SAR full-angle image data;
performing backward scattering deduction on the pixel dimension data of the two-dimensional feature matrix based on the pixel dimension priori input information and the angle dimension priori input information to obtain a scattering deduction result;
and supplementing SAR full-angle image data of the target object based on the scattering deduction result so as to reconstruct target edge information and geometric properties of the target object.
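The data-dimension conversion of claim 1 amounts to flattening a stack of per-angle SAR images into a matrix whose rows index the observation angle and whose columns index the pixels. A minimal illustrative sketch (not the patent's implementation; array shapes are assumptions):

```python
import numpy as np

def to_angle_pixel_matrix(sar_stack):
    """Flatten a stack of multi-angle SAR images into a 2-D feature matrix.

    sar_stack: array of shape (n_angles, height, width), one image per
    observation angle. Returns a matrix D of shape (n_angles, n_pixels)
    whose rows carry angle-dimension data and whose columns carry
    pixel-dimension data.
    """
    n_angles = sar_stack.shape[0]
    return sar_stack.reshape(n_angles, -1)

# Three 4x4 images observed from three angles -> a 3x16 feature matrix
stack = np.arange(3 * 4 * 4, dtype=float).reshape(3, 4, 4)
D = to_angle_pixel_matrix(stack)
```

Each column of `D` then records how one pixel's backscatter varies with angle, which is the quantity the later claims operate on.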
2. The method of claim 1, wherein the extracting pixel dimension a priori input information from the pixel dimension data of the two-dimensional feature matrix comprises:
sequentially sampling pixel dimension data in the two-dimensional feature matrix by utilizing a sliding window to obtain sampling data;
and analyzing and processing the data of the target point in the sampling data through a bilateral filter to obtain pixel dimension priori input information.
3. The method of claim 2, wherein analyzing and processing the data of the target point in the sampled data via the bilateral filter to obtain the pixel dimension prior input information comprises:
assigning weights to the sampling data based on the relative positional relationship of the sampling data through a bilateral filter;
and calculating a filtering value of a target point based on the weight, and recording the filtering value as pixel dimension priori input information, wherein the target point is a sampling center point of the sliding window.
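Claims 2-3 describe a standard bilateral filter applied per sliding window: each sample is weighted both by its relative position to the window's centre and by its intensity difference from the centre, and the weighted average is the filtered value of the target point. A minimal sketch (Gaussian kernels and the `sigma` parameters are assumptions, not recited in the claims):

```python
import numpy as np

def bilateral_center_value(window, sigma_s=1.0, sigma_r=0.1):
    """Bilateral-filter value of the sampling centre point of one window.

    Combines a spatial Gaussian on the relative position of each sample
    with a range Gaussian on its intensity difference from the centre.
    """
    h, w = window.shape
    cy, cx = h // 2, w // 2
    ys, xs = np.mgrid[0:h, 0:w]
    spatial = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma_s ** 2))
    rng = np.exp(-((window - window[cy, cx]) ** 2) / (2 * sigma_r ** 2))
    weights = spatial * rng
    return float((weights * window).sum() / weights.sum())

# A uniform window is (numerically) unchanged; an outlier sample gets a
# near-zero range weight and barely shifts the centre value.
flat = np.full((3, 3), 5.0)
edge = np.ones((3, 3)); edge[0, 0] = 100.0
```

This edge-preserving property is why the bilateral filter is a reasonable source of pixel-dimension prior information: it smooths speckle without blurring target boundaries.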
4. The method of claim 1, wherein the obtaining angular dimension a priori input information based on a priori SAR full angle image data comprises:
generating a scattering curve based on the SAR full angle image data, the scattering curve representing the signal strength returned by the target object from different scattering angles;
selecting one-dimensional anisotropic data from the scattering curve, wherein data corresponding to a strong scattering angle in the scattering curve does not belong to the selected range;
selecting one-dimensional isotropic data from image data of an environment surrounding the target object;
and taking the anisotropic data and the isotropic data as the angle dimension prior input information.
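Claim 4's selection step excludes the strong-scattering angles from the anisotropic samples. One hedged way to realise "strong scattering angle" is a percentile threshold on the scattering curve (the threshold choice below is an assumption, not part of the claim):

```python
import numpy as np

def select_anisotropic(curve, strong_quantile=0.8):
    """Pick 1-D anisotropic samples from a full-angle scattering curve,
    excluding angles whose returned signal strength exceeds a
    strong-scattering threshold (here, hypothetically, a quantile)."""
    threshold = np.quantile(curve, strong_quantile)
    return curve[curve < threshold]

# Two strong-scattering flashes (5.0, 4.8) are excluded from the selection
curve = np.array([0.2, 0.3, 5.0, 0.25, 0.35, 4.8, 0.3, 0.28, 0.31, 0.29])
aniso = select_anisotropic(curve)
```

The isotropic counterpart of the claim would be drawn analogously from pixels of the surrounding environment, whose response is roughly flat across angle.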
5. The method of claim 1, wherein performing backscatter deduction on pixel dimension data of the two-dimensional feature matrix based on the pixel dimension prior input information and the angle dimension prior input information to obtain a scatter deduction result comprises:
initializing an angle dimension feature matrix and a pixel dimension feature matrix;
establishing an optimization function related to the angle dimension feature matrix and the pixel dimension feature matrix based on a collaborative filtering algorithm, wherein the input of the optimization function comprises the pixel dimension priori input information, the angle dimension priori input information and the two-dimensional feature matrix;
alternately minimizing and solving an angle dimension feature matrix and a pixel dimension feature matrix in the optimization function;
calculating a loss function based on the angular dimension feature matrix and the pixel dimension feature matrix, and updating the angular dimension feature matrix and the pixel dimension feature matrix based on the calculated value of the loss function;
repeating the steps to finish backward scattering deduction of the pixel dimension data of the two-dimensional feature matrix, and obtaining a scattering deduction result.
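The alternating minimisation of claim 5 can be sketched as a regularised alternating least squares factorisation: fix one factor, solve a linear system for the other, and repeat until `W @ H` approximates the two-dimensional feature matrix. This is only an illustrative skeleton under stated assumptions; the prior-information terms of the patent's objective are omitted for brevity:

```python
import numpy as np

def als_deduce(D, k=2, lam=0.01, iters=50, seed=0):
    """Alternating least-squares sketch of the deduction step.

    Initialises the angle-dimension factor W (n_angles x k) and the
    pixel-dimension factor H (k x n_pixels), then alternately solves
    ridge-regularised least-squares subproblems so that W @ H
    approximates D; W @ H plays the role of the scattering deduction
    result.
    """
    rng = np.random.default_rng(seed)
    n_a, n_p = D.shape
    W = rng.random((n_a, k))
    H = rng.random((k, n_p))
    I = lam * np.eye(k)
    for _ in range(iters):
        # Fix H, solve (H H^T + lam I) W^T = H D^T for W
        W = np.linalg.solve(H @ H.T + I, H @ D.T).T
        # Fix W, solve (W^T W + lam I) H = W^T D for H
        H = np.linalg.solve(W.T @ W + I, W.T @ D)
    return W, H

# A rank-2 matrix is recovered almost exactly by a k=2 factorisation
D = np.outer([1., 2., 3.], [1., 0., 2., 1.]) + np.outer([0., 1., 1.], [1., 1., 0., 0.])
W, H = als_deduce(D, k=2)
err = np.linalg.norm(D - W @ H)
```

Each alternating solve decreases the regularised loss, which is why the claim's "alternately minimizing and solving" loop converges to a stable factor pair.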
6. The method of claim 5, wherein the optimization function is:
wherein W represents the angle dimension feature matrix, H represents the pixel dimension feature matrix, f_a represents the number of angle dimension data, f_p represents the number of pixel dimension data, k represents the number of selected angles, the squared term denotes the square of the difference between the two elements in brackets, D_ij represents the element of the two-dimensional feature matrix at position (i, j), x represents the angle dimension priori input information, y represents the pixel dimension priori input information, and λ is a regularization constraint factor.
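The formula of the optimization function is rendered as an image in the original publication and does not survive in this text. A plausible reconstruction, consistent with the symbol definitions of claim 6 and the alternating minimization of claim 5 but not the patent's verbatim formula, is:

```latex
\min_{W,H}\ \sum_{i=1}^{f_a}\sum_{j=1}^{f_p}
  \bigl( D_{ij} - (WH)_{ij} \bigr)^{2}
  + \lambda \bigl( R(W; x) + R(H; y) \bigr)
```

Here $R(\cdot;\cdot)$ is hypothetical notation for regularization terms that tie the factors $W$ and $H$ to the angle-dimension prior $x$ and the pixel-dimension prior $y$, weighted by the regularization constraint factor $\lambda$.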
7. The method of claim 1, wherein the supplementing SAR full angle image data of the target object based on the scatter deduction result to reconstruct target edge information and geometric properties of the target object comprises:
comparing the scattering deduction result with SAR full-angle image data of the target object to obtain data curve characteristics, and correcting the SAR full-angle image data;
reconstructing an image of the target object based on the SAR full angle image data to characterize target edge information of the target object;
and comparing the number of pixels occupied by the target object in the reconstructed image and the SAR multi-angle image of the target object to represent the geometric attribute.
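The geometric-attribute comparison of claim 7 reduces to counting the pixels the target occupies in the reconstructed image versus the original multi-angle image. A minimal sketch, assuming binary target masks (the mask representation is an assumption):

```python
import numpy as np

def pixel_count_ratio(reconstructed_mask, multi_angle_mask):
    """Compare the number of target pixels in the reconstructed image
    versus the multi-angle image; the ratio is a simple proxy for the
    recovered geometric extent of the target."""
    rec = int(np.count_nonzero(reconstructed_mask))
    orig = int(np.count_nonzero(multi_angle_mask))
    return rec, orig, rec / orig

rec_mask = np.zeros((8, 8), dtype=bool); rec_mask[2:6, 2:6] = True  # 16 px
ma_mask = np.zeros((8, 8), dtype=bool); ma_mask[3:6, 3:6] = True    # 9 px
rec, orig, ratio = pixel_count_ratio(rec_mask, ma_mask)
```

A ratio above 1 indicates that the full-angle reconstruction recovers target extent that individual multi-angle observations miss.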
8. A multi-angle SAR target scattering anisotropy deduction device, comprising:
the data dimension conversion module is used for converting SAR multi-angle image data of the target object into a two-dimensional feature matrix of angle dimension and pixel dimension;
the dimension information extraction module is used for extracting pixel dimension priori input information from the pixel dimension data of the two-dimensional feature matrix and acquiring angle dimension priori input information based on the prior SAR full-angle image data;
the target scattering deduction module is used for performing backward scattering deduction on the pixel dimension data of the two-dimensional feature matrix based on the pixel dimension priori input information and the angle dimension priori input information to obtain a scattering deduction result;
and the target structure reconstruction module is used for supplementing SAR full-angle image data of the target object based on the scattering deduction result so as to reconstruct target edge information and geometric attributes of the target object.
9. An electronic device, comprising: a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the multi-angle SAR target scattering anisotropy deduction method according to any one of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the multi-angle SAR target scattering anisotropy deduction method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310339662.0A CN116047463A (en) | 2023-04-03 | 2023-04-03 | Multi-angle SAR target scattering anisotropy deduction method, device, equipment and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116047463A true CN116047463A (en) | 2023-05-02 |
Family
ID=86131727
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310339662.0A Pending CN116047463A (en) | 2023-04-03 | 2023-04-03 | Multi-angle SAR target scattering anisotropy deduction method, device, equipment and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116047463A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110133682A (en) * | 2019-01-08 | 2019-08-16 | 西安电子科技大学 | Spaceborne comprehensive SAR adaptive targets three-dimensional rebuilding method |
WO2019184709A1 (en) * | 2018-03-29 | 2019-10-03 | 上海智瞳通科技有限公司 | Data processing method and device based on multi-sensor fusion, and multi-sensor fusion method |
CN114325709A (en) * | 2022-03-14 | 2022-04-12 | 中国科学院空天信息创新研究院 | Multi-angle spaceborne SAR imaging method, device, equipment and medium |
CN114519778A (en) * | 2022-03-02 | 2022-05-20 | 中国科学院空天信息创新研究院 | Target three-dimensional reconstruction method, device, equipment and medium for multi-angle SAR data |
2023-04-03: application CN202310339662.0A filed; published as CN116047463A (en), status active, Pending
Non-Patent Citations (1)
Title |
---|
Yue Xiaoyang, et al.: "Target anisotropic scattering deduction model using multi-aspect SAR data", ISPRS Journal of Photogrammetry and Remote Sensing, pages 153-168 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113640798A (en) * | 2021-08-11 | 2021-11-12 | 北京无线电测量研究所 | Radar target multi-angle reconstruction method and device and storage medium |
CN113640798B (en) * | 2021-08-11 | 2023-10-31 | 北京无线电测量研究所 | Multi-angle reconstruction method, device and storage medium for radar target |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108230329B (en) | Semantic segmentation method based on multi-scale convolution neural network | |
CN110472627B (en) | End-to-end SAR image recognition method, device and storage medium | |
WO2021000906A1 (en) | Sar image-oriented small-sample semantic feature enhancement method and apparatus | |
Acharya et al. | BIM-PoseNet: Indoor camera localisation using a 3D indoor model and deep learning from synthetic images | |
Xiao et al. | Three-dimensional point cloud plane segmentation in both structured and unstructured environments | |
CN110728658A (en) | High-resolution remote sensing image weak target detection method based on deep learning | |
Chen et al. | RangeSeg: Range-aware real time segmentation of 3D LiDAR point clouds | |
US8294712B2 (en) | Scalable method for rapidly detecting potential ground vehicle under cover using visualization of total occlusion footprint in point cloud population | |
CN116258658B (en) | Swin transducer-based image fusion method | |
Song et al. | Extraction and reconstruction of curved surface buildings by contour clustering using airborne LiDAR data | |
CN108010065A (en) | Low target quick determination method and device, storage medium and electric terminal | |
CN112630160A (en) | Unmanned aerial vehicle track planning soil humidity monitoring method and system based on image acquisition and readable storage medium | |
CN116047463A (en) | Multi-angle SAR target scattering anisotropy deduction method, device, equipment and medium | |
CN111458691B (en) | Building information extraction method and device and computer equipment | |
Geiss et al. | Inpainting radar missing data regions with deep learning | |
CN114219894A (en) | Three-dimensional modeling method, device, equipment and medium based on chromatography SAR point cloud | |
Zhang et al. | Hawk‐eye‐inspired perception algorithm of stereo vision for obtaining orchard 3D point cloud navigation map | |
Ghannadi et al. | Optimal texture image reconstruction method for improvement of SAR image matching | |
Hesami et al. | Range segmentation of large building exteriors: A hierarchical robust approach | |
D'Hondt et al. | Geometric primitive extraction for 3D reconstruction of urban areas from tomographic SAR data | |
Kusetogullari et al. | Unsupervised change detection in landsat images with atmospheric artifacts: a fuzzy multiobjective approach | |
CN116843906A (en) | Target multi-angle intrinsic feature mining method based on Laplace feature mapping | |
CN116503602A (en) | Unstructured environment three-dimensional point cloud semantic segmentation method based on multi-level edge enhancement | |
Salazar Colores et al. | Statistical multidirectional line dark channel for single‐image dehazing | |
CN114612315A (en) | High-resolution image missing region reconstruction method based on multi-task learning |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20230502 |