CN111414801B - Classification and identification method for electrically large non-cooperative target with uncertain shape - Google Patents


Publication number
CN111414801B
CN111414801B (application CN202010100114.9A)
Authority
CN
China
Prior art keywords: formula, SBR, uncertain, representing, triangular
Prior art date
Legal status: Active
Application number
CN202010100114.9A
Other languages
Chinese (zh)
Other versions
CN111414801A (en
Inventor
何姿
陈如山
丁大志
樊振宏
李宇晟
操龙潜
Current Assignee: Nanjing University of Science and Technology
Original Assignee: Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202010100114.9A priority Critical patent/CN111414801B/en
Publication of CN111414801A publication Critical patent/CN111414801A/en
Application granted granted Critical
Publication of CN111414801B publication Critical patent/CN111414801B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06V20/13 Satellite images
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N3/045 Combinations of networks
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06T17/20 Finite element generation, e.g. wire-frame surface description, tessellation


Abstract

The invention discloses a classification and identification method for an electrically large non-cooperative target with uncertain shape, comprising the following steps: first, the electrically large non-cooperative target is modeled with the non-uniform rational B-spline (NURBS) technique, and the shape variation of the model is combined with a fast imaging formula based on SBR (shooting and bouncing rays); then, two-dimensional ISAR images of the uncertain-shape target under different incidence angles are obtained by a perturbation method, and a sample library of uncertain-shape electrically large non-cooperative targets is established from these images; finally, a convolutional neural network is used to classify and identify the electrically large non-cooperative target with shape uncertainty. Based on the principle of the perturbation method, random variables are introduced into the SBR fast imaging formula, which avoids the repeated modeling of the Monte Carlo method and the repeated calls to an SBR fast imaging solver, and rapidly yields the amplitude variation of the strong scattering points, so that a sample library of uncertain-shape targets can be built quickly for classification and identification.

Description

Classification and identification method for electrically large non-cooperative target with uncertain shape
Technical Field
The invention belongs to the field of numerical calculation of target electromagnetic scattering characteristics, and particularly relates to a classification and identification method for an electrically large non-cooperative target with an uncertain shape.
Background
The principle of target identification and classification is to estimate parameters such as the size and shape of a target from the characteristic information in the target echoes through various mathematical multidimensional space transformations, and finally to determine an identification function from a large number of training samples and perform identification judgment in a classifier. Although many target detection and recognition methods exist, they usually share the limitation that a large number of samples is required to train the classifier, and in real applications it is often difficult to find thousands of samples for a specific target. In the defense industry, ISAR imaging of radar targets is of great significance for automatic target recognition (ATR) and stealth aircraft design, and a large number of target images is needed for recognition and judgment; however, simulation of ISAR images is time-consuming, especially when the target shape is uncertain. The rapid establishment of a sample library for electrically large non-cooperative targets with uncertain shape is the focus of the present invention.
To solve the problem of shape uncertainty, the Monte Carlo method is a very good tool; it is an important numerical calculation method proposed in the 1940s for solving probability and statistics problems. The Monte Carlo method uses random numbers to perform statistical experiments and takes the resulting statistics (such as the mean, probability, etc.) as the numerical solution of the problem. Compared with other numerical calculation methods, the Monte Carlo method has the following advantages: (1) its convergence rate is independent of the problem dimension; (2) it is only weakly limited by the problem conditions; (3) when implemented on a computer, the program structure is clear and easy to debug.
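The statistical-experiment idea can be sketched as follows. This is an illustrative toy: the function `amplitude` and all parameter values are hypothetical stand-ins for a full SBR evaluation, not part of the invention.

```python
import random
import math

def monte_carlo_mean(f, alpha_c, delta_alpha, n_samples=10000, seed=0):
    """Estimate the mean of f(alpha) for alpha drawn uniformly from
    [alpha_c - delta_alpha, alpha_c + delta_alpha]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        alpha = rng.uniform(alpha_c - delta_alpha, alpha_c + delta_alpha)
        total += f(alpha)  # each call would be one full simulation in practice
    return total / n_samples

# Toy "scattering amplitude" model standing in for an SBR imaging run,
# which is why 1000+ samples become expensive in the real workflow.
amplitude = lambda a: math.cos(2.0 * a)
est = monte_carlo_mean(amplitude, alpha_c=0.0, delta_alpha=0.1)
```

Each sample here would, in the real workflow, require one complete SBR imaging run — exactly the cost that the perturbation method avoids.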
However, the Monte Carlo method converges slowly: a large number of computation steps is often required to reach even modest accuracy, and the computation is limited by the size of the computer memory.
Disclosure of Invention
The invention aims to provide a classification and identification method for an electrically large non-cooperative target with uncertain shape.
The technical solution realizing the purpose of the invention is as follows: a classification and identification method for an electrically large non-cooperative target with uncertain shape, comprising the following steps:
step 1, combining shape uncertainty with SBR fast imaging through NURBS modeling: the target surface is discretized with a triangular mesh, the coordinates of any point on the object are expressed with a random variable α, and the terms of the SBR fast imaging formula that involve the triangle coordinates are combined with the random variable α;
step 2, establishing a sample library of uncertain-shape electrically large non-cooperative targets: the incidence direction of the plane wave is determined, the SBR fast imaging formula containing the random variable α is solved by the perturbation method to obtain two-dimensional ISAR (inverse synthetic aperture radar) images of the uncertain-shape target at the given incidence angle, and these images are used as training-set samples; after changing the incidence angle, two-dimensional ISAR images of the uncertain-shape target are obtained as test-set samples;
step 3, classifying and identifying the uncertain-shape electrically large non-cooperative targets: the pixel values of the two-dimensional ISAR images of the uncertain-shape target in the training-set samples are used as the input of a convolutional neural network, and the neural-network model is established after feature extraction; the test-set samples are input into the network to obtain the classification and identification result.
Compared with the prior art, the invention has the following remarkable advantages: (1) modeling of the uncertain-shape object is convenient, and the shape of the object can be changed by a small number of mutually independent control points; the target surface is discretized into triangular surface elements and the spatial coordinates of each triangular element are associated with a random variable, so that the random variable is conveniently introduced into the SBR (shooting and bouncing rays) fast imaging formula; (2) based on the principle of the perturbation method, the ubiquitous shape uncertainty is combined with the SBR fast imaging formula, and the amplitude variation of the strong scattering points can be obtained rapidly and correctly by modeling the theoretical model only once and evaluating the formula only once; repeated modeling of the shape-varied model and repeated calls to the SBR fast imaging solver are unnecessary, which greatly reduces the computation time; (3) the sample library is established rapidly: after the strong-scattering-point information is obtained, all ISAR images over all scan angles and all sampling points in the random-variable interval are output quickly, so that the training-set and test-set sample libraries can be established rapidly.
Drawings
FIG. 1 is a B2 model airplane diagram.
FIG. 2 is a diagram of an aircraft model F22.
Fig. 3 is a schematic illustration of a B2 aircraft modeled with NURBS.
FIG. 4 is a control point diagram for controlling camber change of a B2 aircraft.
FIG. 5 is a schematic diagram of the θ and φ angles.
FIG. 6 is a graph of the pixel-value comparison results for the two-dimensional ISAR image of the uncertain-shape B2 aircraft model.
Fig. 7 is a B2 aircraft two-dimensional ISAR image.
FIG. 8 is an F22 aircraft two-dimensional ISAR image.
FIG. 9 is a graph of training accuracy and validation accuracy in a convolutional neural network.
FIG. 10 is a graph illustrating training loss rate and validation loss rate in a convolutional neural network.
Detailed Description
To overcome the above shortcomings of the Monte Carlo method, a perturbation method is used to solve the uncertainty problem. In electromagnetics, it is an approximation method for solving the electromagnetic-field characteristic values of a system that differs only slightly from an original system: as long as the characteristics and distribution of the original system are known (e.g., the scattered field value or the field distribution), a perturbation formula can superimpose a small change on the initial values of the original system to approximate the perturbed system. Compared with the Monte Carlo method, the perturbation method requires a short computation time and can rapidly analyze the electromagnetic characteristics of a three-dimensional target with uncertain shape. The present method is based on the principle of the perturbation method: random variables are introduced into the SBR (shooting and bouncing rays) fast imaging formula, which avoids the repeated modeling of the Monte Carlo method and the repeated calls to an SBR fast imaging solver, and rapidly yields the amplitude variation of the strong scattering points, so that a sample library of uncertain-shape targets can be built quickly for classification and identification.
The present invention is described in further detail below with reference to the attached drawing figures.
The invention provides a classification and identification method for an electrically large non-cooperative target with uncertain shape, which combines an uncertain-shape modeling technique with the SBR fast imaging formula and classifies and identifies the target, comprising the following steps:
Step 1, combining shape uncertainty with SBR fast imaging through NURBS (non-uniform rational B-spline) modeling: the target surface is discretized with a triangular mesh, the coordinates of any point on the object are expressed with a random variable α, and the terms of the SBR fast imaging formula that involve the triangle coordinates are combined with the random variable α; specifically:
A fast imaging formula based on SBR (shooting and bouncing rays) is established, where O(x,z) denotes the object shape function:
[Equations (1) and (2), rendered as images in the original.]
h(x,z) = sinc(k0θ0x)·sinc(Δkz)   (3)
where j denotes the imaginary unit, k0 the wavenumber corresponding to the center frequency, θ0 the azimuth sweep angle width, Δk the wavenumber sweep width, x the azimuth direction, and z the range direction. (ΔA)_i^exit denotes the exit area of the i-th triangular bin, Σ_i denotes summation over all ray tubes, x_i and z_i denote the x- and z-components of the barycenter position vector of the i-th triangular bin, k̂ denotes the incidence direction of the incident wave, and r_i denotes the barycenter position vector of each triangular bin.
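Equation (3) can be evaluated directly. The sketch below is an illustration with arbitrary parameter values; note that NumPy's `sinc` is the normalized sin(πu)/(πu), so the argument must be divided by π to match sinc(u) = sin(u)/u.

```python
import numpy as np

def psf(x, z, k0, theta0, dk):
    """Point-spread function h(x,z) = sinc(k0*theta0*x) * sinc(dk*z),
    with sinc(u) = sin(u)/u (np.sinc is the normalized variant, hence /pi)."""
    return np.sinc(k0 * theta0 * x / np.pi) * np.sinc(dk * z / np.pi)

# h peaks at the scatterer location (x=0, z=0) and falls off in sidelobes;
# k0, theta0, dk below are illustrative values, not from the patent.
x = np.linspace(-1.0, 1.0, 201)
z = np.zeros_like(x)
h = psf(x, z, k0=2 * np.pi / 0.03, theta0=0.05, dk=20.0)
```

The widths of the two sinc lobes set the azimuth and range resolutions, which is why Δx and Δz appear later as the grid spacings of the uniform sampling sequence.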
B_i^rays is the electromagnetic-field-related quantity under the small-angle approximation, given by:
B_i^rays = (−s1E3 + s3E1 − s2H3 + s3H2) + θ(s1H2 − s2H1)   (4)
where s1, s2, s3 denote the x-, y-, and z-components of the reflection direction; E1, E2, E3 the x-, y-, and z-components of the electric field; H1, H2, H3 the x-, y-, and z-components of the magnetic field; and θ the pitch angle of the incident wave.
And writing O (x, z) into a convolution form by utilizing the characteristics of the sinc function to perform accelerated calculation. As shown in formula (5):
O(x,z)=I(x,z)*h(x,z) (5)
where I(x,z) is the non-uniform sampling pulse sequence (its expression is rendered as an image in the original; the δ appearing in it denotes the impulse function). The non-uniform sampling pulse sequence I(x,z) is then converted into a uniform sampling sequence I'(x,z):
[Equation (6), rendered as an image in the original.]
where Δx is the azimuth resolution, Δz is the range resolution, (m_a)_i, (m_b)_i, (n_a)_i, (n_b)_i are the interpolation indices associated with the i-th triangular bin, and (β_a)_i, (β_b)_i, (β_c)_i, (β_d)_i are the interpolation coefficients associated with the i-th triangular bin, whose specific expressions are given by equations (7) and (8):
[Equations (7) and (8), rendered as images in the original.]
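The conversion of the non-uniform pulse sequence into a uniform one can be illustrated with standard bilinear spreading onto the four surrounding grid nodes. This sketch assumes a simple four-neighbor weight convention and does not reproduce the patent's exact indices and coefficients of equations (6)–(8).

```python
import numpy as np

def spread_bilinear(points, amps, nx, nz, dx, dz):
    """Spread complex amplitudes at non-uniform (x, z) positions onto a
    uniform nx-by-nz grid using bilinear weights — a sketch of the
    I(x,z) -> I'(x,z) resampling step."""
    grid = np.zeros((nx, nz), dtype=complex)
    for (x, z), a in zip(points, amps):
        fx, fz = x / dx, z / dz
        m, n = int(np.floor(fx)), int(np.floor(fz))
        bx, bz = fx - m, fz - n  # fractional offsets within the cell
        for dm, dn, w in [(0, 0, (1 - bx) * (1 - bz)), (1, 0, bx * (1 - bz)),
                          (0, 1, (1 - bx) * bz), (1, 1, bx * bz)]:
            if 0 <= m + dm < nx and 0 <= n + dn < nz:
                grid[m + dm, n + dn] += w * a  # weights sum to 1 per point
    return grid

# One unit-amplitude contribution between grid nodes.
g = spread_bilinear([(0.5, 0.25)], [1.0 + 0j], nx=4, nz=4, dx=1.0, dz=1.0)
```

Because the four weights sum to one, the total complex amplitude of each scattering contribution is preserved on the uniform grid, which is what allows the subsequent convolution with h(x,z) to be computed efficiently.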
A random variable α is introduced into the SBR fast imaging formula, yielding:
[Equation (9), rendered as an image in the original.]
i.e., the shape uncertainty is combined with the SBR fast imaging formula.
Step 2, establishing a sample library of the uncertain-shape electrically large non-cooperative targets: the incidence direction of the plane wave is determined, and the SBR fast imaging formula containing the random variable α is solved by the perturbation method to obtain two-dimensional ISAR (inverse synthetic aperture radar) images of the uncertain-shape target at the given incidence angle; these images are used as training-set samples. After changing the incidence angle, two-dimensional ISAR images of the uncertain-shape target are obtained as test-set samples. Specifically:
from equation (9), when a random variable is introduced into the bin triangle coordinates, uncertainty is introduced into the SBR fast imaging equation. For an uncertainty shape, it can be expressed as a corresponding random variable α i In the closed interval [ alpha ] c -Δα,α c +Δα]Internal random value, wherein alpha c The value of a random variable when the model is not changed is represented, and delta alpha represents the maximum variation of the random variable when the appearance of the model is changed; i is 1, … n, where n represents the number of random variables;
Based on the principle of the perturbation method, equation (9) can be expanded at α_c in a first-order Taylor series, as follows:

O(α) ≈ O(α_c) + Σ_{i=1}^{n} (∂O/∂α_i)|_{α=α_c} · Δα_i   (10)
where n denotes the number of random variables, Δα_i denotes the variation of the i-th random variable, and (∂O/∂α_i)|_{α=α_c} denotes the partial derivative of the SBR fast imaging formula O(α) with respect to the random variable α_i, evaluated at α_c.
Therefore, the variation Δb of the shape function caused by the shape uncertainty is obtained from equation (10) as:

Δb = Σ_{i=1}^{n} (∂O/∂α_i)|_{α=α_c} · Δα_i   (11)

[Equation (12), rendered as an image in the original.]
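The first-order expansion of equation (10) can be illustrated on a scalar toy model. The function `f` below is a hypothetical stand-in for O(α), and the derivative is taken by a central finite difference; neither is the patent's actual analytic derivative of the SBR formula.

```python
import numpy as np

def first_order_perturbation(f, alpha_c, d_alphas, h=1e-6):
    """Approximate f(alpha_c + d_alpha) by f(alpha_c) + f'(alpha_c)*d_alpha,
    the first-order Taylor expansion used by the perturbation method.
    f is evaluated (and differentiated) only once, at alpha_c."""
    f0 = f(alpha_c)
    dfd = (f(alpha_c + h) - f(alpha_c - h)) / (2 * h)  # central difference
    return np.array([f0 + dfd * da for da in d_alphas])

# Toy scattering-amplitude model standing in for the SBR formula O(alpha).
f = lambda a: np.exp(-a) * np.cos(3 * a)
d_alphas = np.linspace(-0.05, 0.05, 11)
approx = first_order_perturbation(f, alpha_c=0.2, d_alphas=d_alphas)
exact = np.array([f(0.2 + da) for da in d_alphas])
```

For small Δα the linear approximation tracks the exact values closely, while the model itself is evaluated only at α_c — the same trade that lets the method skip repeated SBR solver calls.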
when the direction of the random variable is the z direction, the formula of the partial derivative is as follows:
[Equation (13), rendered as an image in the original.]
where N and M respectively denote the number of control points in the two directions of the two-dimensional plane, the piecewise rational basis functions at the 1st, 2nd, and 3rd vertices of the i-th triangular bin appear in the formula (their symbols are rendered as images in the original), and k_iz denotes the z-component of the incidence direction vector of the incident wave.
[Auxiliary formulas, rendered as images in the original.]
For different variations Δα_i, once the incident-wave direction, the electrical size of the target, and the operating frequency are determined, the value calculated by equation (13) does not change, so it needs to be computed only once. Once the incident-wave pitch angle is determined, the value O(x,z) of each pixel in the two-dimensional ISAR image is obtained for all variations according to equation (10); scanning the incident-wave azimuth angle over a certain angular range yields two-dimensional ISAR images of the uncertain-shape non-cooperative target in different attitudes, and these images are used as training-set samples. The incident-wave pitch angle is then changed, and a test set is established in the same way as the training set.
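The sample-library construction loop described above can be sketched as follows. Here `toy_image` is a hypothetical stand-in for the perturbation-accelerated imaging of equation (10), and all sweep parameters are illustrative.

```python
import numpy as np

def build_sample_library(image_fn, d_alphas, azimuths, pitch):
    """Generate one 2-D image per (shape variation, azimuth) pair at a
    fixed pitch angle; image_fn(pitch, az, d_alpha) -> 2-D array."""
    return np.stack([image_fn(pitch, az, da)
                     for da in d_alphas for az in azimuths])

# Stand-in imager: one Gaussian "scattering centre" whose position shifts
# with azimuth and whose amplitude shifts with the shape perturbation.
def toy_image(pitch, az, da, size=16):
    x = np.arange(size) - size / 2
    xx, zz = np.meshgrid(x, x)
    return (1.0 + da) * np.exp(-((xx - 4 * np.sin(az)) ** 2 + zz ** 2) / 8.0)

train = build_sample_library(toy_image,
                             d_alphas=np.linspace(-0.1, 0.1, 5),
                             azimuths=np.linspace(-0.1, 0.1, 8),
                             pitch=np.pi / 2)
```

A test set would be produced by calling the same loop with a different pitch angle, mirroring the training/test split described above.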
Step 3, classifying and identifying the uncertain-shape electrically large non-cooperative targets: the pixel values of the two-dimensional ISAR images of the uncertain-shape target in the training-set samples are used as the input of a convolutional neural network, and the neural-network model is established after feature extraction; the test-set samples are input into the network to obtain the classification and identification result.
The pixel values of the two-dimensional ISAR images in the training-set samples obtained in step 2 are fed into the input layer of a convolutional neural network; the convolutional layers extract features from the image pixel values, the extracted features are propagated forward, and the difference between the network output and the data label is back-propagated to adjust each weight and threshold in the network until the stopping criterion is met or the maximum number of iterations is exceeded, after which the stabilized network parameters are saved. The ISAR images in the test set are then input into the trained model for testing, and the test accuracy and test loss rate are obtained by comparison against the data labels.
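A minimal forward pass of the feature-extraction/classification pipeline can be sketched in plain NumPy. This toy network (the filter count, kernel size, pooling choice, and the two-class head standing in for B2 vs. F22) is illustrative only and is not the patent's actual architecture or training loop.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Naive 'valid' 2-D correlation used as a convolutional layer."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def tiny_cnn_forward(img, kernels, W, b):
    """One conv layer with ReLU, global average pooling, dense softmax head."""
    feats = np.array([np.maximum(conv2d_valid(img, k), 0.0).mean()
                      for k in kernels])            # pooled feature vector
    logits = W @ feats + b
    e = np.exp(logits - logits.max())
    return e / e.sum()                              # class probabilities

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))                 # stands in for an ISAR image
kernels = rng.standard_normal((4, 3, 3))            # 4 (untrained) 3x3 filters
W, b = rng.standard_normal((2, 4)), np.zeros(2)     # 2 classes, e.g. B2 vs F22
probs = tiny_cnn_forward(img, kernels, W, b)
```

Training would adjust `kernels`, `W`, and `b` by backpropagating the difference between `probs` and the one-hot label, as the paragraph above describes.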
The present invention will be described in detail with reference to examples.
Examples
In this embodiment, B2 and F22 airplanes with uncertain shapes are classified and identified; the embodiment was run on a computing platform with an Intel(R) Core(TM) i7-7700K CPU @ 3.6 GHz and 8 GB of memory. The B2 and F22 airplane models are shown in FIGS. 1 and 2; the B2 airplane has a total length of 20.98 m, a wingspan of 52.38 m, and a height of 3.05 m; the F22 airplane has a total length of 18.92 m, a wingspan of 13.56 m, and a height of 4.03 m. Taking the B2 airplane model as an example, using the non-uniform rational B-spline (NURBS) surface modeling method, the B2 model can be constructed from 56 NURBS patches whose shape is controlled by 334 control points, as shown in FIG. 3. The nose camber varies over the range [−0.1λ, 0.1λ], where λ is the wavelength. The nose camber is controlled only by the Z-coordinates of 5 control points, as shown in FIG. 4; therefore, only the Z-direction coordinates of these 5 control points need to be set as random variables. FIG. 5 is a schematic diagram of the θ and φ angles, where θ is the pitch angle and φ is the azimuth angle. The plane wave is incident at θ = 90°, with the azimuth φ directed along the aircraft nose. The observation direction is θ = 90°, with φ scanned over a small angular range around the incidence direction. At this incidence angle, the B2 airplane model with uncertain shape is analyzed both by the present method and by the Monte Carlo method with 1000 samples; the pixel-value comparison and the statistical variation of each pixel of the image are shown in FIG. 6, where the two curves agree well. A comparison of the memory requirements and computation times of the two methods is shown in Table 1.
TABLE 1
[Table 1, rendered as an image in the original: comparison of the memory requirements and computation times of the two methods.]
As can be seen from Table 1, the memory required by the method of the present invention differs little from that of the Monte Carlo method, but its computation time is far shorter than that of the Monte Carlo method with 1000 samples; the method therefore computes much faster than the Monte Carlo method. FIG. 7 shows a B2 airplane two-dimensional ISAR image, and FIG. 8 an F22 airplane two-dimensional ISAR image. After the sample libraries of uncertain-shape B2 and F22 airplanes are built, the targets are classified and identified through machine learning. FIG. 9 shows the validation accuracy of the test set, and FIG. 10 the validation loss rate of the test set. As shown in FIGS. 9 and 10, as the number of iterations increases, the validation accuracy and loss rate stabilize; the validation accuracy is very high, reaching more than 95%, and classification and identification of uncertain-shape B2 and F22 airplanes is achieved.
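The source of the speedup reported in Table 1 can be illustrated on a scalar toy model: the Monte Carlo column pays one full model evaluation per sample, while the perturbation column pays one evaluation plus one derivative and then only cheap linear updates. The function `f` and all numbers below are hypothetical stand-ins, not the patent's data.

```python
import numpy as np

# Toy pixel-amplitude model in place of the full SBR imaging chain.
f = lambda a: (1.0 + a) * np.cos(5 * a)

alpha_c, d_alpha = 0.0, 0.02
samples = np.linspace(alpha_c - d_alpha, alpha_c + d_alpha, 1001)

# "Monte Carlo": evaluate the model at every sample (expensive in reality).
mc_vals = f(samples)

# "Perturbation": one value + one derivative, then a cheap linear map.
h = 1e-6
slope = (f(alpha_c + h) - f(alpha_c - h)) / (2 * h)
pert_vals = f(alpha_c) + slope * (samples - alpha_c)

max_err = np.max(np.abs(mc_vals - pert_vals))
```

The small maximum discrepancy over the whole interval mirrors the close agreement of the two curves in FIG. 6, at a fraction of the evaluation cost.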

Claims (1)

1. A classification and identification method for an electrically large non-cooperative target with uncertain shape, characterized by comprising the following steps:
step 1, combining shape uncertainty with SBR fast imaging through NURBS modeling: the target surface is discretized with a triangular mesh, the coordinates of any point on the object are expressed with a random variable α, and the terms of the SBR fast imaging formula that involve the triangle coordinates are combined with the random variable α;
establishing the SBR fast imaging formula linked with the shape uncertainty: the target surface is discretized with a triangular mesh, NURBS modeling expresses the coordinates of any point on the object with a random variable α, and the terms of the SBR fast imaging formula that involve the triangle coordinates are combined with the random variable α; the specific method is as follows:
an SBR-based fast imaging formula is established, where O(x,z) denotes the object shape function:
[Equations (1) and (2), rendered as images in the original.]
h(x,z) = sinc(k0θ0x)·sinc(Δkz)   (3)
where j denotes the imaginary unit, k0 the wavenumber corresponding to the center frequency, θ0 the azimuth sweep angle width, Δk the wavenumber sweep width, x the azimuth direction, and z the range direction; (ΔA)_i^exit denotes the exit area of the i-th triangular bin, Σ_i denotes summation over all ray tubes, x_i and z_i denote the x- and z-components of the barycenter position vector of the i-th triangular bin, k̂ denotes the incidence direction of the incident wave, and r_i denotes the barycenter position vector of each triangular bin;
B_i^rays is the electromagnetic-field-related quantity under the small-angle approximation, given by:
B_i^rays = (−s1E3 + s3E1 − s2H3 + s3H2) + θ(s1H2 − s2H1)   (4)
where s1, s2, s3 denote the x-, y-, and z-components of the reflection direction; E1, E2, E3 the x-, y-, and z-components of the electric field; H1, H2, H3 the x-, y-, and z-components of the magnetic field; and θ the pitch angle of the incident wave;
and writing O (x, z) into a convolution form by using the characteristics of the sinc function to perform accelerated calculation, wherein the formula (5) is as follows:
O(x,z)=I(x,z)*h(x,z) (5)
where I(x,z) is the non-uniform sampling pulse sequence (its expression is rendered as an image in the original; the δ appearing in it denotes the impulse function); the non-uniform sampling pulse sequence I(x,z) is then converted into a uniform sampling sequence I'(x,z) by interpolation:
[Equation (6), rendered as an image in the original.]
where Δx is the azimuth resolution, Δz is the range resolution, (m_a)_i, (m_b)_i, (n_a)_i, (n_b)_i are the interpolation indices associated with the i-th triangular bin, and (β_a)_i, (β_b)_i, (β_c)_i, (β_d)_i are the interpolation coefficients associated with the i-th triangular bin, whose specific expressions are given by equations (7) and (8):
[Equations (7) and (8), rendered as images in the original.]
a random variable α is introduced into the SBR fast imaging formula, yielding:
[Equation (9), rendered as an image in the original.]
namely, the shape uncertainty is combined with an SBR rapid imaging formula;
step 2, establishing a sample library of the uncertain-shape electrically large non-cooperative targets: the incidence direction of the plane wave is determined, and the SBR fast imaging formula containing the random variable α is solved by the perturbation method to obtain two-dimensional ISAR (inverse synthetic aperture radar) images of the uncertain-shape target at the given incidence angle; these images are used as training-set samples; after changing the incidence angle, two-dimensional ISAR images of the uncertain-shape target are obtained as test-set samples; the specific method is as follows:
as seen from equation (9), introducing a random variable into the bin triangle coordinates introduces the uncertainty into the SBR fast imaging formula; an uncertain shape can be expressed through the corresponding random variable α_i taking random values in the closed interval [α_c − Δα, α_c + Δα], where α_c is the value of the random variable for the unperturbed model and Δα is the maximum variation of the random variable as the model shape changes; i = 1, …, n, where n is the number of random variables;
according to the principle of the perturbation method, equation (9) can be expanded at α_c in a first-order Taylor series, as follows:

O(α) ≈ O(α_c) + Σ_{i=1}^{n} (∂O/∂α_i)|_{α=α_c} · Δα_i   (10)
where n denotes the number of random variables, Δα_i denotes the variation of the i-th random variable, and (∂O/∂α_i)|_{α=α_c} denotes the partial derivative of the SBR fast imaging formula O(α) with respect to the random variable α_i, evaluated at α_c;
therefore, the variation Δb of the shape function caused by the shape uncertainty is obtained from equation (10) as:

Δb = Σ_{i=1}^{n} (∂O/∂α_i)|_{α=α_c} · Δα_i   (11)

[Equation (12), rendered as an image in the original.]
when the direction of the random variable is the z direction, the formula of the partial derivative is as follows:
[Equation (13), rendered as an image in the original.]
where N and M respectively denote the number of control points in the two directions of the two-dimensional plane, the piecewise rational basis functions at the 1st, 2nd, and 3rd vertices of the i-th triangular bin appear in the formula (their symbols are rendered as images in the original), and k_iz denotes the z-component of the incidence direction vector of the incident wave;
[Additional equation images, including formula (13), were not extracted from the original.]
For different variations Δα_i, once the incident-wave direction, the electrical size of the target and the operating frequency are determined, the value calculated by formula (13) does not change, so it needs to be calculated only once; when the incident-wave pitch angle is determined, the value O(x, z) of each pixel in the two-dimensional ISAR images under all variations is obtained according to formula (10), and scanning the incident-wave azimuth angle over a certain angular range yields two-dimensional ISAR images of the uncertain non-cooperative target in different attitudes, which are taken as training-set samples; the incident-wave pitch angle is then changed, and test-set samples are established in the same manner as the training-set samples;
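The sample-generation flow above can be sketched as follows; `base_images` and `deriv_images` stand in for the once-computed formula (13)/(10) quantities, and all names are hypothetical:

```python
import numpy as np

def make_isar_samples(base_images, deriv_images, deltas):
    """Build ISAR image samples from precomputed quantities.

    base_images:  dict azimuth -> 2-D image O(x, z) at alpha_c
    deriv_images: dict azimuth -> stack of n derivative images dO/dalpha_i
    deltas:       array of shape (num_variations, n)

    The derivative images depend only on the incident direction,
    electrical size and frequency, so they are computed once and reused
    for every shape variation (first-order update, formula (10)).
    """
    samples = []
    for az, base in base_images.items():
        derivs = deriv_images[az]            # shape (n, H, W)
        for d in deltas:                     # one shape variation
            img = base + np.tensordot(d, derivs, axes=1)
            samples.append((az, img))
    return samples
```

Scanning the azimuth angle then amounts to iterating `base_images` over an angular grid; changing the pitch angle produces the test set the same way.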
step 3, classification and recognition of the electrically large non-cooperative targets with uncertain shape: the pixel values of the two-dimensional ISAR images of the uncertain-shape targets in the training-set samples are used as the input of a convolutional neural network, and a neural network model is established after feature extraction; the test-set samples are then input into the neural network to obtain the classification and recognition result.
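Step 3 can be illustrated with a minimal forward pass of a convolutional network on ISAR pixel values; this numpy sketch (one convolution, ReLU, 2x2 max-pooling and a softmax classifier) only mirrors the structure of the pipeline, not the network actually trained in the patent:

```python
import numpy as np

def conv2d_valid(img, kernel):
    """'Valid' 2-D correlation of a single-channel image with one kernel."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool2(x):
    """2x2 max pooling (assumes even spatial dimensions)."""
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def cnn_forward(img, kernel, w_fc, b_fc):
    """Conv -> ReLU -> 2x2 max-pool -> flatten -> dense -> softmax."""
    feat = np.maximum(conv2d_valid(img, kernel), 0.0)
    feat = max_pool2(feat).ravel()
    logits = feat @ w_fc + b_fc
    e = np.exp(logits - logits.max())
    return e / e.sum()                       # class probabilities
```

In practice the convolutional and dense weights would be learned from the training-set ISAR images, and the argmax of the output probabilities gives the predicted target class.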
CN202010100114.9A 2020-02-18 2020-02-18 Classification and identification method for electrically large non-cooperative target with uncertain shape Active CN111414801B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010100114.9A CN111414801B (en) 2020-02-18 2020-02-18 Classification and identification method for electrically large non-cooperative target with uncertain shape


Publications (2)

Publication Number Publication Date
CN111414801A CN111414801A (en) 2020-07-14
CN111414801B true CN111414801B (en) 2022-08-12

Family

ID=71492767

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010100114.9A Active CN111414801B (en) 2020-02-18 2020-02-18 Classification and identification method for electrically large non-cooperative target with uncertain shape

Country Status (1)

Country Link
CN (1) CN111414801B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103713284A (en) * 2012-09-28 2014-04-09 中国航天科工集团第二研究院二O七所 SBR and PO technology-based strong scattering center calculation method
CN110196961A (en) * 2018-02-26 2019-09-03 南京理工大学 Non- cooperation does not know the rebecca echo prediction method of shape
CN110362877A (en) * 2019-06-24 2019-10-22 南京理工大学 The environment electromagnetics analysis of scattering method of uncertain factor
CN110580742A (en) * 2018-06-07 2019-12-17 南京理工大学 method for achieving modeling and analysis of target electromagnetic scattering characteristics based on GPU parallel SBR


Also Published As

Publication number Publication date
CN111414801A (en) 2020-07-14

Similar Documents

Publication Publication Date Title
CN108052942B (en) Visual image recognition method for aircraft flight attitude
CN111524129B (en) Aircraft skin butt joint gap calculation method based on end face extraction
CN109559338A (en) A kind of three-dimensional point cloud method for registering estimated based on Weighted principal component analysis and M
CN112819962A (en) Non-uniform grid division and local grid density method in digital image correlation
CN113376597A (en) Complex terrain electromagnetic scattering rapid simulation method based on digital elevation map and GPU
CN109557533A (en) Model-based joint tracking and identification method
CN110362877B (en) Environmental electromagnetic scattering characteristic analysis method for uncertain factors
Yuan et al. 3D point cloud recognition of substation equipment based on plane detection
CN111414801B (en) Classification and identification method for electrically large non-cooperative target with uncertain shape
CN116051540B (en) Method and system for acquiring positioning pose of transformer wiring terminal based on point cloud model
Turner RESPECT: Rapid electromagnetic scattering predictor for extremely complex targets
CN111368398B (en) Electromagnetic scattering characteristic analysis method and device for electrically large target with uncertain structure
CN110196961B (en) Aircraft radar echo prediction method of non-cooperative uncertain shape
CN115471763A (en) SAR image airplane target identification method based on semantic and textural feature fusion
CN115169170A (en) Composite target scattering semi-analytic rapid calculation method based on non-uniform grid model
CN105279320B (en) A kind of method for generating FDTD grids
CN114910892A (en) Laser radar calibration method and device, electronic equipment and storage medium
CN110287549B (en) RCS prediction method for multi-position thin coating aircraft with uncertain sources
Cilliers et al. Considering CAD model accuracy for Radar Cross Section and signature calculations of electrically large complex targets
Wei et al. Learning surface scattering parameters from SAR images using differentiable ray tracing
CN113821967A (en) Large sample training data generation method based on scattering center model
Zhuang et al. Accurate statistical modeling method for dynamic RCS
Jia et al. Generation of parametric aircraft models from a cloud of points
Wang et al. Edge diffraction in NURBS-UTD method
CN117706490B (en) Method for modeling coupling scattering center between metal targets based on single-station radar

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant