CN110223342B - Space target size estimation method based on deep neural network


Info

Publication number: CN110223342B
Application number: CN201910520018.7A
Authority: CN (China)
Prior art keywords: layer, rcs, neural network, deep neural, target
Prior art date: 2019-06-17
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN110223342A
Inventors: 周代英, 李雄, 赖陈潇, 黎晓烨, 冯健
Current assignee: University of Electronic Science and Technology of China
Original assignee: University of Electronic Science and Technology of China
Priority date: 2019-06-17
Filing date: 2019-06-17
Application filed by University of Electronic Science and Technology of China; priority to CN201910520018.7A
Publication of CN110223342A: 2019-09-10
Application granted; publication of CN110223342B: 2022-06-24

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/60: Analysis of geometric attributes
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/10012: Stereo images


Abstract

The invention discloses a space target size estimation method based on a deep neural network. A deep neural network is pre-trained to establish the relation between a space target's RCS sequence and its size; the RCS sequence of a target whose size is unknown is then fed into the trained network, which outputs the size estimate. Compared with traditional space target size estimation methods, no size estimation model has to be built manually in advance: a deep learning mechanism automatically learns the internal association between the target size and the target RCS data sequence, which improves the accuracy of space target size estimation.

Description

Space target size estimation method based on deep neural network
Technical Field
The invention belongs to the technical field of neural networks and space target feature extraction, and relates to a space target size estimation method based on a deep neural network.
Background
Space target recognition is one of the key technologies of a space situational awareness system and is mainly used to extract feature information of a space target. Structure and size are salient features of a space target and support its subsequent recognition. The radar cross section (RCS), as narrow-band information, contains rich target characteristics, so extracting structure and size information from a space target RCS sequence is of great significance. However, the RCS sequence is affected by many factors, such as the target's shape, attitude and scattering characteristics, which makes it a non-stationary signal and increases the difficulty of data processing.
Traditional space target size estimation methods apply a large amount of measured RCS data, treat space debris as equivalent spheres, and build a size estimation model on that basis. A later improvement proposed an ellipsoid model and estimated the ellipsoid's minor and major axes with a thresholding method. These methods, however, require both a mapping relationship between RCS and target size and a large amount of measured data, which is hard to acquire in practice. Moreover, the feature-size mapping model itself contains errors, and converting each value of the RCS sequence into a size through the model before further processing enlarges the estimation error.
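For concreteness, a commonly used sphere-equivalent mapping of this kind (a standard optical-region approximation, quoted here for illustration; it is not reproduced from this patent) is

$$\sigma = \pi r^2 \;\Longrightarrow\; r = \sqrt{\frac{\sigma}{\pi}}, \qquad \text{valid for } 2\pi r \gg \lambda,$$

where $\sigma$ is the RCS in square metres, $r$ the radius of the equivalent sphere, and $\lambda$ the radar wavelength. Applying such a point-wise mapping to every sample of a non-stationary RCS sequence is exactly the step that propagates model error into the size estimate.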
Disclosure of Invention
To solve these problems, the invention provides a space target size estimation method based on a deep neural network. The method still treats the space target as equivalent to an ellipsoid, but no size mapping model has to be built in advance: the association between the space target RCS sequence and the target size is established by a deep neural network. Once the network is trained, the influence of the model error inherent in the traditional methods on the size estimate is reduced and the estimation accuracy is improved.
The technical scheme of the invention is as follows: denote the RCS sequence of the space target by $\{rcs_1, rcs_2, rcs_3, \ldots, rcs_N\}$ and treat the space target as equivalent to an ellipsoid, so that size estimation mainly means estimating its minor and major axes. The deep neural network structure used for the estimation is shown in FIG. 1.
The steps of the space target size estimation based on the deep neural network are as follows:
Step 1: network initialization. The input layer of the deep neural network is the RCS sequence of the space target, $\{rcs_1, rcs_2, rcs_3, \ldots, rcs_N\}$; the output layer is the minor and major axes of the space target, $\{o_1, o_2\}$; the learning rate is $\eta$; and the excitation function is $g(x)$, taken as the sigmoid function:

$$g(x) = \frac{1}{1 + e^{-x}} \tag{1}$$
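As a running illustration (a minimal NumPy sketch, not part of the patent text), the excitation function can be written as:

```python
import numpy as np

def sigmoid(x):
    # g(x) = 1 / (1 + exp(-x)), Eq. (1). Its derivative
    # g'(x) = g(x) * (1 - g(x)) is what produces the a(1 - a)
    # factors in the error terms of Eqs. (6) and (7) below.
    return 1.0 / (1.0 + np.exp(-x))
```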
Step 2: compute the output of each layer of the deep neural network:

$$a_j^1 = rcs_j \tag{2}$$

$$a_j^l = g\Big(\sum_k \omega_{jk}^l\, a_k^{l-1} + b_j^l\Big) \tag{3}$$

where $a_j^1$ denotes an output value of the input layer, i.e. the original RCS sequence. For $2 \le l \le M$, $a_j^l$ denotes the output value of the $j$-th neuron in the $l$-th layer, $\omega_{jk}^l$ the connection weight from the $k$-th neuron in layer $l-1$ to the $j$-th neuron in layer $l$, and $b_j^l$ the bias of the $j$-th neuron in layer $l$. Since the element-wise notation is cumbersome, the above can be rewritten in matrix form:

$$a^l = g(z^l) = g(W^l a^{l-1} + b^l) \tag{4}$$

where $z^l$ denotes the linear output of layer $l$ before activation.
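Continuing the sketch (the parameter containers and layer count are illustrative assumptions), Eqs. (2)-(4) amount to:

```python
def forward(rcs, weights, biases):
    """Forward pass per Eqs. (2)-(4). weights[i] and biases[i] map
    layer i+1 to layer i+2; the raw RCS sequence is the input layer."""
    activations = [np.asarray(rcs, dtype=float)]   # a^1 = rcs      (Eq. 2)
    for W, b in zip(weights, biases):
        z = W @ activations[-1] + b                # z^l = W^l a^{l-1} + b^l
        activations.append(sigmoid(z))             # a^l = g(z^l)   (Eq. 4)
    return activations                             # [a^1, ..., a^M]
```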
Step 3: take the mean square error over all output-layer nodes of the deep neural network as the objective function, i.e. for each sample minimize

$$E = \frac{1}{2}\sum_j \big(o_j - a_j^M\big)^2 \tag{5}$$

Optimizing with the gradient descent algorithm, the output-layer error is

$$\sigma_j^M = a_j^M\big(1 - a_j^M\big)\big(o_j - a_j^M\big) \tag{6}$$

where $\sigma_j^M$ is the error term of the $j$-th neuron of the output layer, $a_j^M$ the output value of that neuron, and $o_j$ the target value the sample assigns to it. The factor $a_j^M(1 - a_j^M)$ is the sigmoid derivative $g'(z_j^M)$.
Step 4: compute the errors of all nodes of the current layer from the error terms of the next layer connected to it; the error back-propagation algorithm gives the hidden-layer error as

$$\sigma_j^l = a_j^l\big(1 - a_j^l\big)\sum_k \omega_{kj}^{l+1}\,\sigma_k^{l+1} \tag{7}$$

where $\sigma_j^l$ is the error term of the $j$-th neuron in the $l$-th layer, $a_j^l$ the output value of that neuron, $\omega_{kj}^{l+1}$ an entry of the weight matrix $W^{l+1}$ between layer $l$ and layer $l+1$ (so the sum is the $j$-th component of $(W^{l+1})^{\mathrm T}\sigma^{l+1}$), and $\sigma_k^{l+1}$ the error term of the $k$-th neuron in layer $l+1$.
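In the running sketch, Eqs. (6) and (7) become (again illustrative, reusing forward from above):

```python
def backward(activations, target, weights):
    """Error terms per Eqs. (6)-(7); deltas[i] belongs to activations[i + 1]."""
    a_out = activations[-1]
    # Output layer: sigma^M = a^M (1 - a^M) (o - a^M)                 (Eq. 6)
    deltas = [a_out * (1.0 - a_out) * (target - a_out)]
    # Hidden layers: sigma^l = a^l (1 - a^l) (W^{l+1})^T sigma^{l+1}  (Eq. 7)
    for i in range(len(weights) - 1, 0, -1):
        a = activations[i]
        deltas.insert(0, a * (1.0 - a) * (weights[i].T @ deltas[0]))
    return deltas
```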
Step 5: once the error terms of all nodes have been computed, update the weights and biases of each layer of the deep neural network according to

$$\omega_{jk}^l \leftarrow \omega_{jk}^l + \eta\,\sigma_j^l\,a_k^{l-1} \tag{8}$$

$$b_j^l \leftarrow b_j^l + \eta\,\sigma_j^l \tag{9}$$
step 6 ends the iteration of the algorithm, and there are many methods to determine whether the algorithm has converged, for example, by specifying the number of iterations or determining whether the difference between two adjacent errors is smaller than a specified value. After the deep neural network parameters are obtained, the depth network can be adopted to estimate the size of the space target according to the RCS sequence of the space target.
The advantage of the method is that a deep neural network is pre-trained to establish the relation between the space target RCS sequence and the size; an RCS sequence with unknown size information is input into the trained deep neural network, and the size estimate of the space target is finally obtained. Compared with traditional space target size estimation methods, no target size estimation model needs to be built manually in advance: a deep learning mechanism automatically learns the internal association between the target size and the target RCS data sequence, improving the accuracy of space target size estimation.
Drawings
Fig. 1 is a diagram of a deep neural network architecture.
Detailed Description
The effectiveness of the scheme of the invention is illustrated below through simulation.
The simulation experiment parameters are as follows: the radar carrier frequency is 10 GHz, the distance between the radar and the satellite centre is 400 km, the radar azimuth angle is 30 degrees, the pitch angle is 15 degrees, the satellite spins at 60 revolutions per minute, the number of sampling points is 100, and the signal-to-noise ratio is set to 15 dB.
The deep neural network parameters are: input layer dimension 100, output layer dimension 2, hidden layer dimensions 150, 200 and 150, learning rate 0.009, and 5000 iterations.
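A hypothetical instantiation of the sketch with these dimensions might look as follows; the weight initialization, the placeholder data, and the scaling of the target axes into (0, 1) (needed because the output activation is a sigmoid) are assumptions, not details given in the document:

```python
rng = np.random.default_rng(0)
dims = [100, 150, 200, 150, 2]            # stated layer dimensions
weights = [rng.normal(0.0, 0.1, (dims[i + 1], dims[i]))
           for i in range(len(dims) - 1)]
biases = [np.zeros(dims[i + 1]) for i in range(len(dims) - 1)]

SCALE = 10.0                              # assumed metres-to-(0,1) scaling
rcs_seq = rng.random(100)                 # placeholder 100-point RCS sequence
axes = np.array([3.40, 5.80]) / SCALE     # [minor, major] axes, scaled

weights, biases = train([(rcs_seq, axes)], weights, biases)
est = forward(rcs_seq, weights, biases)[-1] * SCALE
print(f"estimated minor/major axes: {est} m")
```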
The sizes of the selected space targets are: target 1: minor axis 4.70 m, major axis 6.80 m; target 2: minor axis 4.00 m, major axis 6.60 m; target 3: minor axis 3.10 m, major axis 5.20 m; target 4: minor axis 4.60 m, major axis 5.40 m; target 5: minor axis 3.40 m, major axis 5.80 m.
A comparison of the estimates obtained with the traditional method and with the method of the invention is shown in Table 1.

TABLE 1: space target size estimation results (relative error) of the two methods
[Table 1 appears only as an image in the source; its aggregate figures are given in the following paragraph.]
From the experimental results, the overall relative error of the traditional method's size estimates is 30.4%, with an average relative error of 40.8% for the minor axis and 20.0% for the major axis. The overall relative error of the deep-neural-network method is 5.0%, with an average relative error of 3.4% for the minor axis and 6.5% for the major axis. The minor and major axes reflect the target's shape, carry clear physical meaning, and benefit subsequent space target classification and recognition.

Claims (1)

1. A space target size estimation method based on a deep neural network, which denotes the RCS sequence of a space target by $\{rcs_1, rcs_2, rcs_3, \ldots, rcs_N\}$, treats the space target as equivalent to an ellipsoid, and reduces size estimation to estimating the ellipsoid's minor and major axes, characterized by comprising the following steps:

S1: let the input layer of the deep neural network be the RCS sequence of the space target, $\{rcs_1, rcs_2, rcs_3, \ldots, rcs_N\}$, and the output layer be the minor and major axes of the space target, $\{o_1, o_2\}$; the learning rate is $\eta$ and the excitation function is $g(x)$, taken as the sigmoid function:

$$g(x) = \frac{1}{1 + e^{-x}}$$
s2, calculating the output of each layer of the deep neural network:
aj 1=rcsj
aj l=g(∑ωjk laj l-1+bj l)
wherein, aj 1Represents the output value of the input layer, i.e. the original RCS sequence, a when 2. ltoreq. l.ltoreq.Mj lDenotes the lthJ in a layerthOutput value of neuron, omegajk lRepresenting the slave neural network (l-1)thK in the layerthNeuron to lthJ in a layerthConnection weight of neurons, bj lDenotes the lthJ in a layerthA bias of a neuron; the above formula is rewritten as a matrix representation:
al=g(zl)=g(Wlal-1+bl)
wherein z islDenotes the lthLinear output before layer is not activated;
s3, taking the mean square error of all output layer nodes of the deep neural network as an objective function, i.e. it is desirable to minimize the following formula for each sample:
Figure FDA0002096091890000012
and (3) optimizing by using a gradient descent algorithm, and calculating the error of an output layer:
σj M=aj M(1-aj M)(oj-aj M)
wherein σj MIs output layer jthError term of neuron, aj MRepresents output layer jthOutput value of neuron, ojIndicating that the sample corresponds to the output layer jththA target value of a neuron;
s4, calculating the errors of all nodes of the current layer, namely calculating the error terms of the next layer of nodes connected with the current layer, and then obtaining the expression of the hidden layer errors through an error back propagation algorithm, wherein the expression is as follows:
σj l=aj l(1-aj l)Wl+1σj l+1
wherein σj lFirstthJ in a layerthError term of neuron, aj lDenotes the lthJ in a layerthOutput value of neuron, Wl+1Denotes the lthLayer and the (l +1)thWeight matrix formed by the weights of the layers, σj l+1(l +1)thJ in a layerthAn error term for the neuron;
s5, after obtaining the error terms of all the nodes, updating the weight and the bias of each layer of the deep neural network according to the following expression:
Figure FDA0002096091890000021
Figure FDA0002096091890000022
and S6, after the algorithm is converged, obtaining parameters of the deep neural network, and estimating the space target size by using the deep neural network.
CN201910520018.7A (priority date 2019-06-17, filing date 2019-06-17): Space target size estimation method based on deep neural network; status: Active; granted as CN110223342B (en)

Priority Applications (1)

Application number: CN201910520018.7A (granted as CN110223342B) | Priority date: 2019-06-17 | Filing date: 2019-06-17 | Title: Space target size estimation method based on deep neural network


Publications (2)

Publication number / Publication date
CN110223342A (en): 2019-09-10
CN110223342B (en): 2022-06-24

Family

ID=67817290

Family Applications (1)

Application number: CN201910520018.7A (Active) | Priority date: 2019-06-17 | Filing date: 2019-06-17 | Title: Space target size estimation method based on deep neural network

Country Status (1)

Country Link
CN (1) CN110223342B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111430903B * 2020-04-01 2021-08-10 Air Force Engineering University of PLA (中国人民解放军空军工程大学) Radiation scattering integrated low-RCS antenna housing and design method thereof
CN111859784B * 2020-06-24 2023-02-24 Tianjin University (天津大学) RCS time series feature extraction method based on deep learning neural network
CN113281715B * 2021-05-09 2022-06-21 Fudan University (复旦大学) Radar target characteristic data characterization method based on neural network

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2004198195A (en) * 2002-12-17 2004-07-15 Kawasaki Heavy Ind Ltd Method and device for detecting subterranean object, and moving object
US9292792B1 (en) * 2012-09-27 2016-03-22 Lockheed Martin Corporation Classification systems and methods using convex hulls

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6337654B1 (en) * 1999-11-05 2002-01-08 Lockheed Martin Corporation A-scan ISAR classification system and method therefor
KR20180068578A (en) * 2016-12-14 2018-06-22 Samsung Electronics Co., Ltd. (삼성전자주식회사) Electronic device and method for recognizing object by using a plurality of senses


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Classification of Ground Moving Radar Targets with RBF Neural Network; E. Notkin et al.; 8th International Conference on Pattern Recognition Applications and Methods; 2019-02-21; full text *
Space target recognition based on PSO-TDNN; Kou Peng et al.; Radar Science and Technology (雷达科学与技术); 2010-10-15 (No. 05); full text *
Space target structure recognition based on deep neural network; Zhou Chi et al.; Chinese Space Science and Technology (中国空间科学技术); 2018-11-19; full text *

Also Published As

Publication number Publication date
CN110223342A (en) 2019-09-10

Similar Documents

Publication number and title
CN109086700B (en) Radar one-dimensional range profile target identification method based on deep convolutional neural network
CN109993280B (en) Underwater sound source positioning method based on deep learning
CN108596327B (en) Seismic velocity spectrum artificial intelligence picking method based on deep learning
CN109597043B (en) Radar signal identification method based on quantum particle swarm convolutional neural network
CN110223342B (en) Space target size estimation method based on deep neural network
CN104459668B (en) radar target identification method based on deep learning network
CN107247259B (en) K distribution sea clutter shape parameter estimation method based on neural network
CN110427654B (en) Landslide prediction model construction method and system based on sensitive state
CN112966853B (en) Urban road network short-time traffic flow prediction method based on space-time residual mixed model
CN110456355B (en) Radar echo extrapolation method based on long-time and short-time memory and generation countermeasure network
CN110082738B (en) Radar target identification method based on Gaussian mixture and tensor recurrent neural network
CN114595732B (en) Radar radiation source sorting method based on depth clustering
CN107832789B (en) Feature weighting K nearest neighbor fault diagnosis method based on average influence value data transformation
CN108447057A (en) SAR image change detection based on conspicuousness and depth convolutional network
CN112965062B (en) Radar range profile target recognition method based on LSTM-DAM network
CN109255339B (en) Classification method based on self-adaptive deep forest human gait energy map
CN114169442A (en) Remote sensing image small sample scene classification method based on double prototype network
CN112966667A (en) Method for identifying one-dimensional distance image noise reduction convolution neural network of sea surface target
CN108983180A (en) A kind of high-precision radar sea clutter forecast system of colony intelligence
CN111832404B (en) Small sample remote sensing ground feature classification method and system based on feature generation network
CN109871907B (en) Radar target high-resolution range profile identification method based on SAE-HMM model
CN108983187B (en) Online radar target identification method based on EWC
CN112766381A (en) Attribute-guided SAR image generation method under limited sample
CN115598714B (en) Time-space coupling neural network-based ground penetrating radar electromagnetic wave impedance inversion method
CN114814776B (en) PD radar target detection method based on graph attention network and transfer learning

Legal Events

Code / Description
PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant