CN113589272A - Automatic generation method for target tracking equipment on-duty log

Automatic generation method for target tracking equipment on-duty log

Info

Publication number
CN113589272A
Authority
CN
China
Prior art keywords
target
maneuver
steps
time
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110757029.4A
Other languages
Chinese (zh)
Inventor
郭剑辉
刘帆
徐如峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Jibang Intelligent Technology Co ltd
Original Assignee
Jiangsu Jibang Intelligent Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Jibang Intelligent Technology Co ltd filed Critical Jiangsu Jibang Intelligent Technology Co ltd
Priority to CN202110757029.4A priority Critical patent/CN113589272A/en
Publication of CN113589272A publication Critical patent/CN113589272A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/66Radar-tracking systems; Analogous systems
    • G01S13/72Radar-tracking systems; Analogous systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar
    • G01S13/723Radar-tracking systems; Analogous systems for two-dimensional tracking, e.g. combination of angle and range tracking, track-while-scan radar by using numerical data
    • G01S13/726Multiple target tracking
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/89Radar or analogous systems specially adapted for specific applications for mapping or imaging
    • G01S13/90Radar or analogous systems specially adapted for specific applications for mapping or imaging using synthetic aperture techniques, e.g. synthetic aperture radar [SAR] techniques
    • G01S13/9094Theoretical aspects
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/41Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section
    • G01S7/417Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00 using analysis of echo signal for target characterisation; Target signature; Target cross-section involving the use of neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • G06T2207/10044Radar image

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Electromagnetism (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention discloses a method for automatically generating an on-duty log of target tracking equipment, belonging to the technical field of intelligent tracking equipment. The method comprises the following steps: S1, extracting the target state; S2, generating a state sequence with a time sliding window; and S3, judging the target threat degree. The system is implemented as Qt-based desktop software. Using radar target detection data such as distance, azimuth, elevation and time, the algorithm applies a sliding time window to extract feature plots with time correlation and eliminates the differences among the feature attributes; according to the number of extracted feature plots, the watchman's work of collecting and recording the data of targets tracked by equipment such as radar, and of recording at all times time points such as the target's start, maneuver, descent, turn and disappearance, is supported, with log files and reports generated at regular intervals.

Description

Automatic generation method for target tracking equipment on-duty log
Technical Field
The invention relates to the technical field of intelligent tracking equipment, and in particular to a method for automatically generating an on-duty log of target tracking equipment.
Background
Intelligent generation of logs for target tracking and state recognition and judgment has developed rapidly in recent years and is applied in many civil and military fields. The traditional manual discrimination and recording approach consumes a great deal of manpower; faced with growing data volumes and emergencies, the manual approach becomes increasingly tedious and easily introduces human negligence and omissions. Establishing a management system with the functions of target state tracking, state recognition and judgment, and intelligent generation of on-duty logs is therefore an effective way to solve the above problems.
The technologies for identifying and judging the state or type of a target that have matured over many years include the following:
(1) Target state identification based on echo fluctuation and modulation-spectrum characteristics
These techniques mostly rely on the radar's one-dimensional time-domain target echo waveform to obtain the target features contained in the waveform and thereby classify the target. In the echo-fluctuation method, the radar target is identified from data detected by a low-resolution radar as a plot target; during detection, the phase and amplitude of the target echo change with the attitude of the target relative to the radar. The motion state of the target is judged from the changes of the echo phase and amplitude, and the flight state of the target can be judged by further extracting feature data. Identification from modulation-spectrum characteristics exploits the principle that rotating parts of the target modulate the echo; for example, rotating components such as jet-engine blades, rotors and propellers impose a periodic modulation on the echo. The target is then judged with a K-nearest-neighbor classification method.
(2) Method for realizing target state recognition based on information fusion
When the target types are complex and the features of different targets are similar, evidence theory in information fusion can efficiently analyze and process the inaccurate, uncertain and incomplete information that arises in target recognition. For example, researchers have combined the DSmT and D-S multi-source information-fusion rules with a nearest-neighbor analysis algorithm to obtain a target identification algorithm based on evidence-theory information fusion. The algorithm can effectively handle uncertain information in the process of judging the target state and makes full use of the multi-domain composite features of the target to be identified.
(3) Target state identification based on high-resolution radar imaging characteristics
A high-resolution radar, an inverse synthetic aperture radar or a synthetic aperture radar is used to image the target in range and form a two-dimensional radar target image, from which the attitude and shape of the target are obtained; the target state is then identified with image-recognition techniques. Researchers have proposed an improved adaptive evolutionary particle swarm optimization (AEPSO) algorithm to optimize the parameters of a support vector machine (SVM) and establish a state classification and recognition model; by strengthening the nonlinear variation of the particle optimization process, the accuracy of target state recognition with high-resolution radar is improved, and the method is robust. Researchers have also used a least-squares SVM method for automatic discrimination of the radar target state, classifying the features of complex high-resolution range profiles with the least-squares method.
(4) Target state recognition based on decision tree model in statistical method
Nodes that classify on different attributes are constructed, the correlation among feature attributes is measured through repeated feature selection, and finally a regression model is established for target identification. Examples include a system that classifies and identifies radar RCS targets with a decision-tree algorithm, and a target classification method based on a decision-tree multi-class support vector machine, in which the decision-tree algorithm converts the binary SVM to be solved into a multi-class classifier.
(5) Target state recognition based on deep learning technology
With the rapid development of deep learning in recent years, such methods have also entered the field of radar target state judgment. Researchers have studied the application of convolutional neural networks to target state discrimination and developed a probabilistic generative model via convolutional factor analysis (CFA) that is suited to statistical discrimination with limited training data. A target identification method using a deep belief network on radar high-resolution range profiles has also been proposed, in which t-distributed stochastic neighbor embedding is used for data cleaning and the HRRP target data are balanced. In addition, researchers have developed a perceptual recurrent attention network for automatic radar target recognition, combining a recurrent neural network focused on time-series data with an attention mechanism, which raises the recognition rate and captures the correlation between target motion and time.
However, none of the above deep-learning studies uses the spatio-temporal plot information, such as distance, azimuth, elevation and time, at which the radar detects the target. Some researchers have built convolutional-neural-network models that preprocess and extract plot data such as distance and azimuth to discriminate the track type, but they use only two-dimensional spatio-temporal information and therefore cannot discriminate altitude changes of a flying target; moreover, they adopt the Adam optimizer, which selects gradient information only within a fixed window and ignores gradients outside it, so convergence may not be optimal.
Disclosure of Invention
The invention aims to provide a method for automatically generating an on-duty log of target tracking equipment. The system is implemented as Qt-based desktop software; important functions such as radar data storage, data playback and analysis, and automatic generation of task reports are realized with Qt, so that the on-duty process of a radar station becomes clear, intelligent and accurate. A classification and recognition algorithm for automatic discrimination of the target state is provided: using radar target detection data such as distance, azimuth, elevation and time, the algorithm applies a sliding time window to extract feature plots with time correlation, eliminates the differences among the feature attributes, and identifies the target maneuver state and the target track type separately according to the number of extracted feature plots. For track-type identification, a convolutional neural network model optimized with Adamax is adopted; compared with the traditional stochastic gradient descent method it updates the model weights faster, further improving the convergence and overall performance of the model, and experimental comparison with mainstream target recognition algorithms shows higher accuracy and recognition rate. The watchman needs to collect and record the data of targets tracked by equipment such as radar, recording at all times time points such as the target's start, maneuver, descent, turn and disappearance, and log files and reports are generated at regular intervals.
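For illustration only (not the patented implementation), the sliding-time-window extraction and attribute normalization described above could be sketched in Python roughly as follows; the window length, the dictionary field names and the min-max scaling are assumptions introduced for this sketch.

```python
from collections import deque

def sliding_window_sequences(plots, window_size=8):
    """Group time-ordered radar plots into overlapping feature-point sequences
    with a sliding time window (window_size is an assumed illustrative value)."""
    window = deque(maxlen=window_size)
    sequences = []
    for plot in sorted(plots, key=lambda p: p["time"]):   # preserve time correlation
        window.append(plot)
        if len(window) == window_size:
            sequences.append(list(window))                 # one sequence per window position
    return sequences

def normalize_sequence(seq, keys=("distance", "azimuth", "elevation")):
    """Min-max scale each feature attribute so differences in scale and units
    between attributes are removed (min-max scaling itself is an assumption)."""
    scaled = {}
    for k in keys:
        vals = [p[k] for p in seq]
        lo, hi = min(vals), max(vals)
        scaled[k] = [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in vals]
    return scaled
```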
In order to achieve the above effects, the present invention provides the following technical solutions: a method for automatically generating an on-duty log of target tracking equipment comprises the following steps:
S1, extracting a target state;
S2, generating a state sequence by a time sliding window;
S3, judging the threat degree of the target;
S4, calculating the position, the speed, the acceleration and the heading difference;
S5, judging the target maneuver;
and S6, generating a log file.
Further, according to the operation step in S1, the target states mainly include a straight-line maneuver, a turning maneuver, a diving maneuver and a pitch-up maneuver; the target maneuver state over a short time can be identified by calculating the changes of a small number of adjacent feature points, and the straight-line and turning maneuvers are mainly distinguished by the slope of the maneuver direction.
Further, according to the operation step in S4, the dive and pitch-up maneuvers focus on the altitude change, calculated as follows: given the distance r, azimuth a and elevation e of a feature point, conversion from the spherical coordinate system to three-dimensional Cartesian coordinates gives
x = r·cos(e)·sin(a), y = r·cos(e)·cos(a), z = r·sin(e).
Further, according to the operation step in S4, the slope on the horizontal plane is calculated as
k_ji = (y_j - y_i) / (x_j - x_i).
further, according to the operation procedure in S1, since there is a certain error in radar data collection and it is difficult to perform a complete linear maneuver for the flight target due to human manipulation and atmospheric air flow, when performing the identification, it is necessary to relax the threshold of whether the target turns, and k is known from the trigonometric functionjiWhere θ is the angle between the slope line and the x-axis, the recognition threshold can be set to kjiTan phi and is kji-tanφ≤ k(j+1)j≤kji+tanφ。
Further, according to the operation step in S4, the pitch-up and dive maneuvers are identified from the altitude change ΔH_ji, calculated as ΔH_ji = z_j - z_i (i, j = 1, 2, 3, ...; j > i). If ΔH_ji > 0, the target is performing a pitch-up maneuver; if ΔH_ji < 0, the target is performing a dive maneuver. Likewise, for objective reasons, a decision threshold should be set, i.e. |ΔH_ji| ≤ H.
Further, according to the operation step in S4, H is a set altitude-change threshold; if the altitude change does not exceed this threshold, no dive or pitch-up maneuver has occurred, and thus the target maneuver state can be identified by calculating the relative position changes of a small number of feature points.
Further, according to the operation procedure in S5, performing the convolution and pooling operations twice each is preferable, so a network structure of convolution layer 1, pooling layer 1, convolution layer 2, pooling layer 2 and a fully connected layer is used herein.
Further, according to the operation steps in S5, after the convolutional neural network is structured, the model must be compiled and a loss function and an optimizer selected. The loss function measures the predictive ability of the model and defines the gap between the model's prediction and the actual result: the smaller the cross-entropy value, the more similar the probability distributions of the two results. For the classification and regression model of multiple track types constructed here, the cross-entropy loss function is selected, defined as
C = -(1/n) Σ_x [ y·ln a + (1 - y)·ln(1 - a) ].
Wherein x denotes the sample data, y the actual classification and recognition result of the model, a the prediction result, and n the number of samples; the optimizer corrects the parameter values produced during model training by gradient descent and thereby optimizes the loss function. Adam is the optimizer chosen for most current classification and recognition models, and is computed as
m_t = μ·m_(t-1) + (1 - μ)·g_t,
n_t = ν·n_(t-1) + (1 - ν)·g_t^2,
Δθ_t = -η·m̂_t / (√(n̂_t) + ε).
Further, according to the operation step in S5, m_t denotes the exponentially decayed mean at time t, n_t the exponentially decayed mean square at time t, and g_t the gradient at time t; m̂_t = m_t/(1 - μ^t) and n̂_t = n_t/(1 - ν^t) are the corresponding bias corrections, and Δθ_t is the parameter change. μ, ν and ε are hyperparameter values, set to 0.9, 0.999 and 10^-8 respectively, and η is the learning rate, set to 0.002. However, Adam has certain convergence problems, so the invention adopts the Adamax algorithm improved from Adam, which adds an infinity norm on the basis of Adam and provides a bounded range for the learning rate, alleviating the poor convergence of adaptive learning algorithms. The specific improvement is as follows:
n_t = max(ν·n_(t-1), |g_t|),
Δθ_t = -η·m̂_t / n_t.
The invention provides an automatic generation method of an on-duty log of target tracking equipment, which has the following beneficial effects:
In fact, the system is implemented as Qt-based desktop software; important functions such as radar data storage, data playback and analysis, and automatic generation of task reports are realized with Qt, so that the on-duty process of a radar station becomes clear, intelligent and accurate. A classification and recognition algorithm for automatic discrimination of the target state is provided: using radar target detection data such as distance, azimuth, elevation and time, the algorithm applies a sliding time window to extract feature plots with time correlation, eliminates the differences among the feature attributes, and identifies the target maneuver state and the target track type separately according to the number of extracted feature plots. For track-type identification, a convolutional neural network model optimized with Adamax is adopted; compared with the traditional stochastic gradient descent method it updates the model weights faster, further improving the convergence and overall performance of the model, and experimental comparison with mainstream target recognition algorithms shows higher accuracy and recognition rate. The watchman needs to collect and record the data of targets tracked by equipment such as radar, recording at all times time points such as the target's start, maneuver, descent, turn and disappearance, and log files and reports are generated at regular intervals.
Drawings
FIG. 1 is a schematic diagram of the method of the present invention;
FIG. 2 is a diagram illustrating the structure of the automatic target-state discrimination algorithm according to the present invention.
Detailed Description
The invention provides a technical scheme that: referring to fig. 1-2, a method for automatically generating a target tracking device on-duty log includes the following steps:
S1, extracting a target state;
S2, generating a state sequence by a time sliding window;
S3, judging the threat degree of the target;
S4, calculating the position, the speed, the acceleration and the heading difference;
S5, judging the target maneuver;
and S6, generating a log file.
Specifically, according to the operation step in S1, the target states mainly include a straight-line maneuver, a turning maneuver, a diving maneuver and a pitch-up maneuver; the target maneuver state over a short time can be identified by calculating the changes of a small number of adjacent feature points, and the straight-line and turning maneuvers are mainly distinguished by the slope of the maneuver direction.
Specifically, according to the operation step in S4, the dive and pitch-up maneuvers focus on the altitude change, calculated as follows: given the distance r, azimuth a and elevation e of a feature point, conversion from the spherical coordinate system to three-dimensional Cartesian coordinates gives
x = r·cos(e)·sin(a), y = r·cos(e)·cos(a), z = r·sin(e).
specifically, according to the operation step in S4, the slope on the horizontal plane is calculated as follows:
Figure RE-GDA0003285330800000082
Figure RE-GDA0003285330800000083
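The two formulas above can be checked numerically with the minimal sketch below; the axis convention of the spherical-to-Cartesian conversion is an assumption, since the patent text does not fix it.

```python
import math

def to_cartesian(r, a, e):
    """Convert a plot from range r, azimuth a and elevation e (radians) to
    Cartesian coordinates; the axis convention is assumed for illustration."""
    x = r * math.cos(e) * math.sin(a)
    y = r * math.cos(e) * math.cos(a)
    z = r * math.sin(e)
    return x, y, z

def horizontal_slope(p_i, p_j):
    """Slope k_ji of the segment between two feature points, projected onto
    the horizontal plane (assumes the two x coordinates differ)."""
    (xi, yi, _), (xj, yj, _) = p_i, p_j
    return (yj - yi) / (xj - xi)
```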
specifically, according to the operation step in S1, since there is a certain error in radar data acquisition and it is difficult to perform a complete linear maneuver for a flying target due to human manipulation and atmospheric airflow, when performing recognition, it is necessary to relax the threshold of whether the target turns, and it can be known from the trigonometric function that k is kjiWhere θ is the angle between the slope line and the x-axis, the recognition threshold can be set to kjiTan phi and is kji-tanφ≤k(j+1)j≤kji+ tanφ。
Specifically, according to the operation step in S4, the dive and pitch-up maneuvers are identified from the altitude change ΔH_ji, calculated as ΔH_ji = z_j - z_i (i, j = 1, 2, 3, ...; j > i). If ΔH_ji > 0, the target is performing a pitch-up maneuver; if ΔH_ji < 0, the target is performing a dive maneuver. Likewise, for objective reasons, a decision threshold should be set, i.e. |ΔH_ji| ≤ H.
Specifically, according to the operation step in S4, H is a set altitude-change threshold; if the altitude change does not exceed this threshold, no dive or pitch-up maneuver has occurred, and thus the target maneuver state can be identified by calculating the relative position changes of a small number of feature points.
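Putting the turn and altitude criteria above together, a hedged sketch of the short-window maneuver decision might look like this; the tolerance angle phi and the altitude threshold H are illustrative values, not values disclosed in the patent.

```python
import math

def classify_maneuver(points, phi=math.radians(5.0), H=50.0):
    """Classify three consecutive Cartesian feature points (x, y, z) as
    'straight', 'turn', 'pitch-up' or 'dive'. phi and H are illustrative only."""
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = points[:3]
    k10 = (y1 - y0) / (x1 - x0)          # slope of the first segment
    k21 = (y2 - y1) / (x2 - x1)          # slope of the second segment
    dH = z2 - z0                         # altitude change over the window
    if abs(dH) > H:                      # |ΔH| beyond the threshold: vertical maneuver
        return "pitch-up" if dH > 0 else "dive"
    if k10 - math.tan(phi) <= k21 <= k10 + math.tan(phi):
        return "straight"                # slope stays within the relaxed band
    return "turn"
```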
Specifically, according to the operation steps in S5, performing the convolution and pooling operations twice each is the preferred scheme, so a network structure of convolution layer 1, pooling layer 1, convolution layer 2, pooling layer 2 and a fully connected layer is used herein.
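A hedged Keras-style sketch of this convolution-pooling-convolution-pooling-fully-connected structure is given below; the filter counts, kernel sizes, input sequence length and number of track classes are assumptions, not values taken from the patent.

```python
import tensorflow as tf

def build_track_classifier(seq_len=8, n_features=4, n_classes=4):
    """Convolution layer 1 / pooling layer 1 / convolution layer 2 / pooling layer 2 /
    fully connected layer, operating on plot sequences; all sizes are illustrative."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(seq_len, n_features)),
        tf.keras.layers.Conv1D(32, 3, padding="same", activation="relu"),  # convolution layer 1
        tf.keras.layers.MaxPooling1D(2),                                    # pooling layer 1
        tf.keras.layers.Conv1D(64, 3, padding="same", activation="relu"),  # convolution layer 2
        tf.keras.layers.MaxPooling1D(2),                                    # pooling layer 2
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(n_classes, activation="softmax"),            # fully connected output
    ])
```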
Specifically, according to the operation step in S5, after the convolutional neural network is structured, the model is compiled and a loss function and an optimizer are selected. The loss function measures the predictive ability of the model and defines the gap between the model's prediction and the actual result: the smaller the cross-entropy value, the more similar the probability distributions of the two results. For the classification and regression model of multiple track types constructed here, the cross-entropy loss function is selected, defined as
C = -(1/n) Σ_x [ y·ln a + (1 - y)·ln(1 - a) ].
Wherein x denotes the sample data, y the actual classification and recognition result of the model, a the prediction result, and n the number of samples; the optimizer corrects the parameter values produced during model training by gradient descent and thereby optimizes the loss function. Adam is the optimizer chosen for most current classification and recognition models, and is computed as
m_t = μ·m_(t-1) + (1 - μ)·g_t,
n_t = ν·n_(t-1) + (1 - ν)·g_t^2,
Δθ_t = -η·m̂_t / (√(n̂_t) + ε).
Specifically, according to the operation step in S5, m_t denotes the exponentially decayed mean at time t, n_t the exponentially decayed mean square at time t, and g_t the gradient at time t; m̂_t = m_t/(1 - μ^t) and n̂_t = n_t/(1 - ν^t) are the corresponding bias corrections, and Δθ_t is the parameter change. μ, ν and ε are hyperparameter values, set to 0.9, 0.999 and 10^-8 respectively, and η is the learning rate, set to 0.002. However, Adam has certain convergence problems, so the invention adopts the Adamax algorithm improved from Adam, which adds an infinity norm on the basis of Adam and provides a bounded range for the learning rate, alleviating the poor convergence of adaptive learning algorithms. The specific improvement is as follows:
n_t = max(ν·n_(t-1), |g_t|),
Δθ_t = -η·m̂_t / n_t.
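To make the reconstructed Adam/Adamax formulas concrete, the following plain NumPy sketch performs a single Adamax parameter update with the hyperparameter values quoted above; it illustrates the update rule only and is not the patented training code. In a Keras setting the equivalent choice would be tf.keras.optimizers.Adamax(learning_rate=0.002, beta_1=0.9, beta_2=0.999) together with a categorical cross-entropy loss passed to model.compile.

```python
import numpy as np

def adamax_step(theta, grad, m, n, t, mu=0.9, nu=0.999, eta=0.002, eps=1e-8):
    """One Adamax update: exponential moving average of the gradient (m_t) plus an
    infinity-norm second-moment term (n_t) that bounds the effective learning rate."""
    m = mu * m + (1.0 - mu) * grad           # m_t = mu * m_(t-1) + (1 - mu) * g_t
    n = np.maximum(nu * n, np.abs(grad))     # n_t = max(nu * n_(t-1), |g_t|)
    m_hat = m / (1.0 - mu ** t)              # bias-corrected first moment
    theta = theta - eta * m_hat / (n + eps)  # eps only guards against division by zero
    return theta, m, n
```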
In this embodiment, the automatic generation method for the target tracking equipment on-duty log allows the relevant quantities of each aspect to be evaluated and calculated quickly.
The invention provides a method for automatically generating an on-duty log of target tracking equipment, which comprises the following steps. Step one, extract the target state: the target states mainly include straight-line, turning, diving and pitch-up maneuvers; the target maneuver state over a short time can be identified by calculating the changes of a small number of adjacent feature points, and the straight-line and turning maneuvers are mainly distinguished by the slope of the maneuver direction. Step two, generate a state sequence with the time sliding window. Step three, judge the target threat degree. Step four, calculate the position, speed, acceleration and heading differences: the dive and pitch-up maneuvers focus on the altitude change, calculated as follows. Given the distance r, azimuth a and elevation e of a feature point, conversion from the spherical coordinate system to three-dimensional Cartesian coordinates gives
x = r·cos(e)·sin(a), y = r·cos(e)·cos(a), z = r·sin(e),
and the slope on the horizontal plane is
k_ji = (y_j - y_i) / (x_j - x_i) = tan θ.
Because radar data acquisition contains a certain error and a flying target can hardly perform a perfectly straight maneuver owing to human manipulation and atmospheric airflow, the threshold for deciding whether the target turns must be relaxed during identification: with θ the angle between the slope line and the x-axis, the recognition threshold is set to k_ji ± tan φ, i.e. k_ji - tan φ ≤ k_(j+1)j ≤ k_ji + tan φ. The altitude change is ΔH_ji = z_j - z_i (i, j = 1, 2, 3, ...; j > i): if ΔH_ji > 0 the target is performing a pitch-up maneuver, and if ΔH_ji < 0 a dive maneuver; likewise, for objective reasons, a decision threshold |ΔH_ji| ≤ H is set, where H is the altitude-change threshold, and no dive or pitch-up maneuver is deemed to occur while the altitude change stays within this threshold, so the target maneuver state can be identified by calculating the relative position changes of a small number of feature points. Step five, judge the target maneuver: performing the convolution and pooling operations twice each is the preferred scheme, so a network structure of convolution layer 1, pooling layer 1, convolution layer 2, pooling layer 2 and a fully connected layer is adopted; after the convolutional neural network is structured, the model is compiled and a loss function and an optimizer are selected; the loss function measures the predictive ability of the model and defines the gap between the prediction and the actual result, and the smaller the cross-entropy value, the more similar the probability distributions of the two results; for the classification and regression model of multiple track types constructed here, the cross-entropy loss function is selected,
C = -(1/n) Σ_x [ y·ln a + (1 - y)·ln(1 - a) ],
where x denotes the sample data, y the actual classification and recognition result of the model, a the prediction result, and n the number of samples; the optimizer corrects the parameter values produced during model training by gradient descent and thereby optimizes the loss function; Adam is the optimizer chosen for most current classification and recognition models, and is computed as
m_t = μ·m_(t-1) + (1 - μ)·g_t,
n_t = ν·n_(t-1) + (1 - ν)·g_t^2,
Δθ_t = -η·m̂_t / (√(n̂_t) + ε),
where m_t denotes the exponentially decayed mean at time t, n_t the exponentially decayed mean square at time t, g_t the gradient at time t, m̂_t and n̂_t the corresponding bias corrections, and Δθ_t the parameter change; the hyperparameters μ, ν and ε are set to 0.9, 0.999 and 10^-8 respectively, and the learning rate η is set to 0.002; since Adam has certain convergence problems, the invention adopts the Adamax algorithm improved from Adam, which adds an infinity norm on the basis of Adam and provides a bounded range for the learning rate, alleviating the poor convergence of adaptive learning algorithms, the specific improvement being
n_t = max(ν·n_(t-1), |g_t|),
Δθ_t = -η·m̂_t / n_t.
Step six, generate the log file and the overall system interface: the system combines a radar data management system with automatic discrimination of the radar target state and provides functions such as automatic radar target state discrimination, rapid data storage, timely playback and automatic generation of the on-duty log, reducing the heavy workload and human errors of duty personnel and making the on-duty process clear, accurate and intelligent; research on automatic target state identification is of great significance for rapidly changing battlefield situations and for assisting combat decisions. Meanwhile, the system can serve as an acquisition system for aircraft experimental data: in studying and analyzing aircraft performance, researchers can reduce the number of experiments through system playback, saving the cost of repeated research.
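An illustrative sketch of the timed log-entry generation described above is given below; the record fields and the JSON-lines file format are assumptions made for illustration only.

```python
import json
from datetime import datetime, timezone

def append_log_entry(path, target_id, event, position):
    """Append one on-duty log record (e.g. start, maneuver, descent, turn or
    disappearance of a target) as a JSON line; the schema is illustrative."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "target": target_id,
        "event": event,                      # e.g. "start", "turn", "dive", "disappear"
        "position": position,                # latest (distance, azimuth, elevation) plot
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
```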
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (10)

1. A method for automatically generating an on-duty log of target tracking equipment is characterized by comprising the following steps:
S1, extracting a target state;
S2, generating a state sequence by a time sliding window;
S3, judging the threat degree of the target;
S4, calculating the position, the speed, the acceleration and the heading difference;
S5, judging the target maneuver;
and S6, generating a log file.
2. The method for automatically generating the target tracking device on-duty log as claimed in claim 1, comprising the steps of: according to the operation step in S1, the target states mainly comprise a straight-line maneuver, a turning maneuver, a diving maneuver and a pitch-up maneuver; the target maneuver state over a short time can be identified by calculating the changes of a small number of adjacent feature points, and the straight-line and turning maneuvers are mainly distinguished by the slope of the maneuver direction.
3. The method for automatically generating the target tracking device on-duty log as claimed in claim 1, comprising the steps of: according to the operation step in S4, the dive and pitch-up maneuvers focus on the altitude change, calculated as follows: given the distance r, azimuth a and elevation e of a feature point, conversion from the spherical coordinate system to three-dimensional Cartesian coordinates gives
x = r·cos(e)·sin(a), y = r·cos(e)·cos(a), z = r·sin(e).
4. The method for automatically generating the target tracking device on-duty log as claimed in claim 1, comprising the steps of: according to the operation step in S4, the slope on the horizontal plane is calculated as
k_ji = (y_j - y_i) / (x_j - x_i).
5. The method for automatically generating the target tracking device on-duty log as claimed in claim 1, comprising the steps of: according to the operation step in S1, because radar data collection contains a certain error and a flying target can hardly perform a perfectly straight maneuver owing to human manipulation and atmospheric airflow, the threshold for deciding whether the target turns must be relaxed during identification; from the trigonometric relation k_ji = tan θ, where θ is the angle between the slope line and the x-axis, the recognition threshold can be set to k_ji ± tan φ, i.e. k_ji - tan φ ≤ k_(j+1)j ≤ k_ji + tan φ.
6. The method for automatically generating the target tracking device on-duty log as claimed in claim 1, comprising the steps of: according to the operation step in S4, the pitch-up and dive maneuvers are identified from the altitude change ΔH_ji, calculated as ΔH_ji = z_j - z_i (i, j = 1, 2, 3, ...; j > i); if ΔH_ji > 0, the target is performing a pitch-up maneuver, and if ΔH_ji < 0, the target is performing a dive maneuver; likewise, for objective reasons, a decision threshold should be set, i.e. |ΔH_ji| ≤ H.
7. The method for automatically generating the target tracking device on-duty log as claimed in claim 1, comprising the steps of: according to the operation step in S4, H is a set altitude-change threshold; if the altitude change does not exceed this threshold, no dive or pitch-up maneuver has occurred, and thus the target maneuver state can be identified by calculating the relative position changes of a small number of feature points.
8. The method for automatically generating the target tracking device on-duty log as claimed in claim 1, comprising the steps of: according to the operation steps in S5, performing the convolution and pooling operations twice each is the preferred scheme, so a network structure of convolution layer 1, pooling layer 1, convolution layer 2, pooling layer 2 and a fully connected layer is used herein.
9. The method for automatically generating the target tracking device on-duty log as claimed in claim 1, comprising the steps of: according to the operation steps in S5, after the convolutional neural network is structured, the model is compiled and a loss function and an optimizer are selected; the loss function measures the predictive ability of the model and defines the gap between the model's prediction and the actual result, so that the smaller the cross-entropy value, the more similar the probability distributions of the two results; for the classification and regression model of multiple track types constructed here, the cross-entropy loss function is selected, defined as
C = -(1/n) Σ_x [ y·ln a + (1 - y)·ln(1 - a) ],
wherein x denotes the sample data, y the actual classification and recognition result of the model, a the prediction result, and n the number of samples; the optimizer corrects the parameter values produced during model training by gradient descent and thereby optimizes the loss function; Adam is the optimizer chosen for most current classification and recognition models, and is computed as
m_t = μ·m_(t-1) + (1 - μ)·g_t,
n_t = ν·n_(t-1) + (1 - ν)·g_t^2,
Δθ_t = -η·m̂_t / (√(n̂_t) + ε).
10. The method for automatically generating the target tracking device on-duty log as claimed in claim 1, comprising the steps of: according to the operation step in S5, m_t denotes the exponentially decayed mean at time t, n_t the exponentially decayed mean square at time t, and g_t the gradient at time t; m̂_t = m_t/(1 - μ^t) and n̂_t = n_t/(1 - ν^t) are the corresponding bias corrections, and Δθ_t is the parameter change; μ, ν and ε are hyperparameter values, set to 0.9, 0.999 and 10^-8 respectively, and η is the learning rate, set to 0.002; since Adam has certain convergence problems, the invention adopts the Adamax algorithm improved from Adam, which adds an infinity norm on the basis of Adam and provides a bounded range for the learning rate, alleviating the poor convergence of adaptive learning algorithms, the specific improvement being
n_t = max(ν·n_(t-1), |g_t|),
Δθ_t = -η·m̂_t / n_t.
CN202110757029.4A 2021-07-05 2021-07-05 Automatic generation method for target tracking equipment on-duty log Withdrawn CN113589272A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110757029.4A CN113589272A (en) 2021-07-05 2021-07-05 Automatic generation method for target tracking equipment on-duty log

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110757029.4A CN113589272A (en) 2021-07-05 2021-07-05 Automatic generation method for target tracking equipment on-duty log

Publications (1)

Publication Number Publication Date
CN113589272A true CN113589272A (en) 2021-11-02

Family

ID=78245924

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110757029.4A Withdrawn CN113589272A (en) 2021-07-05 2021-07-05 Automatic generation method for target tracking equipment on-duty log

Country Status (1)

Country Link
CN (1) CN113589272A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115096315A (en) * 2022-06-07 2022-09-23 哈尔滨工业大学 Spacecraft target maneuvering detection method aiming at sparse data
CN116299400A (en) * 2023-05-23 2023-06-23 中国兵器科学研究院 Floating platform position adjustment method and device
CN116299400B (en) * 2023-05-23 2023-08-15 中国兵器科学研究院 Floating platform position adjustment method and device
CN117572376A (en) * 2024-01-16 2024-02-20 烟台大学 Low signal-to-noise ratio weak and small target radar echo signal recognition device and training recognition method
CN117572376B (en) * 2024-01-16 2024-04-19 烟台大学 Low signal-to-noise ratio weak and small target radar echo signal recognition device and training recognition method

Similar Documents

Publication Publication Date Title
CN114048889B (en) Aircraft trajectory prediction method based on long-term and short-term memory network
CN113589272A (en) Automatic generation method for target tracking equipment on-duty log
CN110569793B (en) Target tracking method for unsupervised similarity discrimination learning
CN102799900B (en) Target tracking method based on supporting online clustering in detection
Zhang et al. An intruder detection algorithm for vision based sense and avoid system
CN110018453A (en) Intelligent type recognition methods based on aircraft track feature
CN108596156A (en) A kind of intelligence SAR radar airbound target identifying systems
Xiao et al. Specific emitter identification of radar based on one dimensional convolution neural network
CN114818853B (en) Intention recognition method based on bidirectional gating circulating unit and conditional random field
Yang et al. Aircraft tracking based on fully conventional network and Kalman filter
CN111199243B (en) Aerial target identification method and system based on improved decision tree
Sun et al. Image target detection algorithm compression and pruning based on neural network
CN116954264B (en) Distributed high subsonic unmanned aerial vehicle cluster control system and method thereof
Li et al. A lightweight and explainable data-driven scheme for fault detection of aerospace sensors
Jia et al. Automatic target recognition system for unmanned aerial vehicle via backpropagation artificial neural network
CN113064133B (en) Sea surface small target feature detection method based on time-frequency domain depth network
Guo et al. A remote sensing ship recognition method of entropy-based hierarchical discriminant regression
CN113554072B (en) Flight action dividing method, computer readable storage medium and terminal equipment
CN114326821B (en) Unmanned aerial vehicle autonomous obstacle avoidance system and method based on deep reinforcement learning
CN115661576A (en) Method for identifying airplane group intention under sample imbalance
CN112698666B (en) Aircraft route optimization method based on meteorological grid
Pan et al. Identification of aircraft wake vortex based on VGGNet
Wu et al. Real-time compressive tracking with motion estimation
Jia et al. An application that uses machine learning algorithms to help drones distinguish between transport aircraft and fighter jets
CN114298183B (en) Intelligent recognition method for flight actions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20211102