CN115859056B - Unmanned aerial vehicle target detection method based on neural network - Google Patents

Unmanned aerial vehicle target detection method based on neural network

Info

Publication number
CN115859056B
CN115859056B CN202211710125.4A
Authority
CN
China
Prior art keywords
matrix
unmanned aerial
aerial vehicle
detection
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211710125.4A
Other languages
Chinese (zh)
Other versions
CN115859056A (en)
Inventor
扶明
何鑫
韩乃军
王山
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huanuo Xingkong Technology Co ltd
Original Assignee
Huanuo Xingkong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huanuo Xingkong Technology Co ltd filed Critical Huanuo Xingkong Technology Co ltd
Priority to CN202211710125.4A priority Critical patent/CN115859056B/en
Publication of CN115859056A publication Critical patent/CN115859056A/en
Application granted granted Critical
Publication of CN115859056B publication Critical patent/CN115859056B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application discloses an unmanned aerial vehicle target detection method based on a neural network, which comprises the following steps: an offline training stage: constructing a training data set, constructing a neural network structure, and importing the training data set into the neural network for model training to form a detection model; the training data set comprises a plurality of labeled sample data; a single sample datum comprises a three-dimensional time-domain matrix, a three-dimensional time-frequency matrix and a three-dimensional IMF matrix, together with their labels; and an online detection stage: performing actual detection by using the unmanned aerial vehicle detection model obtained through training; in the actual detection process the acquired signals are formed into three feature matrix sets, the three feature matrix sets are sent into the detection model for analysis and processing, and the unmanned aerial vehicle information is output. The application has the advantages of simple principle, simple and convenient operation, wide application range and the like.

Description

Unmanned aerial vehicle target detection method based on neural network
Technical Field
The application mainly relates to the technical field of unmanned aerial vehicle detection, in particular to an unmanned aerial vehicle target detection method based on a neural network.
Background
In recent years, because of their structural miniaturization and high mobility, unmanned aerial vehicles have been applied in many fields such as geographical mapping, emergency rescue and video shooting, and with continued technological breakthroughs the number of consumer-grade civil micro unmanned aerial vehicles keeps increasing. However, along with the development of the unmanned aerial vehicle technology and industry, management and control of unmanned aerial vehicles has become difficult, unauthorized "black flight" events occur frequently, and the safety accidents and leaks of confidential information they cause are numerous. Effective monitoring and control of unmanned aerial vehicles is therefore extremely urgent.
Unmanned aerial vehicles are now used in many fields, and some specific areas, such as mountain forest areas and urban areas, are typical deployment environments. Although mountain forest areas have little interference, the scene is wide, so the unmanned aerial vehicle signal is weak at long distances; in urban areas the complex electromagnetic environment causes extremely strong interference, and traditional radio monitoring technology is very limited in its ability to detect and identify small civil unmanned aerial vehicles. If Wi-Fi signals and unmanned aerial vehicle signals are mixed together, traditional methods cannot separate them well. In low signal-to-noise-ratio environments, the traditional approach of manual feature engineering struggles to extract the signal's characteristic parameters correctly, so the accuracy is not ideal, while a large amount of manpower and material resources is consumed; the gains do not justify the cost.
In addition, existing neural-network-based methods only classify and identify the intercepted radio signals; they cannot mine information such as the center frequency and bandwidth of the unmanned aerial vehicle signals within the radio signals for further processing, and they cannot classify and identify each unmanned aerial vehicle target well when several unmanned aerial vehicles are present at the same time, so their practical application is limited.
The traditional manual-feature-engineering approach mainly intercepts radio signals in the airspace and analyzes whether unmanned aerial vehicle signals, chiefly remote control signals or image transmission signals, are present among them. Whether a signal belongs to an unmanned aerial vehicle is judged by analyzing characteristic parameters such as frequency, bandwidth, frequency-hopping period and signal duration, and the model is identified by comparing these parameters with the unmanned aerial vehicle signal features stored in a feature library. However, this approach does not work well in complex electromagnetic environments such as urban areas: because the environmental signal-to-noise ratio is low, the unmanned aerial vehicle signal is mixed with Wi-Fi signals, noise and other interference, or even submerged in them, so the signal features are difficult to extract or the extracted features are polluted by noise, and the final detection accuracy drops. Meanwhile, manually extracted features cannot be guaranteed to capture the unmanned aerial vehicle signal characteristics comprehensively, correctly and fully, which limits further improvement of the detection accuracy, and manually designing feature parameters is time-consuming, labor-intensive and may even introduce wrong parameters. Common neural-network-based methods can identify whether an unmanned aerial vehicle is present and its model, but cannot obtain the frequency, bandwidth and other information of the identified unmanned aerial vehicle signals, and cannot detect several unmanned aerial vehicles separately and simultaneously, which restricts practical application.
Disclosure of Invention
The technical problem to be solved by the application is as follows: aiming at the technical problems in the prior art, the application provides an unmanned aerial vehicle target detection method based on a neural network which is simple in principle, simple and convenient to operate, and wide in application range.
In order to solve the technical problems, the application adopts the following technical scheme:
an unmanned aerial vehicle target detection method based on a neural network, comprising:
offline training stage: constructing a training data set, constructing a neural network structure, and importing the training data set into the neural network for model training to form a detection model; the training data set comprises a plurality of labeled sample data; a single sample datum comprises a three-dimensional time-domain matrix, a three-dimensional time-frequency matrix and a three-dimensional IMF matrix, together with their labels;
online detection stage: performing actual detection by using the unmanned aerial vehicle detection model obtained through training; in the actual detection process the acquired signals are formed into three feature matrix sets, and the three feature matrix sets are sent into the detection model for analysis and processing, so that the unmanned aerial vehicle information is output.
As a further improvement of the process of the application: the offline training phase includes preparing a training data set: collecting signal raw data and preprocessing the raw data.
As a further improvement of the process of the application: the collecting signal raw data includes:
raw data of unmanned aerial vehicle image transmission signals: selecting different unmanned aerial vehicles to fly in a detection area of unmanned aerial vehicle detection equipment, and collecting original data of unmanned aerial vehicle image transmission signals through the detection equipment;
background environmental data: and arranging unmanned aerial vehicle detection equipment in different environments, and then repeating the collection of the original data of unmanned aerial vehicle image transmission signals to form background environment data.
As a further improvement of the process of the application: the preprocessing of the original data is to preprocess the collected signal original data, and the preprocessing comprises the following steps:
step S10: analyzing the original data and cleaning dirty data;
step S20: performing iterative decomposition on the two-dimensional time-domain signal matrix P_T using empirical mode decomposition;
step S30: performing a K-point Fourier transform along the fast time dimension of the two-dimensional time-domain data matrix P_T, repeated H times, so that the fast time dimension of the matrix is converted from the time domain to the frequency domain and the frequency-domain signal features are extracted; this yields a K×H time-frequency matrix, i.e. the time-frequency waterfall plot of the signal, P_TF;
step S40: obtaining a three-dimensional time-domain matrix, a three-dimensional time-frequency matrix and a three-dimensional IMF matrix as a single input sample and labeling it; the label comprises the actual position of the unmanned aerial vehicle, the model type, and the frequency range during equipment detection.
As a further improvement of the process of the application: the step S20 includes:
step S201: finding the local maximum and minimum extreme points of the fast time dimension of matrix P_T, which are needed to construct its upper and lower envelopes;
step S202: connecting the maximum extreme points to form the upper envelope and the minimum extreme points to form the lower envelope, then fitting the envelope curves by cubic spline interpolation; the upper and lower envelopes of the fast time dimension of matrix P_T are averaged to obtain a mean envelope;
step S203: subtracting the mean envelope from the fast time dimension of matrix P_T to obtain an intermediate signal; if the intermediate signal satisfies that the absolute difference between its number of zero crossings and its number of extreme points is smaller than or equal to 1 and the mean of the envelopes of its local maxima and minima equals 0, the intermediate signal is an eigenmode function vector; the IMF vector is then subtracted from the fast time dimension of matrix P_T and the above steps are repeated to obtain further IMF vectors, giving the eigenmode function vectors IMF_i of the fast time dimension, wherein i = 1, 2, 3, ..., N;
step S204: decomposing all fast time dimensions of matrix P_T to obtain an (S×N)×H IMF matrix P_IMF.
As a further improvement of the process of the application: the step S40 includes:
the unmanned aerial vehicle detection equipment receives from all directions over 0-360 degrees, using X detection channels uniformly distributed over the 0-360 degree range, so that raw signal data of X channels are obtained at the same time, wherein ch = 0, 1, 2, ..., X; for each channel at the same moment this yields a two-dimensional time-domain signal matrix of size S×H, an IMF matrix of size (S×N)×H, and a time-frequency matrix of size K×H;
for the time-frequency matrices, N channels are extracted according to the principle of selecting non-adjacent channels, forming a three-dimensional matrix of size N×K×H;
for the IMF matrices, N channels are extracted according to the principle of selecting non-adjacent channels that differ from the channels chosen for the time-frequency matrices, giving N matrices of size (S×N)×H; the IMF_i vectors corresponding to the N matrices are then weighted and summed with weight coefficients a_n, finally obtaining a three-dimensional matrix of size N×S×H, wherein i = 1, 2, 3, ..., N and n = 1, 2, 3, ..., N;
for the time-domain signal matrices, N channels are extracted according to the principle of using the same channels as the time-frequency matrices, forming a three-dimensional matrix of size N×S×H.
As a further improvement of the process of the application: the neural network structure built in the off-line training stage is a single-stage detection neural network and comprises an input layer, a backbone network, a connection network and a detection head network; wherein:
the input layer is used for receiving the data to be detected and comprises three branches: IN_1 corresponds to the three-dimensional time-domain matrix, IN_2 to the three-dimensional time-frequency matrix, and IN_3 to the three-dimensional IMF matrix; IN_2 is connected to the backbone network, while IN_1 and IN_3 are cross-domain connected to the detection head network.
The main network adopts a convolutional neural network for extracting characteristics of input data;
the connection network adopts a feature pyramid network to carry out multi-scale fusion and multiplexing on the features extracted by the backbone network, and the detection head network is used for final feature learning and then outputting detection information and radio signal parameter information of an unmanned aerial vehicle target;
IN_1 and IN_3, which are directly connected to the detection head network, are downsampled by an L-layer convolutional neural network so that their feature map sizes match those arriving at the detection head from IN_2; they are then weighted and multiplied with the classification feature maps of the detection head network in a channel attention manner, so that the extracted time-domain features and the intrinsic mode features of the signal at different time scales are fused with the features extracted by the backbone network from the IN_2 input.
As a further improvement of the process of the application: the training process by using the neural network in the offline training phase comprises the following steps:
sending the constructed training sample set, namely three feature matrix sets of signals, into a built neural network for training;
through fitting the actual data, inputting the constructed verification sample set into a neural network for verification after each training period is finished, and monitoring the accuracy of the network model in real time;
inputting the constructed test sample set into a neural network for testing after training is finished, and checking the performance of the network model;
and if the network model meets the expected requirement, obtaining a weight parameter, namely a detection model of the unmanned aerial vehicle target through conversion.
As a further improvement of the process of the application: the online detection stage comprises the following steps:
step S100: loading the unmanned aerial vehicle detection model obtained through training into the actual detection equipment, and obtaining all radio signals within the detection range;
step S200: preprocessing the intercepted radio signals to obtain three feature matrix sets of the signals;
step S300: and sending the three feature matrix sets into a detection model for operation reasoning, outputting whether the unmanned aerial vehicle exists or not, and outputting the information of each unmanned aerial vehicle if the unmanned aerial vehicle exists.
As a further improvement of the process of the application: the step S200 includes:
step S2001: intercepting, rearranging and multi-channel weighting the original time-domain signal data to obtain the three-dimensional time-domain matrix;
step S2002: performing fast/slow-time-dimension time-frequency conversion of the two-dimensional time-domain signal matrix using the FFT to obtain the three-dimensional time-frequency matrix;
step S2003: extracting modal features of the signal at different time scales from the two-dimensional time-domain signal matrix using empirical mode decomposition, finally obtaining the three-dimensional IMF matrix.
Compared with the prior art, the application has the advantages that:
1. The unmanned aerial vehicle target detection method based on the neural network of the application is simple in principle, simple and convenient to operate and wide in application range, and addresses the problems of traditional unmanned aerial vehicle identification methods, such as the difficulty of extracting features in low signal-to-noise-ratio environments and the low detection accuracy caused by incomplete feature extraction.
2. The unmanned aerial vehicle target detection method based on the neural network adopts the neural network technology, comprises a sample labeling method, a target detection neural network structure, an unmanned aerial vehicle end-to-end target detection scheme and other key technologies, can better extract unmanned aerial vehicle signal characteristics in a low signal-to-noise ratio environment or a weak signal environment, and improves detection accuracy.
3. Aiming at the problem that traditional manual feature engineering cannot guarantee that the unmanned aerial vehicle signal features are extracted comprehensively, correctly and fully, the unmanned aerial vehicle target detection method based on the neural network improves detection performance through time-frequency transformation, multi-modal decomposition and the neural network, avoids the influence on final detection performance of errors introduced by manually designed features, and also avoids the consumption of manpower and material resources, thereby saving resources.
4. The unmanned aerial vehicle target detection method based on the neural network can not only identify whether unmanned aerial vehicles are present in the intercepted radio signals and classify their models, but also automatically and directly obtain information such as the center frequency and bandwidth of each unmanned aerial vehicle signal when several unmanned aerial vehicles are present. Through the offline training and online detection scheme, information such as the model type and the radio parameters of the unmanned aerial vehicle target is finally output, achieving good detection in harsh environments.
Drawings
FIG. 1 is a schematic diagram of the method of the present application.
Detailed Description
The application will be described in further detail with reference to the drawings and the specific examples.
As shown in fig. 1, the unmanned aerial vehicle target detection method based on the neural network of the application comprises the following steps:
offline training stage: constructing a training data set, constructing a neural network structure, and importing the training data set into the neural network for model training to form a detection model; the training data set comprises a plurality of labeled sample data; a single sample datum comprises a three-dimensional time-domain matrix, a three-dimensional time-frequency matrix and a three-dimensional IMF matrix, together with their labels;
online detection stage: performing actual detection by using the unmanned aerial vehicle detection model obtained through training; in the actual detection process the acquired signals are formed into three feature matrix sets, and the three feature matrix sets are sent into the detection model for analysis and processing, so that the unmanned aerial vehicle information is output.
In a specific application example, in the online detection stage, whether an unmanned aerial vehicle is present is judged through the analysis and processing; if so, information such as the model, frequency, bandwidth and time width of each unmanned aerial vehicle is output, finally achieving effective target detection even when the unmanned aerial vehicle signal is weak, the signal-to-noise ratio is low, and several unmanned aerial vehicles may be present at the same time.
In a specific application example, in an offline training phase, preparation of a training data set is first performed, including: collecting signal raw data and preprocessing the raw data.
The collected signal raw data may include, but is not limited to, the following:
raw data of unmanned aerial vehicle image transmission signals: in a detection area of unmanned aerial vehicle detection equipment, different unmanned aerial vehicles are selected to fly in different directions of 0-360 degrees and at different distances of 0-R kilometers, and original data of unmanned aerial vehicle image transmission signals are collected through the detection equipment.
Background environmental data: unmanned aerial vehicle detection equipment is arranged in different environments, including but not limited to typical scenes such as suburban fields, mountain areas, forests, urban high-rise buildings and the like, and then collection of original data of unmanned aerial vehicle image transmission signals is repeated to enrich background environment data.
The preprocessing of the raw data is to preprocess the collected signal raw data, which may specifically include, but is not limited to, the following:
step S10: analyzing the original data and cleaning dirty data;
signal data exhibiting problems caused by the receiving equipment hardware, such as signal clutter, abnormal rises of the background noise, abnormal unmanned aerial vehicle signals and strong horizontal-line interference from close-range remote control signals, are removed;
useless data are also cleaned: signal data in which the unmanned aerial vehicle signal is completely submerged by excessive interference, in which the unmanned aerial vehicle flies so far beyond the range limit of the receiving device that its signal cannot be received, or in which the unmanned aerial vehicle signal completely overlaps with Wi-Fi signals or base station signals, are filtered out.
The cleaned data are then arranged to obtain x_t; according to the sampling frequency f_0, the total sampling duration T and the data magnitude M, a fixed time-width step S is set for the raw data and sliding-window interception is performed, with the sliding-window width equal to the step S; the H intercepted signal segments are rearranged along the time dimension to obtain a fast/slow-time two-dimensional time-domain signal matrix P_T of width S and height H, as sketched below;
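A minimal sketch of this segmentation step is given below, assuming NumPy; the sampling rate, record length and window width used in the example are illustrative values, and segment_signal is a hypothetical helper, not the patented implementation:

```python
import numpy as np

def segment_signal(x_t: np.ndarray, S: int) -> np.ndarray:
    """Slice the cleaned 1-D record x_t into H non-overlapping windows of
    width S (window width equal to the step S) and stack them so the result
    has the fast-time dimension of length S and the slow-time dimension H."""
    H = len(x_t) // S                     # number of complete segments
    windows = x_t[:H * S].reshape(H, S)   # shape (H, S): one row per window
    return windows.T                      # shape (S, H), matching the S x H size of P_T

# Illustrative values: 0.1 s of data at an assumed f_0 = 10 MS/s, S = 1024 samples.
f_0, T, S = 10_000_000, 0.1, 1024
x_t = np.random.randn(int(f_0 * T)).astype(np.float32)
P_T = segment_signal(x_t, S)
print(P_T.shape)   # (1024, 976) -> S x H
```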
step S20: the two-dimensional time-domain signal matrix P_T is iteratively decomposed using empirical mode decomposition;
the detailed process comprises the following steps:
step S201: finding the local maximum and minimum extreme points of the fast time dimension of matrix P_T, which are needed to construct its upper and lower envelopes;
step S202: connecting the maximum extreme points to form the upper envelope and the minimum extreme points to form the lower envelope, then fitting the envelope curves by cubic spline interpolation; the upper and lower envelopes of the fast time dimension of matrix P_T are averaged to obtain a mean envelope;
step S203: subtracting the mean envelope from the fast time dimension of matrix P_T gives an intermediate signal; if the intermediate signal satisfies that the absolute difference between its number of zero crossings and its number of extreme points is smaller than or equal to 1 and the mean of the envelopes of its local maxima and minima equals 0, the intermediate signal is an intrinsic mode function (Intrinsic Mode Function, IMF) vector; the IMF vector can then be subtracted from the fast time dimension of matrix P_T and the above steps repeated to obtain further IMF vectors, giving the eigenmode function vectors IMF_i of the fast time dimension (i = 1, 2, 3, ..., N);
step S204: decomposing all fast time dimensions of matrix P_T yields an (S×N)×H IMF matrix P_IMF.
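A highly simplified sketch of the sifting loop in steps S201-S204 is shown below, assuming SciPy's cubic-spline interpolation for the envelope fitting; the function name, the fixed IMF count and the iteration cap are illustrative assumptions, and the usual stopping-criterion refinements are omitted:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_imfs(row: np.ndarray, n_imfs: int = 5, max_sift: int = 50) -> np.ndarray:
    """Decompose one fast-time vector (one length-S column of P_T) into its
    first n_imfs intrinsic mode functions by EMD sifting (steps S201-S204)."""
    t = np.arange(len(row))
    residual, imfs = row.astype(float).copy(), []
    for _ in range(n_imfs):
        h = residual.copy()
        for _ in range(max_sift):
            maxima = argrelextrema(h, np.greater)[0]          # S201: local maxima
            minima = argrelextrema(h, np.less)[0]             # S201: local minima
            if len(maxima) < 4 or len(minima) < 4:
                break                                         # too few extrema to fit envelopes
            upper = CubicSpline(maxima, h[maxima])(t)         # S202: upper envelope
            lower = CubicSpline(minima, h[minima])(t)         # S202: lower envelope
            h_new = h - (upper + lower) / 2.0                 # S203: subtract the mean envelope
            zero_cross = int(np.sum(np.diff(np.sign(h_new)) != 0))
            n_extrema = len(argrelextrema(h_new, np.greater)[0]) + \
                        len(argrelextrema(h_new, np.less)[0])
            h = h_new
            if abs(zero_cross - n_extrema) <= 1:              # S203: IMF condition met
                break
        imfs.append(h)
        residual = residual - h                               # continue sifting on the residue
    return np.vstack(imfs)                                    # shape (n_imfs, S)
```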
Step S30: for two-dimensional time domain data matrix P T Performing Fourier transform of K points in the fast time dimension of (2), performing H times, converting the matrix fast time dimension from time domain to frequency domain to extract signal frequency domain characteristics, and finally obtaining KXH time-frequency matrix, i.e. time-frequency waterfall pattern P of signals TF
Step S40: obtaining a three-dimensional time domain matrixThree-dimensional time-frequency matrix->Three-dimensional IMF matrix->As a single input sample, the unmanned aerial vehicle is marked, and mainly comprises the actual existing position and model type of the unmanned aerial vehicle and the frequency range during equipment detection.
To receive from all directions over 0-360 degrees, the unmanned aerial vehicle detection equipment uses X detection channels uniformly distributed over the 0-360 degree range, so raw signal data of X channels can be obtained at the same time (ch = 0, 1, 2, ..., X). By the steps above, the two-dimensional time-domain signal matrix of each channel at the same moment, of size S×H, its IMF matrix, of size (S×N)×H, and its time-frequency matrix, of size K×H, can thus be obtained.
For the time-frequency matrices, N channels are extracted according to the principle of selecting non-adjacent channels, forming a three-dimensional matrix of size N×K×H.
For the IMF matrices, N channels are extracted according to the principle of selecting non-adjacent channels that differ from the channels chosen for the time-frequency matrices, giving N matrices of size (S×N)×H; the IMF_i (i = 1, 2, 3, ..., N) vectors corresponding to the N matrices are then weighted and summed with weight coefficients a_n (n = 1, 2, 3, ..., N), finally obtaining a three-dimensional matrix of size N×S×H.
For the time-domain signal matrices, N channels are extracted according to the principle of using the same channels as the time-frequency matrices, forming a three-dimensional matrix of size N×S×H.
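The assembly of the three network inputs from the per-channel matrices can be pictured as follows; the channel indices, the weights a_n and all array sizes are placeholders chosen for illustration and are not fixed by the patent:

```python
import numpy as np

X, S, H, K, N = 8, 1024, 976, 1024, 4         # assumed sizes; the patent reuses N for channels and IMFs
P_T_all   = np.random.randn(X, S, H)          # per-channel two-dimensional time-domain matrices (S x H)
P_TF_all  = np.random.randn(X, K, H)          # per-channel time-frequency matrices (K x H)
P_IMF_all = np.random.randn(X, N, S, H)       # per-channel IMF stacks: N IMFs of size S x H each

tf_channels  = [0, 2, 4, 6]                   # N non-adjacent channels for the time-frequency input
imf_channels = [1, 3, 5, 7]                   # N non-adjacent channels, disjoint from tf_channels
a_n = np.array([0.4, 0.3, 0.2, 0.1])          # assumed weight coefficients a_n

IN_2 = P_TF_all[tf_channels]                               # N x K x H three-dimensional time-frequency matrix
IN_3 = np.tensordot(a_n, P_IMF_all[imf_channels], axes=1)  # weighted channel sum -> N x S x H IMF matrix
IN_1 = P_T_all[tf_channels]                                # same channels as IN_2 -> N x S x H time-domain matrix
print(IN_1.shape, IN_2.shape, IN_3.shape)
```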
In a specific application example, the neural network structure built in the offline training stage is a single-stage detection neural network consisting of four parts: an input layer, a backbone network, a connection network and a detection head network.
The input layer is used for receiving the data to be detected and comprises three branches: IN_1 corresponds to the three-dimensional time-domain matrix, IN_2 to the three-dimensional time-frequency matrix, and IN_3 to the three-dimensional IMF matrix; IN_2 is connected to the backbone network, while IN_1 and IN_3 are cross-domain connected to the detection head network.
The backbone network adopts a convolutional neural network and is used for extracting features from the input data;
the connection network adopts a feature pyramid network to perform multi-scale fusion and multiplexing of the features extracted by the backbone network, and the detection head network is used for final feature learning and then outputs the detection information and the radio signal parameter information of the unmanned aerial vehicle target;
in addition to the backbone network feature input, IN_1 and IN_3 are directly connected to the detection head network and downsampled by a purpose-designed L-layer convolutional neural network so that their feature map sizes match those arriving at the detection head from IN_2; they are finally weighted and multiplied with the classification feature maps of the detection head network in a channel attention manner, so that the extracted time-domain features and the intrinsic mode features of the signal at different time scales are fused with the features the backbone network extracts from the IN_2 input, improving the final detection accuracy. One possible realization is sketched below.
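The following PyTorch sketch illustrates one way such a cross-domain connection and channel-attention fusion could be built; the layer count L, the channel widths, the activation choices and the squeeze-style attention are assumptions for illustration, not the network defined by the patent:

```python
import torch
import torch.nn as nn

class CrossDomainFusion(nn.Module):
    """Downsample an IN_1 or IN_3 branch with L conv layers and use it as a
    channel-attention weight on the detection head's classification features."""
    def __init__(self, in_ch: int, head_ch: int, L: int = 3):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(L):                                   # L-layer downsampling CNN
            layers += [nn.Conv2d(ch, head_ch, 3, stride=2, padding=1),
                       nn.BatchNorm2d(head_ch), nn.SiLU()]
            ch = head_ch
        self.down = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)                  # squeeze to one value per channel
        self.gate = nn.Sequential(nn.Conv2d(head_ch, head_ch, 1), nn.Sigmoid())

    def forward(self, branch: torch.Tensor, head_feat: torch.Tensor) -> torch.Tensor:
        x = self.down(branch)                                # match the IN_2 feature-map scale
        w = self.gate(self.pool(x))                          # channel-attention weights
        return head_feat * w                                 # weighted multiplication / fusion

# Usage with assumed shapes: an IN_1 branch of N = 4 channels and a 256-channel head feature map.
fusion = CrossDomainFusion(in_ch=4, head_ch=256, L=3)
fused = fusion(torch.randn(1, 4, 256, 256), torch.randn(1, 256, 32, 32))
print(fused.shape)   # torch.Size([1, 256, 32, 32])
```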
In a specific application example, in an offline training stage, the training process by using the neural network includes:
sending the constructed training sample set, namely three feature matrix sets of signals, into a built neural network for training;
through fitting the actual data, inputting the constructed verification sample set into a neural network for verification after each training period is finished, and monitoring the accuracy of the network model in real time;
after training, the constructed test sample set is input into a neural network for testing, and the performance of the network model, such as accuracy, recall, generalization and the like, is checked.
If the network model meets the expected requirement, a weight parameter, namely a detection model of the unmanned aerial vehicle target, can be obtained through conversion.
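For illustration only, a generic offline training loop of the kind described above (per-epoch validation, a final test pass, export of the weights) might look like the following; the model interface (compute_loss and accuracy helpers), the optimizer, the batch size and the file name are all assumptions:

```python
import torch
from torch.utils.data import DataLoader

def train_offline(model, train_set, val_set, test_set, epochs: int = 100):
    """Train on (IN_1, IN_2, IN_3, target) samples, validate after every epoch,
    test once at the end, then export the weights as the detection model."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=16)
    test_loader = DataLoader(test_set, batch_size=16)

    def evaluate(loader):
        model.eval()
        with torch.no_grad():
            scores = [float(model.accuracy(model(i1, i2, i3), t)) for i1, i2, i3, t in loader]
        return sum(scores) / len(scores)

    for epoch in range(epochs):
        model.train()
        for in1, in2, in3, target in train_loader:
            loss = model.compute_loss(model(in1, in2, in3), target)   # assumed detection-loss helper
            opt.zero_grad(); loss.backward(); opt.step()
        print(f"epoch {epoch}: validation accuracy {evaluate(val_loader):.3f}")   # real-time monitoring

    print(f"test accuracy {evaluate(test_loader):.3f}")                # final performance check
    torch.save(model.state_dict(), "uav_detection_model.pt")           # converted weights = detection model
```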
In a specific application example, the online detection stage includes the following specific procedures:
step S100: loading the unmanned aerial vehicle detection model obtained through training into the actual detection equipment, and obtaining all radio signals within the detection range;
namely: the unmanned aerial vehicle detection equipment is used to monitor the detection area, whether an open area such as a mountain region or a low signal-to-noise-ratio environment such as an urban area, and all radio signals within a preset frequency range are intercepted;
step S200: preprocessing the intercepted radio signals to obtain three feature matrix sets of the signals;
step S2001: intercepting, rearranging and multi-channel weighting the original time-domain signal data to obtain the three-dimensional time-domain matrix;
step S2002: performing fast/slow-time-dimension time-frequency conversion of the two-dimensional time-domain signal matrix using the FFT to obtain the three-dimensional time-frequency matrix;
step S2003: extracting modal features of the signal at different time scales from the two-dimensional time-domain signal matrix using empirical mode decomposition, finally obtaining the three-dimensional IMF matrix.
step S300: the three feature matrix sets are sent into the detection model for inference; whether an unmanned aerial vehicle is present is output and, if so, the model, frequency, bandwidth, time width and other information of each unmanned aerial vehicle are output, finally achieving effective target detection even when the unmanned aerial vehicle signal is weak, the signal-to-noise ratio of the environment is low, and several unmanned aerial vehicles may be present at the same time.
step S400: the detected signals are displayed on the host computer or processed further.
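Tying the preprocessing sketches together, the online stage reduces to the short pipeline below; segment_signal, time_frequency_matrix and sift_imfs refer to the illustrative helpers sketched earlier, the single-record shapes ignore the multi-channel selection and weighting for brevity, and model stands for the trained detection model loaded on the device:

```python
import numpy as np

def detect_uav(raw_signal: np.ndarray, model, S: int = 1024, K: int = 1024):
    """Online detection: build the three feature matrix sets (steps S2001-S2003)
    from one intercepted record and run the trained detection model (step S300)."""
    P_T = segment_signal(raw_signal, S)                        # step S2001: intercept and rearrange
    P_TF = time_frequency_matrix(P_T, K)                       # step S2002: fast/slow-time FFT
    P_IMF = np.stack([sift_imfs(P_T[:, h]) for h in range(P_T.shape[1])], axis=-1)  # step S2003: EMD
    in_1, in_2, in_3 = P_T[None], P_TF[None], P_IMF[None]      # add a leading channel axis
    return model(in_1, in_2, in_3)                             # step S300: presence, model, frequency, bandwidth, time width
```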
According to the unmanned aerial vehicle target detection method based on the neural network of the application, the features most useful for detection can be found automatically by learning from actual data, without excessive manual intervention; this avoids the incompleteness of manually designed features and the errors introduced by manual work, saves manpower and material resources, frees up resources, and at the same time yields good robustness when environmental interference is strong.
Furthermore, the method and device can not only identify whether an unmanned aerial vehicle is present in the intercepted radio signal and classify its model, but also automatically and directly obtain information such as the center frequency and bandwidth of the unmanned aerial vehicle signal within the intercepted radio signal; they can also handle the case where several unmanned aerial vehicles are present at the same time and obtain the information of each unmanned aerial vehicle signal separately, which benefits practical application and strengthens the monitoring of unmanned aerial vehicles.
It will be appreciated by those skilled in the art that the above-described embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above is only a preferred embodiment of the present application, and the protection scope of the present application is not limited to the above examples, and all technical solutions belonging to the concept of the present application belong to the protection scope of the present application. It should be noted that modifications and adaptations to the application without departing from the principles thereof are intended to be within the scope of the application as set forth in the following claims.

Claims (10)

1. The unmanned aerial vehicle target detection method based on the neural network is characterized by comprising the following steps of:
offline training stage: constructing a training data set, constructing a neural network structure, and importing the training data set into the neural network for model training to form a detection model; the training data set comprises a plurality of labeled sample data; a single sample datum comprises a three-dimensional time-domain matrix, a three-dimensional time-frequency matrix and a three-dimensional IMF matrix, together with their labels;
online detection stage: performing actual detection by using the unmanned aerial vehicle detection model obtained through training; in the actual detection process the acquired signals are formed into three feature matrix sets, and the three feature matrix sets are sent into the detection model for analysis and processing, so that the unmanned aerial vehicle information is output.
2. The unmanned aerial vehicle target detection method of claim 1, wherein the offline training phase comprises preparing a training dataset: collecting signal raw data and preprocessing the raw data.
3. The unmanned aerial vehicle target detection method of claim 2, wherein the collecting signal raw data comprises:
raw data of unmanned aerial vehicle image transmission signals: selecting different unmanned aerial vehicles to fly in a detection area of unmanned aerial vehicle detection equipment, and collecting original data of unmanned aerial vehicle image transmission signals through the detection equipment;
background environmental data: and arranging unmanned aerial vehicle detection equipment in different environments, and then repeating the collection of the original data of unmanned aerial vehicle image transmission signals to form background environment data.
4. The unmanned aerial vehicle target detection method based on the neural network according to claim 2, wherein the preprocessing of the raw data is preprocessing of the collected signal raw data, comprising:
step S10: analyzing the original data and cleaning dirty data;
step S20: performing iterative decomposition on the two-dimensional time-domain signal matrix P_T using empirical mode decomposition;
step S30: performing a K-point Fourier transform along the fast time dimension of the two-dimensional time-domain data matrix P_T, repeated H times, so that the fast time dimension of the matrix is converted from the time domain to the frequency domain and the frequency-domain signal features are extracted; this yields a K×H time-frequency matrix, i.e. the time-frequency waterfall plot of the signal, P_TF;
step S40: obtaining a three-dimensional time-domain matrix, a three-dimensional time-frequency matrix and a three-dimensional IMF matrix as a single input sample and labeling it; the label comprises the actual position of the unmanned aerial vehicle, the model type, and the frequency range during equipment detection.
5. The unmanned aerial vehicle target detection method based on the neural network according to claim 4, wherein the step S20 comprises:
step S201: finding the local maximum and minimum extreme points of the fast time dimension of matrix P_T, which are needed to construct its upper and lower envelopes;
step S202: connecting the maximum extreme points to form the upper envelope and the minimum extreme points to form the lower envelope, then fitting the envelope curves by cubic spline interpolation; the upper and lower envelopes of the fast time dimension of matrix P_T are averaged to obtain a mean envelope;
step S203: subtracting the mean envelope from the fast time dimension of matrix P_T to obtain an intermediate signal; if the intermediate signal satisfies that the absolute difference between its number of zero crossings and its number of extreme points is smaller than or equal to 1 and the mean of the envelopes of its local maxima and minima equals 0, the intermediate signal is an eigenmode function vector; the IMF vector is then subtracted from the fast time dimension of matrix P_T and the above steps are repeated to obtain further IMF vectors, giving the eigenmode function vectors IMF_i of the fast time dimension, wherein i = 1, 2, 3, ..., N;
step S204: decomposing all fast time dimensions of matrix P_T to obtain an (S×N)×H IMF matrix P_IMF.
6. The unmanned aerial vehicle target detection method based on the neural network according to claim 4, wherein the step S40 comprises:
the unmanned aerial vehicle detection equipment receives from all directions over 0-360 degrees, using X detection channels uniformly distributed over the 0-360 degree range, so that raw signal data of X channels are obtained at the same time, wherein ch = 0, 1, 2, ..., X; for each channel at the same moment this yields a two-dimensional time-domain signal matrix of size S×H, an IMF matrix of size (S×N)×H, and a time-frequency matrix of size K×H;
for the time-frequency matrices, N channels are extracted according to the principle of selecting non-adjacent channels, forming a three-dimensional matrix of size N×K×H;
for the IMF matrices, N channels are extracted according to the principle of selecting non-adjacent channels that differ from the channels chosen for the time-frequency matrices, giving N matrices of size (S×N)×H; the IMF_i vectors corresponding to the N matrices are then weighted and summed with weight coefficients a_n, finally obtaining a three-dimensional matrix of size N×S×H, wherein i = 1, 2, 3, ..., N and n = 1, 2, 3, ..., N;
for the time-domain signal matrices, N channels are extracted according to the principle of using the same channels as the time-frequency matrices, forming a three-dimensional matrix of size N×S×H.
7. The unmanned aerial vehicle target detection method based on the neural network according to any one of claims 1 to 6, wherein the neural network structure built in the off-line training stage is a single-stage detection neural network, and comprises an input layer, a backbone network, a connection network and a detection head network; wherein:
the input layer is used for receiving the data to be detected and comprises three branches: IN_1 corresponds to the three-dimensional time-domain matrix, IN_2 to the three-dimensional time-frequency matrix, and IN_3 to the three-dimensional IMF matrix; IN_2 is connected to the backbone network, while IN_1 and IN_3 are cross-domain connected to the detection head network;
the backbone network adopts a convolutional neural network and is used for extracting features from the input data;
the connection network adopts a feature pyramid network to carry out multi-scale fusion and multiplexing on the features extracted by the backbone network, and the detection head network is used for final feature learning and then outputting detection information and radio signal parameter information of an unmanned aerial vehicle target;
IN_1 and IN_3, which are directly connected to the detection head network, are downsampled by an L-layer convolutional neural network so that their feature map sizes match those arriving at the detection head from IN_2; they are then weighted and multiplied with the classification feature maps of the detection head network in a channel attention manner, so that the extracted time-domain features and the intrinsic mode features of the signal at different time scales are fused with the features extracted by the backbone network from the IN_2 input.
8. The unmanned aerial vehicle target detection method of any of claims 1-6, wherein the training with the neural network in the off-line training phase comprises:
sending the constructed training sample set, namely three feature matrix sets of signals, into a built neural network for training;
through fitting the actual data, inputting the constructed verification sample set into a neural network for verification after each training period is finished, and monitoring the accuracy of the network model in real time;
inputting the constructed test sample set into a neural network for testing after training is finished, and checking the performance of the network model;
and if the network model meets the expected requirement, obtaining a weight parameter, namely a detection model of the unmanned aerial vehicle target through conversion.
9. The unmanned aerial vehicle target detection method based on a neural network according to any one of claims 1 to 6, wherein the online detection phase comprises:
step S100: loading the unmanned aerial vehicle detection model obtained through training into the actual detection equipment, and obtaining all radio signals within the detection range;
step S200: preprocessing the intercepted radio signals to obtain three feature matrix sets of the signals;
step S300: and sending the three feature matrix sets into a detection model for operation reasoning, outputting whether the unmanned aerial vehicle exists or not, and outputting the information of each unmanned aerial vehicle if the unmanned aerial vehicle exists.
10. The unmanned aerial vehicle target detection method based on the neural network according to claim 9, wherein the step S200 comprises:
step S2001: intercepting, rearranging and multi-channel weighting the original time-domain signal data to obtain the three-dimensional time-domain matrix;
step S2002: performing fast/slow-time-dimension time-frequency conversion of the two-dimensional time-domain signal matrix using the FFT to obtain the three-dimensional time-frequency matrix;
step S2003: extracting modal features of the signal at different time scales from the two-dimensional time-domain signal matrix using empirical mode decomposition, finally obtaining the three-dimensional IMF matrix.
CN202211710125.4A 2022-12-29 2022-12-29 Unmanned aerial vehicle target detection method based on neural network Active CN115859056B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211710125.4A CN115859056B (en) 2022-12-29 2022-12-29 Unmanned aerial vehicle target detection method based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211710125.4A CN115859056B (en) 2022-12-29 2022-12-29 Unmanned aerial vehicle target detection method based on neural network

Publications (2)

Publication Number Publication Date
CN115859056A CN115859056A (en) 2023-03-28
CN115859056B true CN115859056B (en) 2023-09-15

Family

ID=85655998

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211710125.4A Active CN115859056B (en) 2022-12-29 2022-12-29 Unmanned aerial vehicle target detection method based on neural network

Country Status (1)

Country Link
CN (1) CN115859056B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10529241B2 (en) * 2017-01-23 2020-01-07 Digital Global Systems, Inc. Unmanned vehicle recognition and threat management

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009525A (en) * 2017-12-25 2018-05-08 北京航空航天大学 A kind of specific objective recognition methods over the ground of the unmanned plane based on convolutional neural networks
CN109614930A (en) * 2018-12-11 2019-04-12 湖南华诺星空电子技术有限公司 A kind of unmanned plane spectrum detection method based on deep learning
CN111239676A (en) * 2019-12-10 2020-06-05 重庆邮电大学 Unmanned aerial vehicle detection and direction finding method based on software radio
AU2021107497A4 (en) * 2021-08-25 2021-12-23 K. Rajesh Babu An ofdm channel estimation and signal detection method based on deep learning
CN114677419A (en) * 2022-04-19 2022-06-28 杭州电子科技大学 Radar Doppler signal low-slow small target detection method based on three-dimensional convolution network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Radar-based detection and identification for miniature air vehicles;Allistair Moses,et al;《2011 IEEE International Conference on Control Applications (CCA)》;933-940页 *
Research on flying small target recognition methods based on neural networks; 梁健涛; China Master's Theses Full-text Database, Engineering Science and Technology II; Vol. 2022, No. 03; C031-640 *

Also Published As

Publication number Publication date
CN115859056A (en) 2023-03-28

Similar Documents

Publication Publication Date Title
CN108846835B (en) Image change detection method based on depth separable convolutional network
CN108388927B (en) Small sample polarization SAR terrain classification method based on deep convolution twin network
CN105100789A (en) Method for evaluating video quality
CN104881865A (en) Forest disease and pest monitoring and early warning method and system based on unmanned plane image analysis
CN110427878B (en) Method and system for identifying rapid radio storm signals
CN111461037B (en) End-to-end gesture recognition method based on FMCW radar
CN103955926A (en) Method for remote sensing image change detection based on Semi-NMF
CN108562821B (en) Method and system for determining single-phase earth fault line selection of power distribution network based on Softmax
CN104408705A (en) Anomaly detection method of hyperspectral image
CN103065124B (en) A kind of cigarette detection method, device and fire detection device
CN111310700A (en) Intermediate frequency sampling sequence processing method for radiation source fingerprint feature identification
CN106772273A (en) A kind of SAR false targets disturbance restraining method and system based on dynamic aperture
CN111340831A (en) Point cloud edge detection method and device
CN115859056B (en) Unmanned aerial vehicle target detection method based on neural network
CN117076928A (en) Bridge health state monitoring method, device and system and electronic equipment
US10977772B1 (en) Unmanned aircraft system (UAS) detection and assessment via temporal intensity aliasing
CN116363434A (en) Mode identification and positioning method based on distributed optical fiber sensing
CN110751201A (en) SAR equipment task failure cause reasoning method based on textural feature transformation
CN112964938B (en) Lightning single-station positioning method, device and system based on artificial intelligence
US11507803B2 (en) System for generating synthetic digital data for data multiplication
CN115393693A (en) Sequential UWB-IR image vehicle target identification method based on ICRN
CN115085831A (en) Unmanned aerial vehicle remote control signal identification system and method based on mixed time-frequency analysis
CN105389794A (en) Synthetic aperture radar (SAR) target detection false alarm elimination method based on priori scene knowledge
CN111368823A (en) Pointer instrument reading identification method and device
Lu et al. A lockable abnormal electromagnetic signal joint detection algorithm

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Building B7, Lugu enterprise Plaza, 27 Wenxuan Road, high tech Zone, Changsha City, Hunan Province, 410205

Applicant after: Huanuo Xingkong Technology Co.,Ltd.

Address before: Building B7, Lugu enterprise Plaza, 27 Wenxuan Road, high tech Zone, Changsha City, Hunan Province, 410205

Applicant before: Hunan Huanuo Xingkong Electronic Technology Co.,Ltd.

Address after: Building B7, Lugu enterprise Plaza, 27 Wenxuan Road, high tech Zone, Changsha City, Hunan Province, 410205

Applicant after: Hunan Huanuo Xingkong Electronic Technology Co.,Ltd.

Address before: Building B7, Lugu enterprise Plaza, 27 Wenxuan Road, high tech Zone, Changsha City, Hunan Province, 410205

Applicant before: HUNAN NOVASKY ELECTRONIC TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information
GR01 Patent grant
GR01 Patent grant