CN116451118A - Deep learning-based radar photoelectric outlier detection method - Google Patents


Info

Publication number
CN116451118A
CN116451118A (Application CN202310422057.XA)
Authority
CN
China
Prior art keywords
feature matrix
matrix
constructing
deconvolution
convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310422057.XA
Other languages
Chinese (zh)
Other versions
CN116451118B (en)
Inventor
李洁
代睿
李宇航
汪文豪
何立火
高新波
路文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University
Priority to CN202310422057.XA
Publication of CN116451118A
Application granted
Publication of CN116451118B
Legal status: Active
Anticipated expiration

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S7/02 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S13/00
    • G01S7/40 Means for monitoring or calibrating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 Complex mathematical operations
    • G06F17/16 Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/0442 Recurrent networks, e.g. Hopfield networks characterised by memory or gating, e.g. long short-term memory [LSTM] or gated recurrent units [GRU]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/09 Supervised learning
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Computational Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Pure & Applied Mathematics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Mathematical Optimization (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Mathematical Analysis (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Remote Sensing (AREA)
  • Databases & Information Systems (AREA)
  • Algebra (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a radar photoelectric outlier detection method based on deep learning, which comprises the following steps: step 1, acquiring trace data collected by a radar and a photoelectric sensor, and constructing a training data set and a test data set; step 2, constructing a convolution feature-extraction module; step 3, constructing a convolutional encoder module; step 4, constructing an attention-based convolutional gated recurrent network module and extracting temporal features; step 5, reconstructing the feature matrix by deconvolution decoding to obtain the reconstructed feature matrix; step 6, constructing an error analysis module to obtain the error loss function; step 7, constructing a model training module; step 8, constructing a model verification module, selecting the model with the minimum loss function from the training in step 7, and verifying it on the test set. By extracting the temporal and spatial features of the trace data and constructing a multi-scale outlier detection model, the invention improves outlier detection accuracy.

Description

Deep learning-based radar photoelectric outlier detection method
Technical Field
The invention belongs to the technical field of radar photoelectric outlier detection, and particularly relates to a radar photoelectric outlier detection method based on deep learning.
Background
Photoelectric sensors and radar are used as two independent sensors when detecting a target. After space-time registration, both measure three key features of the same target: range, azimuth angle, and pitch angle. The photoelectric sensor is easily affected by weather and illumination, so outliers are likely when it measures target range. Radar ranging is relatively accurate, but outliers tend to occur when radar measures angles. Detecting an outlier in a given feature of a given sensor at a given moment is therefore of great significance for feature selection between photoelectric and radar measurements and for sensor fusion when localizing a target.
Existing sensor outlier detection algorithms fall mainly into traditional methods and deep-learning-based methods. Traditional methods, such as principal component analysis and clustering, require neither annotated data nor training and can be applied directly to anomaly detection. However, because their parameters are relatively fixed and the models lack adaptability, they can capture neither semantic features nor the nonlinear relations between features, so their detection accuracy is low. In recent years, many deep-learning outlier detection methods for time series have been proposed. Early approaches mostly target a single sensor and cannot extract features across different sensors, while supervised deep-learning methods depend on labeled data and are therefore difficult to apply to large-scale unlabeled data. The data detected by radar and photoelectric sensors are unlabeled time series with a large volume and a low proportion of outliers, and anomalies of different degrees must be considered. Existing methods cannot fully address these problems, and accurate detection results are difficult to obtain.
The patent application with publication number CN113538974A, entitled "Multi-source data fusion-based flight target abnormality detection method", discloses a flight target abnormality detection method based on multi-source data fusion, which addresses the low accuracy and reliability of detection methods that rely solely on ADS-B data. Its disadvantage is that constructing the training data set requires extensive labeling, consuming a great deal of manpower and material resources. In addition, the difficulty of the radar acquiring the target cannot be determined in advance: accuracy increases with the number of position points the radar acquires, but a larger number of points implies a longer time interval, which is unfavorable for anomaly detection.
The patent application with publication number CN115510998A, entitled "Transaction outlier detection method and device", discloses a self-encoder model based on a long short-term memory network: the user transaction data to be detected are input into the trained LSTM self-encoder model to obtain the corresponding reconstructed time series, and whether the data are abnormal is determined from the reconstructed time series and an error threshold. Its disadvantage is that it considers only a single data source and cannot exploit or fuse multi-source data features.
Most existing deep-learning outlier detection methods are supervised, depend on manually annotated data sets, and cannot be applied directly to unlabeled data. In addition, existing methods often consider a single sensor and cannot capture features across multiple sensors, and in feature extraction they often consider only temporal features while ignoring spatial features.
Disclosure of Invention
In order to overcome the problems in the prior art, the invention aims to provide a radar photoelectric outlier detection method based on deep learning, which constructs a multi-scale outlier detection model by extracting the temporal and spatial features of the trace data, so as to improve outlier detection accuracy.
In order to achieve the above purpose, the technical scheme adopted by the invention is as follows:
A radar photoelectric outlier detection method based on deep learning comprises the following steps:
step 1, acquiring trace data collected by the radar and the photoelectric sensor, and constructing a training data set and a test data set;
step 2, constructing a convolution feature-extraction module: sampling the trace data at time interval g and, at the i-th sampling instant t_i, constructing the convolution-extracted feature matrix;
step 3, constructing a convolutional encoder module: extracting spatial features from the feature matrix by convolution to obtain the spatial feature matrix F_2;
step 4, constructing an attention-based convolutional gated recurrent network module and extracting temporal features;
step 5, reconstructing the feature matrix by deconvolution decoding: performing deconvolution decoding with the spatial feature matrix F_2 obtained in step 3 and the hidden-layer states of step 4 to obtain the reconstructed feature matrix;
step 6, constructing an error analysis module: calculating, over the three window scales, the error between the reconstructed feature matrix and the convolution-extracted feature matrix to obtain the error loss function;
step 7, constructing a model training module: setting the learning rate and other parameters and optimizing the loss function with an Adam optimizer; the training data are unlabeled but must contain no abnormal points to ensure the validity of the model;
step 8, constructing a model verification module: selecting the model with the minimum loss function from the training in step 7 and verifying it on the test set.
The data set in step 1 is constructed from detection data of the radar and the photoelectric sensor for the same target at the same moment; six groups of trace data (radar and photoelectric range, pitch angle, and azimuth angle at the same moment) are selected as raw data to construct the training and test sets, wherein the training set only needs to be free of abnormal values and no data labeling is required.
The step 2 specifically includes:
step 2.1, sampling each raw sensor sequence X^(u), u = 0, 1, ..., 5, at interval g, where X^(u) denotes the six sequences of range, azimuth angle, and pitch angle acquired by the radar and photoelectric sensors;
step 2.2, at the i-th sampling instant t_i, constructing a feature matrix of size 6×3 and building the multi-scale feature matrix: at t_i, the feature matrix at each scale is computed; with multi-scale window lengths W = {10, 30, 50}, the element in the i-th row and n-th column of the matrix corresponding to window length W_i is computed from the windowed values x^(u)_{t_i−k}, where x^(u)_{t_i−k} is the value of sensor u at time t_i − k.
The step 3 specifically comprises the following steps:
step 3.1: matrix the featuresDimension lifting is carried out through convolution to obtain a space feature matrix F 0
Step 3.2: using full convolution for F 0 Extracting the spatial features of the matrix to obtain a spatial feature matrix F 1
Step 3.3: using full convolution for F 1 Extracting the spatial features of the matrix to obtain a spatial feature matrix F 2
The step 4 specifically comprises the following steps:
step 4.1, taking the spatial features F_0, F_1, and F_2 obtained by convolutional encoding as inputs, first obtaining the output of the reset gate from the input and the hidden-layer state: R_t = σ(W_xr * X_t + W_hr * H_{t-1}), where σ is the sigmoid function, W_xr and W_hr are weight parameters to be learned by the network, X_t is the input, and H_{t-1} is the hidden-layer output at the previous moment;
step 4.2, obtaining the update-gate result from the input and hidden-layer state: Z_t = σ(W_xz * X_t + W_hz * H_{t-1}), and the candidate state H′_t = f(W_xh * X_t + R_t ∘ (W_hh * H_{t-1})), where W_xz, W_hh, and W_xh are parameters to be learned by the network;
step 4.3, selecting the states of the first three time steps to capture long-term dependence and assigning them different weights through the attention mechanism, yielding attention weights α_i;
step 4.4, multiplying the attention weights α_i with the hidden-layer states to obtain the temporal feature of the t-th time step of the corresponding layer;
step 4.5, repeating steps 4.1-4.4 with the spatial features F_0, F_1, and F_2 as inputs to obtain the corresponding temporal features.
The step 5 specifically comprises the following steps:
step 5.1, deconvolving the temporal feature obtained in step 4 to obtain the deconvolution feature matrix F_3;
step 5.2, splicing the deconvolution feature matrix F_3 with the temporal feature of the corresponding scale in the third dimension to obtain the deconvolution feature matrix F_4;
step 5.3, deconvolving F_4 to obtain the deconvolution feature matrix F_5;
step 5.4, splicing F_5 with the temporal feature of the corresponding scale in the third dimension to obtain the deconvolution feature matrix F_6;
step 5.5, deconvolving F_6 to obtain the deconvolution feature matrix F_7, i.e., the reconstructed feature matrix.
The step 6 specifically comprises the following steps:
step 6.1, defining the mean-square-error loss for window length W_0 as Loss_0 = Σ_i Σ_j (m_{i,j,0} − m̂_{i,j,0})², which compares the feature matrix before and after reconstruction, where m_{i,j,0} is the element of the convolution-extracted feature matrix at index (i, j, 0) and m̂_{i,j,0} is the element of the reconstructed feature matrix at index (i, j, 0);
step 6.2, defining the loss for window length W_1 as Loss_1 = Σ_i Σ_j (m_{i,j,1} − m̂_{i,j,1})², where m_{i,j,1} is the element of the original feature matrix at index (i, j, 1) and m̂_{i,j,1} is the element of the reconstructed feature matrix at index (i, j, 1);
step 6.3, defining the loss for window length W_2 as Loss_2 = Σ_i Σ_j (m_{i,j,2} − m̂_{i,j,2})², where m_{i,j,2} is the element of the original feature matrix at index (i, j, 2) and m̂_{i,j,2} is the element of the reconstructed feature matrix at index (i, j, 2);
step 6.4, defining the total multi-scale error loss of the model as Loss = Loss_0 + Loss_1 + Loss_2.
The step 8 specifically comprises the following steps:
step 8.1, constructing the reconstruction error matrix: each element m̂ of the reconstructed feature matrix output by the model is subtracted from the corresponding element m of the original feature matrix and the difference is squared, giving the reconstruction error matrix E; the element of E at index (i, j, k) is computed as e_{i,j,k} = (m_{i,j,k} − m̂_{i,j,k})²;
step 8.2, counting the number of elements of the reconstruction error matrix exceeding a threshold T, assigning 1 to elements exceeding T and 0 to the rest, giving the outlier scoring matrix S;
step 8.3, judging whether the point is an outlier by counting whether the number of elements equal to 1 in the outlier scoring matrix S exceeds a threshold G obtained on the training set, where n is the number of sensors and s_{i,j,k} is the element of S at index (i, j, k).
The invention has the beneficial effects that:
the method extracts spatial features by convolutional coding of radar photoelectric sensor data, extracts time features by a convolutional-gating cyclic neural network module, and finally reconstructs and calculates an error scoring matrix. An outlier detection model with a multi-scale and multi-sensor is constructed. By utilizing the time-space information of the multi-scale multi-sensor, the time characteristics and the space characteristics between the sensors and inside the sensors are utilized more accurately, the abnormal value of the sensor when detecting the target can be detected accurately, and higher abnormal value detection precision is realized.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples.
As shown in fig. 1, the invention is implemented with a deep-learning-based radar photoelectric outlier detection model comprising a feature matrix construction module, a convolutional encoder module, a convolutional gated recurrent network module, a deconvolution decoder module, and a final outlier detection module.
The feature matrix construction module completes the sampling of sensor data and the construction of the original feature matrix. The convolutional encoder module performs the spatial feature extraction of the data, and the convolutional gated recurrent network module performs the temporal feature extraction using an attention mechanism. Deconvolution decoding fuses the temporal and spatial features and completes the reconstruction of the feature matrix. The anomaly detection module detects the outliers in the data.
Step 1, acquiring trace data collected by the radar and the photoelectric sensor, and constructing a training data set and a test data set:
In this example, the data set is constructed from detection data of the radar and the photoelectric sensor for the same target at the same moment. The sensor sampling interval is 5 ms, and a single scan yields 50,000 time points. Six groups of trace data (radar and photoelectric range, pitch angle, and azimuth angle at the same moment) are selected as the raw data and split in a ratio of 2:3 to construct the training and test sets. Considering the characteristics of the network, the training set only needs to be free of abnormal values, and no data labeling is required.
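To make the data preparation concrete, the following minimal Python sketch stacks the six trace sequences and splits them by the stated ratio. The function name is illustrative, and reading 2:3 as train:test is an assumption, since the source gives only the ratio.

```python
import numpy as np

def make_datasets(traces, train_frac=2 / 5):
    """Step-1 sketch: `traces` is a list of six equal-length 1-D arrays
    (radar/photoelectric range, azimuth, pitch). The 2:3 train:test
    reading is an assumption; training data must be outlier-free but
    needs no labels."""
    X = np.stack(traces)                 # shape (6, 50000) for one scan
    cut = int(X.shape[1] * train_frac)   # 2:3 split point
    return X[:, :cut], X[:, cut:]        # training set, test set
```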
Step 2, constructing the feature matrix module: sampling the raw data at time interval g and constructing, at the i-th sampling instant t_i, a feature matrix of size 6×3.
Step 2.1, sampling each raw sensor sequence X^(u), u = 0, 1, ..., 5, at interval g, where X^(u) denotes the six sequences of range, azimuth angle, and pitch angle acquired by the radar and photoelectric sensors.
Step 2.2, at the i-th sampling instant t_i, constructing a feature matrix of size 6×3 and building the multi-scale feature matrix: at t_i, the feature matrix at each scale is computed; with multi-scale window lengths W = {10, 30, 50}, the element in the i-th row and n-th column of the matrix corresponding to window length W_i is computed from the windowed values x^(u)_{t_i−k}, where x^(u)_{t_i−k} is the value of sensor u at time t_i − k.
Step 3, constructing a convolutional encoder module and extracting the spatial features of the feature matrix by convolution to obtain the spatial features F:
step 3.1: matrix the featuresDimension lifting is carried out through convolution, the convolution kernel size is 1 multiplied by 1, the step length is 1 multiplied by 1, the output channel number is 16, and a space feature matrix F is obtained 0
Step 3.2: using full convolution for F 0 The spatial features of the (2) are extracted, the convolution kernel is 2 multiplied by 2, the step length is 2 multiplied by 2, the output channel number is 32, and a spatial feature matrix F is obtained 1
Step 3.3: using full convolution for F 1 The spatial features of the (2) are extracted, the convolution kernel is 2 multiplied by 2, the step length is 2 multiplied by 2, the output channel number is 64, and a spatial feature matrix F is obtained 2
And 4, constructing a convolution-gating cyclic neural network module based on an attention mechanism, and extracting time characteristics.
The GRU is a recurrent neural network whose cell structure simplifies that of the LSTM, reducing parameters and speeding up training. Conv-GRU replaces the fully connected layers of the GRU with convolution kernels, converting full connections into local connections. Meanwhile, because the states of previous hidden layers differ in importance, an attention mechanism assigns them different weights when the state is updated.
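A minimal ConvGRU cell sketch following the gate equations of steps 4.1-4.2 below. The 3×3 kernels, the tanh candidate activation (the source only names it f), and the final state update (the standard GRU convention) are assumptions, since some of these formulas are rendered only as images in the source; the attention weighting of steps 4.3-4.4 is omitted because its formula is not reproduced in the text.

```python
import torch
import torch.nn as nn

# Gate equations (with "*" a convolution and "o" the elementwise product):
#   R_t  = sigmoid(W_xr * X_t + W_hr * H_{t-1})        reset gate (4.1)
#   Z_t  = sigmoid(W_xz * X_t + W_hz * H_{t-1})        update gate (4.2)
#   H'_t = tanh(W_xh * X_t + R_t o (W_hh * H_{t-1}))   candidate state (4.2)
class ConvGRUCell(nn.Module):
    def __init__(self, ch, k=3):
        super().__init__()
        p = k // 2  # same-size padding; kernel size is an assumption
        self.w_xr = nn.Conv2d(ch, ch, k, padding=p, bias=False)
        self.w_hr = nn.Conv2d(ch, ch, k, padding=p, bias=False)
        self.w_xz = nn.Conv2d(ch, ch, k, padding=p, bias=False)
        self.w_hz = nn.Conv2d(ch, ch, k, padding=p, bias=False)
        self.w_xh = nn.Conv2d(ch, ch, k, padding=p, bias=False)
        self.w_hh = nn.Conv2d(ch, ch, k, padding=p, bias=False)

    def forward(self, x, h):
        r = torch.sigmoid(self.w_xr(x) + self.w_hr(h))        # reset gate
        z = torch.sigmoid(self.w_xz(x) + self.w_hz(h))        # update gate
        h_cand = torch.tanh(self.w_xh(x) + r * self.w_hh(h))  # candidate state
        return z * h + (1 - z) * h_cand                       # assumed standard update

cell = ConvGRUCell(ch=16)
h = torch.zeros(1, 16, 12, 12)
h = cell(torch.randn(1, 16, 12, 12), h)  # one recurrent step on F0-sized input
```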
Step 4.1, taking the spatial features F_0, F_1, and F_2 obtained by convolutional encoding as the respective inputs of the Conv-GRU network, the output of the reset gate is first obtained from the input and the hidden-layer state: R_t = σ(W_xr * X_t + W_hr * H_{t-1}), where σ is the sigmoid function, W_xr and W_hr are weight parameters to be learned by the network, X_t is the input, and H_{t-1} is the hidden-layer output at the previous moment.
Step 4.2, the update-gate result is obtained from the input and hidden-layer state: Z_t = σ(W_xz * X_t + W_hz * H_{t-1}), and the candidate state is H′_t = f(W_xh * X_t + R_t ∘ (W_hh * H_{t-1})), where W_xz, W_hh, and W_xh are parameters to be learned by the network.
Step 4.3, the states of the first three time steps are selected to capture long-term dependence, and different weights are assigned through the attention mechanism, yielding attention weights α_i.
Step 4.4, the attention weights α_i are multiplied with the hidden-layer states to obtain the temporal feature of the t-th time step of the corresponding layer.
Step 4.5 Using spatial characteristics F 0 ,F 1 And F 2 Repeating steps 4.1-4.4 as input to obtain corresponding time characteristicsAnd 5, deconvolution decoding to reconstruct the feature matrix. Deconvolution decoding is performed using the convolutional encoding result in step 3 and the hidden layer state of step 4:
Step 5.1, deconvolving the temporal feature with kernel size 2×2, stride 2×2, and 32 output channels to obtain the deconvolution feature matrix F_3.
Step 5.2, splicing the deconvolution feature matrix F_3 with the temporal feature of the corresponding scale in the third dimension to obtain the deconvolution feature matrix F_4.
Step 5.3, deconvolving F_4 with kernel size 2×2, stride 2×2, and 16 output channels to obtain the deconvolution feature matrix F_5.
Step 5.4, splicing F_5 with the temporal feature of the corresponding scale in the third dimension to obtain the deconvolution feature matrix F_6.
Step 5.5, deconvolving F_6 with kernel size 2×2, stride 1×1, and 3 output channels to obtain the deconvolution feature matrix F_7, i.e., the reconstructed feature matrix.
Step 6, constructing an error analysis module and calculating the error between the reconstructed feature matrix and the original feature matrix over the three window scales.
Step 6.1, defining the mean-square-error loss for window length W_0 as Loss_0 = Σ_i Σ_j (m_{i,j,0} − m̂_{i,j,0})², which compares the feature matrix before and after reconstruction, where m_{i,j,0} is the element of the original feature matrix at index (i, j, 0) and m̂_{i,j,0} is the element of the reconstructed feature matrix at index (i, j, 0).
Step 6.2, defining the loss for window length W_1 as Loss_1 = Σ_i Σ_j (m_{i,j,1} − m̂_{i,j,1})², where m_{i,j,1} is the element of the original feature matrix at index (i, j, 1) and m̂_{i,j,1} is the element of the reconstructed feature matrix at index (i, j, 1).
Step 6.3, defining the loss for window length W_2 as Loss_2 = Σ_i Σ_j (m_{i,j,2} − m̂_{i,j,2})², where m_{i,j,2} is the element of the original feature matrix at index (i, j, 2) and m̂_{i,j,2} is the element of the reconstructed feature matrix at index (i, j, 2).
Step 6.4, defining the total multi-scale error loss of the model as Loss = Loss_0 + Loss_1 + Loss_2.
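A short sketch of the step-6 multi-scale error follows. Treating the window index k = 0, 1, 2 as the channel axis of the tensors is an assumed layout, and each Loss_k is taken here as a mean squared error over its slice.

```python
import torch

def multiscale_loss(m: torch.Tensor, m_hat: torch.Tensor) -> torch.Tensor:
    """Step-6.4 sketch: Loss = Loss_0 + Loss_1 + Loss_2, with Loss_k the
    MSE between original and reconstructed slices at window index k.
    m, m_hat: (N, 3, H, W) tensors; the channel-axis layout is assumed."""
    return sum(((m[:, k] - m_hat[:, k]) ** 2).mean() for k in range(3))
```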
Step 7, constructing a model training module. Parameters such as the learning rate are set, the model is trained with the Adam optimizer, and the loss function is defined as in step 6.4. The data used to train the model are unlabeled, but must contain no abnormal points to ensure the validity of the model.
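A training-loop sketch for step 7, assuming a `model` that maps a feature-matrix batch to its reconstruction and a `loader` that yields such batches; the learning rate and epoch count are assumed values, and keeping the minimum-loss snapshot anticipates the model selection of step 8.

```python
import torch

def train(model, loader, epochs=50, lr=1e-3):
    """Step-7 sketch: Adam on the multi-scale reconstruction loss over
    unlabeled, outlier-free data. epochs and lr are assumed values."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    best_loss, best_state = float("inf"), None
    for _ in range(epochs):
        epoch_loss = 0.0
        for m in loader:
            m_hat = model(m)
            loss = sum(((m[:, k] - m_hat[:, k]) ** 2).mean() for k in range(3))
            opt.zero_grad()
            loss.backward()
            opt.step()
            epoch_loss += loss.item()
        if epoch_loss < best_loss:  # step 8 selects the minimum-loss model
            best_loss = epoch_loss
            best_state = {k: v.detach().clone() for k, v in model.state_dict().items()}
    return best_state
```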
Step 8, constructing a model verification module: selecting the model with the minimum loss function from the training in step 7 and verifying it on the test set.
Step 8.1, constructing the reconstruction error matrix: each element m̂ of the reconstructed feature matrix output by the model is subtracted from the corresponding element m of the original feature matrix and the difference is squared, giving the reconstruction error matrix E; the element of E at index (i, j, k) is computed as e_{i,j,k} = (m_{i,j,k} − m̂_{i,j,k})².
Step 8.2, counting the number of elements of the reconstruction error matrix exceeding a threshold T (set to 0.05 in this example), assigning 1 to elements exceeding T and 0 to the rest, giving the outlier scoring matrix S.
Step 8.3, judging whether the point is an outlier by counting whether the number of elements equal to 1 in the outlier scoring matrix S exceeds a threshold G obtained on the training set, where n is the number of sensors and s_{i,j,k} is the element of S at index (i, j, k).
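A sketch of the step-8 decision follows. T = 0.05 matches the example above, while the value of G and the exact counting formula are not given in the source, so the placeholder G and the whole-matrix count below are assumptions.

```python
import numpy as np

def is_outlier(m, m_hat, T=0.05, G=3):
    """Step-8 sketch. m, m_hat: arrays of the original and reconstructed
    feature matrices. G=3 is a placeholder; in the method, G is learned
    on the training set."""
    E = (m - m_hat) ** 2      # 8.1: e_{i,j,k} = (m_{i,j,k} - m_hat_{i,j,k})^2
    S = (E > T).astype(int)   # 8.2: outlier scoring matrix
    return S.sum() > G, S     # 8.3: assumed count over the whole matrix
```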
Step 9, comparison experiments on the simulation platform.
The simulation experiment platform of the invention is: an Intel(R) Core(TM) i9-12900KF CPU with a main frequency of 3.1 GHz, 64 GB of memory, and an NVIDIA GeForce RTX 3090 graphics card. Two comparison methods were evaluated on a self-built radar photoelectric data set: the classical autoregressive integrated moving average (ARIMA) model and a deep-learning long short-term memory encoder-decoder network (LSTM-ED). The evaluation metrics are precision, recall, and F1 score. The results show that the method of the invention outperforms the existing methods in all respects.
The invention provides a radar photoelectric outlier detection method based on deep learning. By constructing a feature matrix and, on an encoder-decoder structure, extracting temporal features with an attention-based convolutional gated recurrent network, it addresses the problems that most existing methods are supervised and hard to apply to unlabeled data sets, do not consider multi-sensor multi-scale feature extraction, and cannot combine the temporal and spatial features of the data. It finally detects abnormal features of the radar and photoelectric sensors at a given moment, providing a reference for subsequent feature selection or fusion between the photoelectric sensor and the radar.

Claims (8)

1. A radar photoelectric outlier detection method based on deep learning, characterized by comprising the following steps:
step 1, acquiring trace data collected by the radar and the photoelectric sensor, and constructing a training data set and a test data set;
step 2, constructing a convolution feature-extraction module: sampling the trace data at time interval g and, at the i-th sampling instant t_i, constructing the convolution-extracted feature matrix;
step 3, constructing a convolutional encoder module: extracting spatial features from the feature matrix by convolution to obtain the spatial feature matrix F_2;
step 4, constructing an attention-based convolutional gated recurrent network module and extracting temporal features;
step 5, reconstructing the feature matrix by deconvolution decoding: performing deconvolution decoding with the spatial feature matrix F_2 obtained in step 3 and the hidden-layer states of step 4 to obtain the reconstructed feature matrix;
step 6, constructing an error analysis module: calculating, over the three window scales, the error between the reconstructed feature matrix and the convolution-extracted feature matrix to obtain the error loss function;
step 7, constructing a model training module: setting the learning rate and other parameters and optimizing the loss function with an Adam optimizer;
step 8, constructing a model verification module: selecting the model with the minimum loss function from the training in step 7 and verifying it on the test set.
2. The deep-learning-based radar photoelectric outlier detection method according to claim 1, wherein the data set in step 1 is constructed from detection data of the radar and the photoelectric sensor for the same target at the same moment; six groups of trace data (radar and photoelectric range, pitch angle, and azimuth angle at the same moment) are selected as raw data to construct the training and test sets, wherein the training set only needs to be free of abnormal values and no data labeling is required.
3. The deep-learning-based radar photoelectric outlier detection method according to claim 1, wherein step 2 specifically comprises:
step 2.1, sampling each raw sensor sequence X^(u), u = 0, 1, ..., 5, at interval g, where X^(u) denotes the six sequences of range, azimuth angle, and pitch angle acquired by the radar and photoelectric sensors;
step 2.2, at the i-th sampling instant t_i, constructing a feature matrix of size 6×3 and building the multi-scale feature matrix: at t_i, the feature matrix at each scale is computed; with multi-scale window lengths W = {10, 30, 50}, the element in the i-th row and n-th column of the matrix corresponding to window length W_i is computed from the windowed values x^(u)_{t_i−k}, where x^(u)_{t_i−k} is the value corresponding to sensor u at time t_i − k.
4. The deep-learning-based radar photoelectric outlier detection method according to claim 1, wherein step 3 specifically comprises:
step 3.1: lifting the dimension of the convolution-extracted feature matrix by convolution to obtain the spatial feature matrix F_0;
step 3.2: extracting spatial features from F_0 by full convolution to obtain the spatial feature matrix F_1;
step 3.3: extracting spatial features from F_1 by full convolution to obtain the spatial feature matrix F_2.
5. The deep-learning-based radar photoelectric outlier detection method according to claim 4, wherein step 4 specifically comprises:
step 4.1, taking the spatial features F_0, F_1, and F_2 obtained by convolutional encoding as inputs, first obtaining the output of the reset gate from the input and the hidden-layer state: R_t = σ(W_xr * X_t + W_hr * H_{t-1}), where σ is the sigmoid function, W_xr and W_hr are weight parameters to be learned by the network, X_t is the input, and H_{t-1} is the hidden-layer output at the previous moment;
step 4.2, obtaining the update-gate result from the input and hidden-layer state: Z_t = σ(W_xz * X_t + W_hz * H_{t-1}), and the candidate state H′_t = f(W_xh * X_t + R_t ∘ (W_hh * H_{t-1})), where W_xz, W_hh, and W_xh are parameters to be learned by the network;
step 4.3, selecting the states of the first three time steps to capture long-term dependence and assigning them different weights through the attention mechanism, yielding attention weights α_i;
step 4.4, multiplying the attention weights α_i with the hidden-layer states to obtain the temporal feature of the t-th time step of the corresponding layer;
step 4.5, repeating steps 4.1-4.4 with the spatial features F_0, F_1, and F_2 as inputs to obtain the corresponding temporal features.
6. The deep-learning-based radar photoelectric outlier detection method according to claim 5, wherein step 5 specifically comprises:
step 5.1, deconvolving the temporal feature obtained in step 4 to obtain the deconvolution feature matrix F_3;
step 5.2, splicing the deconvolution feature matrix F_3 with the temporal feature of the corresponding scale in the third dimension to obtain the deconvolution feature matrix F_4;
step 5.3, deconvolving F_4 to obtain the deconvolution feature matrix F_5;
step 5.4, splicing F_5 with the temporal feature of the corresponding scale in the third dimension to obtain the deconvolution feature matrix F_6;
step 5.5, deconvolving F_6 to obtain the deconvolution feature matrix F_7, i.e., the reconstructed feature matrix.
7. The deep-learning-based radar photoelectric outlier detection method according to claim 6, wherein step 6 specifically comprises:
step 6.1, defining the mean-square-error loss for window length W_0 as Loss_0 = Σ_i Σ_j (m_{i,j,0} − m̂_{i,j,0})², which compares the feature matrix before and after reconstruction, where m_{i,j,0} is the element of the convolution-extracted feature matrix at index (i, j, 0) and m̂_{i,j,0} is the element of the reconstructed feature matrix at index (i, j, 0);
step 6.2, defining the loss for window length W_1 as Loss_1 = Σ_i Σ_j (m_{i,j,1} − m̂_{i,j,1})², where m_{i,j,1} is the element of the original feature matrix at index (i, j, 1) and m̂_{i,j,1} is the element of the reconstructed feature matrix at index (i, j, 1);
step 6.3, defining the loss for window length W_2 as Loss_2 = Σ_i Σ_j (m_{i,j,2} − m̂_{i,j,2})², where m_{i,j,2} is the element of the original feature matrix at index (i, j, 2) and m̂_{i,j,2} is the element of the reconstructed feature matrix at index (i, j, 2);
step 6.4, defining the total multi-scale error loss of the model as Loss = Loss_0 + Loss_1 + Loss_2.
8. The deep-learning-based radar photoelectric outlier detection method according to claim 7, wherein step 8 specifically comprises:
step 8.1, constructing the reconstruction error matrix: each element m̂ of the reconstructed feature matrix output by the model is subtracted from the corresponding element m of the original feature matrix and the difference is squared, giving the reconstruction error matrix E; the element of E at index (i, j, k) is computed as e_{i,j,k} = (m_{i,j,k} − m̂_{i,j,k})²;
step 8.2, counting the number of elements of the reconstruction error matrix exceeding a threshold T, assigning 1 to elements exceeding T and 0 to the rest, giving the outlier scoring matrix S;
step 8.3, judging whether the point is an outlier by counting whether the number of elements equal to 1 in the outlier scoring matrix S exceeds a threshold G obtained on the training set, where n is the number of sensors and s_{i,j,k} is the element of S at index (i, j, k).
CN202310422057.XA 2023-04-19 2023-04-19 Deep learning-based radar photoelectric outlier detection method Active CN116451118B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310422057.XA CN116451118B (en) 2023-04-19 2023-04-19 Deep learning-based radar photoelectric outlier detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310422057.XA CN116451118B (en) 2023-04-19 2023-04-19 Deep learning-based radar photoelectric outlier detection method

Publications (2)

Publication Number Publication Date
CN116451118A true CN116451118A (en) 2023-07-18
CN116451118B CN116451118B (en) 2024-01-30

Family

ID=87121629

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310422057.XA Active CN116451118B (en) 2023-04-19 2023-04-19 Deep learning-based radar photoelectric outlier detection method

Country Status (1)

Country Link
CN (1) CN116451118B (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013090910A2 (en) * 2011-12-15 2013-06-20 Northeastern University Real-time anomaly detection of crowd behavior using multi-sensor information
CN110442600A (en) * 2019-04-17 2019-11-12 江苏网谱数据服务有限公司 A kind of time series method for detecting abnormality
US20220230731A1 (en) * 2019-05-30 2022-07-21 Acerar Ltd. System and method for cognitive training and monitoring
CN111726351A (en) * 2020-06-16 2020-09-29 桂林电子科技大学 Bagging-improved GRU parallel network flow abnormity detection method
CN112532439A (en) * 2020-11-24 2021-03-19 山东科技大学 Network flow prediction method based on attention multi-component space-time cross-domain neural network model
CN113219493A (en) * 2021-04-26 2021-08-06 中山大学 End-to-end point cloud data compression method based on three-dimensional laser radar sensor
CN113327022A (en) * 2021-05-18 2021-08-31 重庆莱霆防雷技术有限责任公司 Lightning protection safety risk management system and method
CN113255835A (en) * 2021-06-28 2021-08-13 国能大渡河大数据服务有限公司 Hydropower station pump equipment anomaly detection method
CN113705424A (en) * 2021-08-25 2021-11-26 浙江工业大学 Performance equipment fault diagnosis model construction method based on time convolution noise reduction network
CN115293280A (en) * 2022-08-17 2022-11-04 西安交通大学 Power equipment system anomaly detection method based on space-time feature segmentation reconstruction

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
SHI Yong et al., "Robust deep auto-encoding network for real-time anomaly detection at nuclear power plants", Process Safety and Environmental Protection, vol. 163, pp. 438-452.
SHUN Gan et al., "Multisource Adaption for Driver Attention Prediction in Arbitrary Driving Scenes", IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 11, pp. 20912-20925, XP011926602, DOI: 10.1109/TITS.2022.3177640.
YIFENG Tan et al., "Multivariate Time-Series Anomaly Detection in IoT Using Attention-Based Gated Recurrent Unit", 2022 14th International Conference on Wireless Communications and Signal Processing (WCSP), pp. 604-609.
ZETONG Yang et al., "3D-MAN: 3D Multi-frame Attention Network for Object Detection", 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1863-1872.
PENG Yang, "Research on a parallel network traffic anomaly detection method based on the GRU deep neural network", no. 02, pp. 139-56.
LIU Yueqiang et al., "Multi-sensor data anomaly detection based on spatio-temporal correlation", Computer Applications and Software, vol. 37, no. 10, pp. 85-90.
LUO Junhai, "A review of the development and application of UAV detection and countermeasure technology", Control and Decision, vol. 37, no. 3, pp. 530-544.
ZHENG Yujing et al., "Unsupervised multivariate time series anomaly detection based on GRU-Attention", Journal of Shanxi University (Natural Science Edition), vol. 43, no. 04, pp. 756-764.

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117150407A (en) * 2023-09-04 2023-12-01 国网上海市电力公司 Abnormality detection method for industrial carbon emission data

Also Published As

Publication number Publication date
CN116451118B (en) 2024-01-30

Similar Documents

Publication Publication Date Title
Zhang et al. Constructing a PM2.5 concentration prediction model by combining auto-encoder with Bi-LSTM neural networks
CN110020623B (en) Human body activity recognition system and method based on conditional variation self-encoder
CN110070074B (en) Method for constructing pedestrian detection model
CN113312447B (en) Semi-supervised log anomaly detection method based on probability label estimation
CN116451118B (en) Deep learning-based radar photoelectric outlier detection method
CN108197743A (en) A kind of prediction model flexible measurement method based on deep learning
CN115688035A (en) Time sequence power data anomaly detection method based on self-supervision learning
CN111122162B (en) Industrial system fault detection method based on Euclidean distance multi-scale fuzzy sample entropy
US11816556B1 (en) Method for predicting air quality index (AQI) based on a fusion model
CN115220133B (en) Rainfall prediction method, device and equipment for multiple meteorological elements and storage medium
CN113642255A (en) Photovoltaic power generation power prediction method based on multi-scale convolution cyclic neural network
CN116204770B (en) Training method and device for detecting abnormality of bridge health monitoring data
CN117056874B (en) Unsupervised electricity larceny detection method based on deep twin autoregressive network
CN115169430A (en) Cloud network end resource multidimensional time sequence anomaly detection method based on multi-scale decoding
CN111343147A (en) Network attack detection device and method based on deep learning
CN116503354A (en) Method and device for detecting and evaluating hot spots of photovoltaic cells based on multi-mode fusion
CN116912660A (en) Hierarchical gating cross-Transformer infrared weak and small target detection method
CN113095386B (en) Gesture recognition method and system based on triaxial acceleration space-time feature fusion
CN114819260A (en) Dynamic generation method of hydrologic time series prediction model
CN114973019A (en) Deep learning-based geospatial information change detection classification method and system
Ye et al. A novel self-supervised learning-based anomalous node detection method based on an autoencoder for wireless sensor networks
CN114065335A (en) Building energy consumption prediction method based on multi-scale convolution cyclic neural network
CN112926016A (en) Multivariable time series change point detection method
CN117041972A (en) Channel-space-time attention self-coding based anomaly detection method for vehicle networking sensor
CN117520664A (en) Public opinion detection method and system based on graphic neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant